Why Robots (Not Human Moderators) Should Be Filtering Out Child Pornography

Depression, anxiety, hallucinations, and panic attacks - these are just a few of the PTSD-like symptoms that many content moderators experience as a result of their work. In December 2016, two Microsoft content moderators, Henry Soto and Greg Blauert, filed a lawsuit for "negligent infliction of emotional distress" after being required to filter photos and videos of child pornography, murder, child abuse, and more. While helping to keep others safe online, Soto and Blauert in turn suffered deep psychological distress, saying they are now 'triggered' by seeing children.

They claimed they were not warned about the psychological impact of the job and were not allowed to turn it down. They were offered access to (allegedly insufficient) counselling services and were encouraged to take 'smoke breaks' and play video games to help ease the stress. These measures were not enough. How could they be? It is only human to find content such as child abuse and child pornography abhorrent and offensive. In fact, you couldn't do a content moderation role if you didn't recognise that these things were wrong. But in a world where we use technology to filter so many other parts of our lives, why do we continue to rely on people to filter explicit content?

The case of Soto and Blauert may seem like a drop in the ocean, but there are undoubtedly many more like them.

In 2015 and 2016 there were 11,992 child sexual abuse images recorded in England alone - a figure up 64% on previous years. Social platforms such as Facebook and Instagram have a responsibility to remove explicit or violent content; however, there are not enough moderators to filter it all, and not enough is being done. In 2016 the Internet Watch Foundation (IWF) found that 35% of child sexual abuse imagery takes more than 120 minutes to take down. Content moderators have an incredibly important job, but the psychological impact is huge and, worryingly, inappropriate content is being missed in the moderation process too. In March, a BBC investigation found that Facebook failed to remove more than 80% of reported sexualised or abusive images of children. This is a huge failure, and it is clear that human moderation is not working.

Bring in the robots

Every day we see stories about how robots are going to take over our jobs, but this is surely a case where we should consider the benefits of artificial intelligence (AI) and machine learning. These technologies can now recognise nudity, violence, weaponry, gore and other harmful or illegal content, using algorithms that quickly process vast amounts of material at once, without psychological consequences. Such systems extract key visual features - colour, shape and texture - to flag explicit or inappropriate content in real time, with false-positive rates of just 7.9% for images and 4.3% for videos.
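To make that pipeline concrete, here is a minimal sketch of the feature-extraction-plus-classifier approach described above. It is an illustration only, not any platform's actual system: the labelled dataset, the simple colour-histogram features and the choice of classifier are all assumptions, and production moderation tools rely on far more sophisticated deep-learning models.

```python
# Minimal sketch: flagging images with simple colour-histogram features.
# Assumes a labelled set of image paths (0 = acceptable, 1 = flag for review);
# real systems use deep convolutional networks rather than hand-built features.
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def colour_histogram(path, bins=16):
    """Return a normalised per-channel colour histogram for one image."""
    img = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    features = np.concatenate(hist).astype(float)
    return features / features.sum()

def train_flagger(paths, labels):
    """Fit a simple classifier on colour-histogram features."""
    X = np.stack([colour_histogram(p) for p in paths])
    return LogisticRegression(max_iter=1000).fit(X, labels)

def flag_probability(clf, path):
    """Probability that an image should be withheld for review."""
    return clf.predict_proba(colour_histogram(path).reshape(1, -1))[0, 1]
```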

Of course, the system isn't perfect, and humans and robots will need to work together initially to keep false positives in check. But the nature of machine learning means that the technology will become more accurate the more content it processes, gradually relieving people of the trauma that comes with content moderation.
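One common pattern for that collaboration is to let the model act on its own only when it is very confident, and to send everything else to a human reviewer whose decision then feeds back in as a training label. The sketch below illustrates the idea; the thresholds are assumptions chosen for illustration, not recommended values.

```python
# Illustrative human-in-the-loop routing: the model handles only the cases it
# is confident about, and uncertain content is queued for a human reviewer.
# Threshold values are assumptions for illustration only.
REMOVE_THRESHOLD = 0.95   # auto-remove above this flag score
ALLOW_THRESHOLD = 0.05    # auto-allow below this flag score

def route(score: float) -> str:
    """Decide what to do with a piece of content given the model's flag score."""
    if score >= REMOVE_THRESHOLD:
        return "remove"        # model is confident the content is harmful
    if score <= ALLOW_THRESHOLD:
        return "allow"         # model is confident the content is safe
    return "human_review"      # uncertain: escalate, and use the reviewer's
                               # decision as a new training example
```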

So why hasn't it happened yet?

Social media giants like Facebook and Instagram have the power to implement this technology to filter their content, so why aren't they? False positives may be the concern.

If the technology flags appropriate content as inappropriate and inhibits freedom of speech, it could annoy users, who may then go elsewhere to socialise. That would mean fewer daily active users and, of course, less profit and a lower share price for the social networks. So, instead of implementing the technology, they hire more human content moderators, exposing more people to psychological harm. A few months ago, Facebook hired 3,000 new content moderators, bringing its total to 7,500. Yet on average more than 300m photos are uploaded every day, with over 510,000 comments posted every 60 seconds. How are moderators meant to filter that much content, quickly, without any errors? How can those people cope with seeing such horrendous images every day without it having an impact? It's impossible.

The argument for human moderation has classically centred on accurate contextualisation: how can a computer tell the difference between innocent and harmful content? AI is seen by many as the answer. Companies are now building technologies that are changing the moderation landscape by combining natural language processing and big data analytics with machine learning to determine the sentiment and context of uploaded images and text. Filtering happens in real time, safeguarding users before they see harmful content and before the damage is done.
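As a rough illustration of the text side of this, the sketch below scores comments using a simple TF-IDF representation and a linear classifier. Again, the training corpus and labels are assumed for the example; real systems combine much richer language models with contextual signals such as account history and the accompanying image.

```python
# Minimal sketch of text-side moderation: a TF-IDF representation feeding a
# linear classifier that scores comments in (near) real time.
# The training data and labels are assumed; production systems use large
# language models plus contextual signals, not bag-of-words features alone.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_text_flagger(comments, labels):
    """comments: list of strings; labels: 1 = harmful, 0 = acceptable."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    return model.fit(comments, labels)

def score_comment(model, text: str) -> float:
    """Probability that a comment should be withheld pending review."""
    return model.predict_proba([text])[0, 1]
```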

Humans (moderators and innocent users) should never be asked to see harrowing and disturbing content. AI will make manual moderation a thing of the past, and if this means letting the robots take our jobs then, for once, this is surely a good thing.
