The Importance of Human Moderation in Online Safety


Online safety has taken a serious battering over the last year. In the week of Safer Internet Day (11 February), I've been thinking a lot about what companies and networks do to keep their users safe online.

I've been in the business of online communities and social media for 11 years now - since before Facebook or Twitter were even launched. I got into this business because I believe passionately that social media can be a force for good: a great tool for communication, knowledge and exploration. I also believed then, as I do now, that the internet can be made a safer place for children and adults through a combination of social media management and education.

Part of good social media management is to moderate content: protecting users from harmful or unsafe content and defusing difficult situations such as bullying or abuse. Technology is wonderful for helping to filter content on websites. There are some fantastic tools that automatically filter things like illegal content, bad language, hate speech and so on, and even spot dangerous trends in behaviour based on analysis of activity on a site. But what technology can't do is take action, make a decision on whether something adheres to guidelines, offer help, or intervene when a child - or an adult - is in immediate danger.
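To make that division of labour concrete, here is a minimal sketch (entirely hypothetical, not the code of any real moderation product) of how an automated filter typically hands off to a person: software can match obvious rule violations and flag worrying patterns, but anything ambiguous or urgent is routed to a human moderator.

```python
import re

# Illustrative blocklist and risk patterns only; real systems use far richer
# classifiers, image hashing and behavioural analysis.
BLOCKED_TERMS = {"examplebannedword"}
RISK_PATTERNS = [re.compile(r"\b(hurt myself|kill myself)\b", re.IGNORECASE)]


def automated_filter(message: str) -> str:
    """First-pass triage: return 'block', 'escalate' or 'allow'."""
    words = set(re.findall(r"\w+", message.lower()))
    if words & BLOCKED_TERMS:
        return "block"        # clear-cut policy breach: software can act alone
    if any(p.search(message) for p in RISK_PATTERNS):
        return "escalate"     # possible user at risk: needs human judgement
    return "allow"


def moderate(message: str) -> None:
    verdict = automated_filter(message)
    if verdict == "escalate":
        # Only a trained moderator can weigh context, offer help, or decide
        # whether to contact emergency services; the software just routes it.
        print("Routed to human moderation queue:", message)
    elif verdict == "block":
        print("Hidden pending review:", message)
    else:
        print("Published:", message)


if __name__ == "__main__":
    moderate("I want to hurt myself")   # escalated to a person
    moderate("Lovely weather today")    # published automatically
```

The point of the sketch is simply that the filter's output is a routing decision, not a final one: the consequential calls stay with people.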

As Wendy Christie, eModeration's Chief Production Officer, says: "Technology has come close to understanding some forms of conversational activity but you still need human interpretation to process the results and decide how to respond to this behaviour. In some extreme instances, this will involve contacting emergency or social services."

It's tempting to see technology alone as the answer to staying safe online. But the reality is that it's only part of the solution. We need technology and humans to work together. The biggest brands invariably understand this: firstly, they have a responsibility to their customers, fans or followers; and secondly, their reputations are at risk if they allow their names to be associated with harmful content.

Most of the big networks are starting to take moderation seriously, too, although there's clearly more to be done. After the shocking, sustained and high-profile abuse on Twitter against Caroline Criado-Perez, Stella Creasy and others, Twitter last year introduced a 'report abuse' button across all its platforms, and announced it was increasing moderation staff to deal with the ensuing reports. Although the process of reporting abuse is far from simple, this is at least a step in the right direction.

As a result, human moderation of sites is becoming more important, not less, despite advances in technology. Moderation, when done properly, is highly skilled work, with moderators often making - quite literally - life-and-death decisions about how to deal with reported content. It is becoming an accredited profession, with initiatives like Moderation Gateway (which I'm proud to say eModeration has been heavily involved with) training moderators and setting new standards that are becoming recognised across the industry.

Moderation is about keeping people and brands safe. It is a fundamental part of what Safer Internet Day is all about: creating a better internet.
