Should Tech Companies Police The Internet?

This week Mark Zuckerberg announced, in a post on his Facebook page, that he would be hiring 3,000 content moderators.

The news doesn't come as any surprise. In the last year tech companies, and particularly social media sites like Facebook, have come under fire for the content that is shared on their platforms.

In the weeks following the US election the social media platform was accused of allowing turbo-charged 'fake news' to spread across the internet. President Barack Obama complained about the "dust cloud of nonsense" on the social network.

In response, Zuckerberg, Facebook's founder and chief executive, said the company would tweak its technology to ensure the veracity of links and offered to hire fact checkers - which it has done.

But now Facebook and the other tech giants are faced with a new headache.

Last week a man killed his baby daughter and broadcast a video of the murder on Facebook Live. The footage remained visible to anyone for more than 24 hours. In April a man in Ohio shot another man, seemingly at random, and posted the video on Facebook. That video, too, stayed online for several hours, even after viewers had flagged it.

In these instances - just two grisly, headline-grabbing tips of a much larger iceberg - the problem was not fake news, but news that was horrifyingly real.

Both the spread of fake news stories and the use of social media to share extreme content call into question the responsibility that tech companies - Google, Apple, Facebook, Amazon, and even second-tier players such as Twitter, Snapchat and the emerging messaging apps - have to edit and police their platforms.

There is certainly a growing consensus among policy-makers that the tech companies should take a more active role. Facebook has always maintained that it is a neutral platform, with limited responsibility for the content it hosts. All it is responsible for, it says, is connecting people across the globe. But that argument is becoming harder and harder to sustain.

Following an inquiry into dangerous and illegal material that is available online, a cross-party group of British politicians said the failure of tech companies to deal with such content was a disgrace.

"These are among the biggest, richest and cleverest companies in the world, and their services have become a crucial part of people's lives. This isn't beyond them to solve, yet they are failing to do so. They continue to operate as platforms for hatred and extremism," said one MP.

The German government has gone as far as announcing that fines of up to €50m will be imposed on social networks that fail to delete hate speech or fake news, expressing concerns that illegal content was not being taken down quickly enough.

Until now, Facebook has resisted calls to hire editors or to take steps that would make it more like a traditional news organisation. It has never called itself a media company, because doing so would risk alienating users and potential advertisers, and would affect how it is treated under media law.

But when, according to a recent poll, 40 per cent of people view Facebook as a news source, the company surely has no choice but to reassess the role it plays in the distribution of information.

The impact that will have on the internet as we know it remains to be seen.
