Facebook, Frankenstein And Free Speech

The release of the Home Affairs Select Committee Report last week reignited some familiar debates. In what must feel to some in the technology industry like a never-ending series of accusations and demands, internet companies were again deemed to be failing in their duty to protect their users from extremist content. "Social media companies", the report states, "are consciously failing to combat the use of their sites to promote terrorism and killings."

Whether they like it or not, social media companies have taken on the responsibilities of past communications providers. The report is right to demand more of them on this front: being accessible 24 hours a day and dealing rapidly with the demands of law enforcement are not unrealistic expectations.

Social media companies have shown a willingness to work with governments and law enforcement in the past, and streamlining and improving this relationship is crucial. Unrealistic expectations do little to improve it, though, and some of the criticisms laid out in the report fall squarely into that category.

Why do governments around the world go knocking on the doors of Twitter, Facebook and Google when stuff they don't like is hosted on those sites? At root, it is a question of control. In the eyes of the government, the buck stops with the content host, who must, surely, be the central administrator.

The Internet is the death of central administration. Communication in the 21st century is instant. No delays are acceptable. We feel a pang of irritation when our comment is subjected to 'moderator approval' before upload.

The core principle of what was once called 'web 2.0' is that anyone, anywhere can contribute to the patchwork of trillions of comments, images, videos and so on. The price we paid was the middleman: no editor now stands between writer and reader. Berating Facebook for not employing enough people to "monitor billions of accounts", for not hiring enough moderators, is at best a misunderstanding of how the internet now works.

Long, long gone are the days of the editor, the curator. We humans are powerless in the face of the sheer volume of data. Only machines - algorithms - can pick through it all and make head or tail of it.

It's therefore natural that those most concerned by the lack of control should turn to the artificers behind these machines. The geeks and nerds who spawned these Frankenstein networks and made billions from them are now holed up in their glass-clad Californian towers as the angry crowd gathers around. "You created these monsters, surely you can control them?"

"We're trying!", they shout back. And they are. Google, Twitter and Facebook have all spent the last two years trying to tame their platforms. It isn't in their interest, after all, to have dick pics and beheading videos cropping up next to the advertising bar. It isn't in their interests for terrorist propaganda or racist abuse to be branded with their logo.

But for all their efforts, they will never completely muzzle their platforms. Not without killing them. The impact of pre-moderation - hold on, we're just checking the photo you've tried to post is okay - on Facebook's share price, not to mention on the free and open debate the site enshrines, would be absolutely catastrophic.

An alternative - algorithmically determining acceptable content - is, at the moment, a technological bridge too far. Working out the book you're most likely to buy, the advert you're most likely to click, is one thing. But teaching a computer to recognise extremist content is extremely difficult. The reasons are legion.

For one, who teaches it? Given ten videos, or ten comments, a group of humans will rarely agree which ones should be allowed on the Internet. Who should the machine agree with?
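To see the difficulty, consider a toy sketch in Python. Five hypothetical moderators each vote on ten comments (1 = remove, 0 = allow); every vote below is invented for illustration. The usual fix is to take the majority verdict as the 'ground truth' for training, but on split decisions the machine is simply learning one side of a human disagreement:

```python
# Toy illustration of the "who teaches it?" problem: five moderators each
# label ten comments (1 = remove, 0 = allow). Every vote is invented.
votes = [
    [1, 1, 1, 1, 1],  # clear-cut: everyone says remove
    [0, 0, 0, 0, 0],  # clear-cut: everyone says allow
    [1, 1, 1, 0, 0],  # contested
    [1, 0, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 0],
]

for i, v in enumerate(votes, start=1):
    remove = sum(v)  # how many moderators voted to remove
    verdict = "remove" if remove > len(v) / 2 else "allow"
    margin = "unanimous" if remove in (0, len(v)) else f"split {remove}-{len(v) - remove}"
    print(f"comment {i:2}: {margin:<9} -> training label: {verdict}")
```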

Secondly, the effectiveness of this kind of technology will always be measured in percentages below 100. A recent attempt by researchers at Yahoo (link to MIT Tech Review article) to algorithmically filter abusive content was reported to be around 90% accurate - a real leap and an impressive technological feat. But that still means that one in every ten messages will be misclassified: abusive messages will reach their targets, and legitimate posts will be wrongly suppressed. Across a million Facebook discussions, a hundred thousand users could find their speech incorrectly curtailed.
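To make the scale concrete, here is a back-of-the-envelope sketch. The message volume and the share of genuinely abusive posts are invented for illustration, and a single accuracy figure is, simplistically, assumed to apply equally to abusive and legitimate content:

```python
# Rough arithmetic behind a "90% accurate" filter at social-media scale.
# All figures are illustrative assumptions, not Yahoo's published results.
total_messages = 1_000_000  # hypothetical volume of posts to screen
abusive_share = 0.05        # assumed fraction that is genuinely abusive
accuracy = 0.90             # headline accuracy of the filter

abusive = total_messages * abusive_share
benign = total_messages - abusive

missed_abuse = abusive * (1 - accuracy)  # abusive posts that slip through
false_flags = benign * (1 - accuracy)    # legitimate posts wrongly blocked

print(f"Abusive posts reaching their targets: {missed_abuse:,.0f}")  # 5,000
print(f"Legitimate posts wrongly curtailed:   {false_flags:,.0f}")   # 95,000
```

Because the overwhelming majority of posts are legitimate, most of the filter's mistakes fall on legitimate speech - which is where the hundred-thousand figure above comes from.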

Finally, we might question how far we want the machines to dictate what we can and can't see. In a world where the people we speak to, the news we read and the opinions we digest are increasingly chosen for us by algorithms, we ought to be wary of handing them ever more power. Giving young people the skills they need to wander the marketplace of ideas and decide which ones to value and which to discard is vital, and will prove a stronger barrier against radicalisation than vain attempts to stop such content from ever reaching their screens.

Other mooted alternatives are more realistic. The best examples of successful online communities - Wikipedia and StackOverflow, for instance - rely much more heavily on community policing than Facebook and Twitter do. Many hands make light work, and by opening up administrative powers to more people, other major internet companies may find they too can improve the governance of their sites. But people power is not a model that governments around the world always trust, nor is it watertight: mistakes are made, offensive or extremist content is still uploaded, and the pleas for stronger central control will continue.

How states should relate to the internet is a question that will be asked time and again over the coming decades, particularly when governments feel a sense of powerlessness. Establishing the rules, let alone enforcing them, will be a marathon, while technology sprints away into the distance. One thing is clear, though - preparing British citizens for a life on the Internet is paramount. If this never-ending debate proves one thing, it's that good, educated digital citizens might be a more worthy and more achievable goal than a good, compliant internet.
