26/11/2014 11:33 GMT | Updated 26/01/2015 05:59 GMT

Crimestoppers: Facebook on the Front Line

The long-awaited report into the murder of Lee Rigby by the Intelligence and Security Committee, chaired by Sir Malcolm Rifkind, contained a surprising sting in the tail. The Committee found an unnamed internet company, now thought to be Facebook, to have hosted an online exchange between the killers that "could have been decisive" in preventing the attack. Writing in the Telegraph, Rifkind slams Facebook for failing "to notify the authorities when their systems appear to be used by terrorists".

This appears to have taken place in a direct message (which Facebook doesn't monitor) and on an account that Michael Adebowale had himself shut down - which makes it very unlikely anyone at Facebook ever saw it. (Remember: they have over a billion users.) Still, it is perfectly reasonable for Facebook and other social media companies to cooperate with the law in providing evidence to support criminal investigations. Laws already exist to facilitate this, though doubtless these companies could do more. The report highlights the unwillingness of American tech firms to cooperate with UK security services, and its findings will hopefully nudge that along.

So to what extent could Facebook have done more? Its content moderation is manual. Its moderators, bleakly brought to life in this Wired article, are outsourced in their thousands. Their job is to review content (usually flagged by users as offensive) and decide in a matter of seconds whether or not it should be removed. The Committee appears to want such content sent straight to the security services, perhaps alongside some additional algorithm-based monitoring.

This is harder than it sounds. First, tech companies and their employees cannot easily identify credible threats at this scale. Recent estimates put the number of Facebook posts at 2.5 million a minute (not including private messages). Actively moderating all of this content is an almost impossible task, let alone sifting through it to pre-emptively identify security threats. Then there's the difference between a credible threat and extreme or extremist language, which can be very subtle. Distinguishing the two is a difficult task at the best of times, even for investigators with time, context and, frequently, language skills on their side.
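Some back-of-the-envelope arithmetic shows just how impossible the scale is. The 2.5 million posts-a-minute figure comes from the estimate above; the five-second review time is an assumption made purely for illustration:

```python
# Rough workload implied by human moderation at Facebook's scale.
# SECONDS_PER_REVIEW is an assumed figure, not a reported one.

POSTS_PER_MINUTE = 2_500_000      # estimate cited in the text
SECONDS_PER_REVIEW = 5            # assumed average time to review one post
WORKDAY_SECONDS = 8 * 3600        # one full-time moderator shift

posts_per_day = POSTS_PER_MINUTE * 60 * 24
review_seconds_per_day = posts_per_day * SECONDS_PER_REVIEW
moderators_needed = review_seconds_per_day // WORKDAY_SECONDS

print(f"{posts_per_day:,} posts per day")
print(f"{moderators_needed:,} full-time moderators required")
```

On those assumptions, reviewing every public post would take around 625,000 full-time moderators - every day, before a single private message is looked at.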

And so at this scale, some automation would be required. Can an algorithm be built to automatically detect and report content suggestive of terrorist activity? Some of the most commonly used services already deploy this kind of automated text analysis, of course. Typing a word into your Gmail search bar, for instance, produces results built on 'reading' the content of your emails. However, we are a long way from the accuracy required to underpin a matter of national security, or the subtlety needed to identify credible threats. Natural Language Processing - the science of teaching computer systems to recognise our words - is not yet sophisticated enough to deliver the level and type of automation being called for. (This shortfall may explain why the CIA put out a work tender just five months ago looking for a Twitter analytics tool.)
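A toy sketch makes the subtlety problem concrete. The simplest possible automated monitor - a keyword watchlist, with words and example messages invented here for illustration - flags harmless figures of speech while missing anything phrased obliquely:

```python
import re

# A naive keyword filter of the kind a crude automated monitor might use.
# The watchlist and sample messages are invented for illustration only.
WATCHLIST = {"attack", "bomb", "kill"}

def flag(message: str) -> bool:
    """Return True if the message contains any watchlisted word."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return bool(words & WATCHLIST)

# A comedian dreading a bad gig - flagged:
print(flag("we're going to bomb at the open mic tonight"))
# A football fan talking tactics - flagged:
print(flag("their defence can't cope when we attack down the wing"))
# An oblique, possibly sinister arrangement - not flagged:
print(flag("meet at the place we discussed and bring the package"))
```

Both false positives would land on an analyst's desk; the genuinely suspicious message sails through. Real threat detection needs context, not keywords - which is exactly what current systems lack.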

Moreover, most social media sites tend only to moderate public content. Extending this to the 'private' messages that Lee Rigby's murderers allegedly used when discussing the plot would come with exorbitant costs, both financial and to users' sense of privacy. Whether or not average users of these services would be put off by this level of surveillance is up for debate, but one thing seems certain: those plotting serious acts of terrorism would be.

Indeed, the noises made by parliament could prove counter-productive. In the wake of the 2013 Snowden revelations, there has been a huge upturn in the number of internet users adopting encryption software. Tor is a browser that anonymises web traffic by routing it through encrypted relays. Pretty Good Privacy (PGP) is a means of encrypting emails. ISIS-sympathetic Twitter accounts have circulated Telegram and Surespot - encrypted WhatsApp-style messaging apps - to encourage fighters to remain hidden and their messages secure. The vast majority of seriously illicit content - child abuse imagery, criminal activity and terrorist material - already circulates under the relative safety of encryption. This trend - the 'Snowden Effect' - will only be accelerated by attempted mass surveillance of mainstream sites.

More must be done to tackle online crime, and Facebook should cooperate. But expectations of pre-emptive screening of social media content to detect threats are fantasy. Suggestions of wide-scale, pre-emptive internet surveillance probably aren't nefarious, 'Orwellian' attempts to watch our every move; but they do misunderstand what's really possible when dealing with the internet.