The recent report by the UK Parliament's Intelligence and Security Committee named Facebook as one of those responsible for the horrific death of Fusilier Lee Rigby at the hands of two terrorists. I'm unsure if the MPs on the committee have any idea how completely ridiculous this is - but I'm very sure that it demonstrates how unfamiliar they are with the technology used to police the Internet.
The Prime Minister only compounded the feeling that the Government doesn't understand technology by saying: "We must not accept that these communications are beyond the reach of the authorities or the internet companies themselves." It's breathtaking when you think about the way David Cameron dismisses all the User Generated Content (UGC) on the entire Internet as 'these communications'. He clearly has no idea of the volume of stuff he's talking about or the scale not only of the Clearnet but also the Darknet - or perhaps he does and is just pandering to those who believed him when he suggested at the time of the London Riots that he might 'turn off the Internet'.
First let's unpack Mr Cameron's problem. UGC is all the chat, Tweets, Facebook posts, forum posts, videos and photos posted on the web. Just to give you an idea of volume, Twitter users post more than 300,000 tweets per minute and Facebook users share information 2.5 million times per minute. That's per minute - not per hour or day. That means that vast numbers of potentially dangerous conversations are beyond the reach of the authorities and the internet companies, and with the technology we have at the moment it's going to stay that way.
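To put those per-minute figures in perspective, here's the back-of-envelope arithmetic (using only the numbers quoted above - real daily totals will obviously fluctuate):

```python
# Scaling the quoted per-minute volumes up to a single day.
tweets_per_minute = 300_000
fb_shares_per_minute = 2_500_000

minutes_per_day = 60 * 24  # 1,440 minutes in a day

tweets_per_day = tweets_per_minute * minutes_per_day
fb_shares_per_day = fb_shares_per_minute * minutes_per_day

print(f"Tweets per day:            {tweets_per_day:,}")     # 432,000,000
print(f"Facebook shares per day:   {fb_shares_per_day:,}")  # 3,600,000,000
```

That's over 400 million tweets and 3.6 billion Facebook shares every single day - and that's just two platforms on the Clearnet.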
It's also a little disingenuous of the Committee to, in effect, suggest that the private sector should have all the answers as to how to police the internet. Facebook does use automated moderation software and there are companies out there that have developed it for brands to use to moderate their own feeds of content from places like Twitter and Facebook. This machine-learning moderation software analyses the content and context of UGC and 'learns' what constitutes a dangerous or unwanted conversation. It can, depending on the system used, detect racism, bad language, spam, pornography and threats. It then flags these conversations, normally escalating them up to a human moderator for assessment and action. But it's not infallible - after all, it's a piece of software.
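To give a flavour of how this kind of flag-and-escalate pipeline works, here's a deliberately simplified sketch. The keyword weights and threshold here are invented purely for illustration - a real machine-learning system would learn far richer features from labelled training data rather than use a hand-written word list:

```python
# Toy illustration of automated moderation with human escalation:
# each post gets a risk score, and anything above a threshold is
# flagged for a human moderator instead of being acted on automatically.

# Invented weights for illustration; real systems learn these from data.
RISK_WEIGHTS = {
    "attack": 0.6,
    "kill": 0.8,
    "bomb": 0.9,
    "buy": 0.1,
}

ESCALATION_THRESHOLD = 0.7


def risk_score(post: str) -> float:
    """Score a post between 0.0 and 1.0 by summing word weights."""
    words = post.lower().split()
    score = sum(RISK_WEIGHTS.get(w, 0.0) for w in words)
    return min(score, 1.0)


def moderate(post: str) -> str:
    """Return the pipeline's action: 'escalate' to a human, or 'allow'."""
    if risk_score(post) >= ESCALATION_THRESHOLD:
        return "escalate"  # flagged for human review
    return "allow"
```

The crucial design point is the last step: the software doesn't make the final call, it narrows billions of posts down to a queue a human moderator can actually assess - which is exactly why it's not infallible.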
However, I suspect that the UK security services - and certainly the US services - have access to much more sophisticated automated content-monitoring tools than any private company can find or develop in the outside world. In fact, it would be remarkable if nations such as the US had not already developed incredibly advanced machine-learning software that identifies risks across everything from social media to email and phone conversations. After all, we've been researching artificial neural networks since the 1940s. So, let's face it, we can safely assume that the people at GCHQ don't sit there all day reading Facebook or listening to wiretaps to see if someone is plotting to blow something up. They undoubtedly use sophisticated automated tools that analyse words and context to assess risk. And if they don't, then they are completely mad, because they could buy such tools from at least one UK company today - or perhaps they're too busy collecting data via PRISM to spend any money on analysing it.
However, I think not. And because the UK has always been at the forefront of computer technology, I'd bet my last quid that GCHQ are more than on top of the problem of assessing risk in UGC all over the Internet - and are well aware that it's an almost impossible task. And that's the thing about the Parliamentary report. It makes out that the security services are powerless against the threat of terrorist-related UGC on the Internet; that only private companies can save us from people plotting against us online. Yes, GCHQ don't have back-end access to Facebook or any other social media site, so they can't use their advanced software to analyse risk from inside Facebook. But that's a problem they have to live with. They are never going to get carte blanche access to Facebook - and rightly so.
But there is another important issue at play here which makes me wonder about the conclusions of the report. The Internet is a vast place - bigger than anyone except a computer scientist can imagine. It's a massive iceberg. What we see via Google and other search engines is called the Clearnet, and it's potentially less than two per cent of what's actually out there; the rest is buried deep down in the Darknet, or Deep Web. What we see on the Clearnet is all kittens and flowers and fairies compared to what's going on down there. That's why it's almost laughable to blame companies on the Clearnet for not picking up on terrorist threats. It's also highly likely that the killers of Lee Rigby were talking, possibly much more frequently, on the Darknet, which makes me wonder if the security services didn't see the MPs on the Committee coming: 'I know, just tell them that Facebook are nasty - they've heard of them - and they'll leave us in peace to get on with our virtually impossible task of analysing content on the Darknet.'
All this means that the Government should not go around stamping their feet, saying it's not fair and shifting the blame for every terror-related incident onto a social network you can find on Google. Sometimes they have to acknowledge that they don't know everything - not even everything that their own security services are doing. As MP Bruce George, Chairman of the Commons Defence Committee, said of Porton Down, the UK government's military science facility, in 1999: "It would be quite erroneous of me and misleading for me to say that we know everything that's going on in Porton Down. It's too big for us to know, and secondly, there are many things happening there that I'm not even certain Ministers are fully aware of, let alone Parliamentarians."