In The Online War On Terror, How Do We Protect Digital Freedoms?

In a bid to stem the tide of digital radicalisation by terrorist groups such as Islamic State, the European Parliament has approved plans for new legislation which will allow rapid and widespread removal of extremist content from the internet. Digital rights activists are up in arms over the decision, which they fear will lead to private organisations policing and censoring internet users with impunity.

The phrasing of this legislation, proposed by MEP Monika Hohlmeier and cribbed from an existing law relating to child pornography, is somewhat vague in its practical applications. According to the new text, EU member states "may take all necessary measures to remove or to block access to webpages publicly inciting to commit terrorist offences."

A couple of things don't quite sound right there. Firstly, there is no consensus on what exactly constitutes terrorist propaganda; it is an imprecisely defined term, the interpretation of which is highly subjective. Secondly, the authorisation of "all necessary measures" essentially gives authorities carte blanche to flout user rights and to monitor, flag and delete anything they deem suspect.

This kind of conduct already has a damning precedent. Europol's Internet Referral Unit (IRU) has been criticised by internet freedom commentators for going too far in removing content. In May of this year, the Council of Europe accused the UK of infringing on civil liberties and warned of the dangers of over-blocking. And now, the European Parliament's Committee on Civil Liberties (LIBE) has waved through a compromise text for a directive on combating terrorism, filled with unclear provisions which are open to abuse.

"Governments have an obligation to combat the promotion of terrorism, child abuse material, hate speech and other illegal content online," says Thorbjørn Jagland, Secretary General of the Council of Europe. "However, I am concerned that some states are not clearly defining what constitutes illegal content. Decisions are often delegated to authorities who are given a wide margin for interpreting content, potentially to the detriment of freedom of expression."

In the aftermath of the recent spate of jihadist attacks on the continent, European governments are rushing to erase propaganda wherever possible, and they are seeking the help of tech companies in doing so. This in turn has sparked serious concerns that the fight against online extremism is in danger of being privatised, and what should be the sole province of a judiciary is now the remit of social networks and internet providers.

Earlier this year, tech giants including Facebook and Twitter put their support behind the European Commission's new code of conduct, which demands that hate speech be removed from social platforms within 24 hours of posting. But, as always, there is a fine line to walk between moderating discourse effectively and creating an outright social police state.

Writing in Quartz, Jillian C. York argues that Facebook and Twitter have built a "culture of snitching" into their platforms in much the same way that, throughout the 20th century, governments persuaded citizens to spy and inform on their neighbours. The system is there to enable quick and easy reporting of abuse, but it also means petty grievances and vendettas can be escalated.

So if human beings can't be trusted to keep each other in line, should we turn to machines?

In June, the Counter Extremism Project (CEP) unveiled an algorithm built on robust hashing, which enables it to identify known extremist content even in audio and video files. And over on Facebook, artificially intelligent moderators now report more offensive visual content than human users do.
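CEP has not published the algorithm's internals, but the general mechanics of robust (perceptual) hashing are straightforward: derive a compact fingerprint that survives re-encoding, resizing and other small alterations, then compare it against a database of fingerprints taken from already-identified material. The Python sketch below uses a simple average hash as a stand-in for CEP's more sophisticated method; the KNOWN_HASHES database and the matching threshold are invented for illustration.

    from PIL import Image

    def average_hash(path, hash_size=8):
        """Compact fingerprint that survives resizing and recompression."""
        # Downscale to a tiny greyscale thumbnail; detail is deliberately lost
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        # One bit per pixel: brighter than the mean or not
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def hamming(a, b):
        """Number of differing bits between two fingerprints."""
        return bin(a ^ b).count("1")

    # Hypothetical database of fingerprints of already-flagged material
    KNOWN_HASHES = {
        0x3CFFFF1818181818: "flagged-image-001",
    }

    def check_image(path, threshold=5):
        """Return the label of a near-duplicate known image, or None."""
        h = average_hash(path)
        for known, label in KNOWN_HASHES.items():
            if hamming(h, known) <= threshold:
                return label
        return None

The same matching logic extends to audio and video by fingerprinting individual frames or slices of the audio signal, which is where a hash that tolerates small distortions earns its keep.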

Encouraging developments, right? Sorry to be a downer, but these solutions suffer from the same drawback as human arbitration: algorithms and AI rely on humans first defining exactly what extremist content looks like, and then providing a comprehensive database of examples.
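To make that dependency concrete, consider a toy supervised classifier, sketched below with scikit-learn and invented training data. Every judgement the model makes is a projection of the labels a human annotator supplied, so a vague or biased definition of "extremist" is baked straight into its predictions.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented, illustrative training data: a human annotator has already
    # decided which texts count as "extremist" (1) and which are benign (0)
    texts = [
        "join the fighters and answer the call to violence",
        "news analysis of the ongoing conflict in the region",
    ]
    labels = [1, 0]

    vectoriser = TfidfVectorizer()
    model = LogisticRegression().fit(vectoriser.fit_transform(texts), labels)

    # The model can only echo the annotators' definition of "extremist";
    # content unlike anything in its training data is effectively invisible
    new_post = ["a poem about conflict and loss"]
    print(model.predict(vectoriser.transform(new_post)))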

Huge strides are being made in machine learning every day, and it is entirely possible that we will reach a point where AI is able to independently make that decision. But we're not there yet.

"Technical quick fixes are an illusion," writes journalist Nick Cohen in the foreword of Jihad Trending: A Comprehensive Analysis of Online Extremism and How to Counter It. "If the radical underground is not confronted, we know what may follow."

Cohen is far from alone in his belief that simply blocking access does not address the root of the problem. "Deleting content won't make the hatred go away," says Facebook COO Sheryl Sandberg. "We can't just treat the symptoms; we have to treat the cause." Speaking at the G20 summit in January, Sandberg positioned counter-speech as "by far the best answer" to limiting the distribution and impact of extremist propaganda.

"The best thing to speak against recruitment by ISIS are the voices of people who were recruited by ISIS, understand what the true experience is, have escaped, and have come back to tell the truth," she says. Facebook's Online Civil Courage Initiative was founded to facilitate such counter-speech, and to encourage users to spread positivity and empathy in the wake of terrorist attacks.

"Terrorism is a topic that requires more time and not less," say advocacy group EDRi. "It requires more public debate and not less. It requires more safeguards to defend our fundamental freedoms from the threat of violence and not fewer."

The eagerness of policymakers to scrub the web clean of propaganda is well intentioned, especially amid reports that the perpetrators of recent atrocities were influenced or even "recruited" online by terrorist organisations. Ensuring that as few people as possible are exposed to hateful, violent material is one part of limiting the damage extremist groups can inflict. But a blanket approach to censorship ignores the racial tensions, economic deprivation and other factors which feed radicalisation. And while we wait for those underlying causes to be addressed, we are left with a question: in the online war on terror, must we choose between our freedom of speech and our safety?

Originally published at Imperica. © 2016 Perini
