During the 2016-17 Home Affairs Select Committee on Radicalisation, Google claimed that, in 2014, they had removed over 14 million videos relating to different forms of abuse, including content that promoted violent extremism. They have now announced they will be employing an additional 10,000 staff as human moderators to take down videos and comments that violate their policies. It seems the problem of abusive online content is getting worse.
While we can agree that content which directly promotes or incites violence or abuse has no place online, a grey zone sits uncomfortably between freedom of speech and hate speech, and it is becoming an increasingly urgent problem.
Until their bans from Twitter and Facebook (in 2017 and 2018 respectively), Britain First personified this grey zone. Through a barrage of anti-Muslim and anti-immigration content, they poisoned the air of cyberspace with a cloud so toxic it seeped from the virtual world into the physical world, creating a polarised environment for minority groups in the UK.
While Britain First has never explicitly called for violent attacks against Muslims (despite their contemptible ‘Mosque invasions’, where they thrust copies of the Bible into the hands of frightened worshippers), my colleague at the Centre for the Analysis of the Radical Right, Dr Craig McCann, recently noted that Facebook eventually removed Britain First from its platform for “repeatedly posting content designed to incite animosity and hatred against minority groups”. In other words, they created the mood music to which violent extremists dance.
We can commend Facebook and Twitter for taking the bold step of banning Britain First, but I cannot help but wonder if the very public arrests and convictions of its leader, Paul Golding, and deputy leader, Jayda Fransen, were the impetus for the ban. Would social media companies have taken note, for instance, had the President of the United States not raised their profile to his 50 million followers and the world’s media by retweeting some of Fransen’s anti-Muslim messages?
And herein lies a bigger problem: Britain First are far from the only game in town. There are countless websites, Facebook pages and Twitter accounts that promote the same animosity towards Muslim communities, both in Britain and abroad. Some cultivate an air of authority and do not shy away from self-promotion. Others are similarly brazen in their contempt for Muslims, but operate below the radar of mainstream media and thus the glare of public scrutiny and outrage.
By framing every conversation about Islam and Muslims through a negative prism, these online platforms seek to redefine those words as shorthand for all that is wrong in society; it is a tragic indictment of today’s society (and our mainstream media) that they should be succeeding. Those who study corpus linguistics will be familiar with this method of shaping public opinion through the normalisation of harmful language. This serves not only to foment hostility against Muslims, but also to alienate our Muslim youth, who increasingly see the UK as unwelcoming of their faith and its symbols of worship. Extremists on both sides of the equation exploit this anxiety.
These issues are not the preserve of one community over another. Surely society can recognise that Islamist terrorism is currently the most prevalent terrorist threat in the UK without denigrating all Muslims in the process; we shouldn’t need reminding that the overwhelming majority of Muslims neither condone nor support terrorism. However, these anti-Muslim platforms pump out such a relentless and persistent torrent of negative stories that, in the words of Baroness Sayeeda Warsi, anti-Muslim prejudice has become “Britain’s bigotry blind spot”.
So how do we define these websites, groups and individuals who stay on the right side of our hate crime laws but whistle the tune which advances the rhetoric of violent extremism, and what can be done about them?
The UK’s counter-terrorism strategy defines them as ‘non-violent extremists’ because they are sympathetic to the aims of violent extremists, but without engaging in or promoting the acts of violence themselves. The term itself has been contentious when applied to Islamist extremists, but that is mostly because critics of counter-terrorism policies wrongly ascribe religious conservatism to its remit. Few, however, would deny it aptly describes the bile of the radical right.
In the United States, the Dangerous Speech Project is creating a framework for tackling this issue. They define ‘dangerous speech’ as any form of expression (speech, text or images) that can increase the risk that its audience will condone or participate in violence against members of another group. To reduce its impact and still preserve our right to freedom of speech, they suggest two approaches: the first is education of what constitutes dangerous speech and why it can be so harmful (effectively inoculating society by being able to recognise and resist it); and the second is to counter dangerous speech directly, by responding to it in a way that undermines it. Likewise, the US Holocaust Memorial Museum has produced a guide on counteracting dangerous speech.
Here in the UK, much of this work already forms part of the Prevent Strategy, although this is more explicitly focused on terrorist propaganda. However, the Counter Extremism Strategy, led by its new commissioner Sara Khan, will have dangerous speech and non-violent extremism directly in its sights.
Whilst the appointment of a Counter Extremism Commissioner has received some predictably cynical responses, new approaches to tackling this problem should be welcomed, and we must be mature enough to assess Ms Khan and the Commission on their results, not on personal grudges and hyperbole.
Tackling non-violent extremists is a contentious but increasingly urgent problem with no simple solution, but unless we find a way to curtail their Danse Macabre, the band will play on.
An edited version of this blog first appeared on the Centre for the Analysis of the Radical Right.