In December 2016, the Met's anti-cybercrime unit arrested a 55-year-old man on suspicion of "racially aggravated malicious communications" aimed at Gina Miller, the woman behind the Brexit court case. The arrest joined a string of incidents in 2016 that highlighted the proliferation of hate speech, yet it was one of the first to flag the legal implications of the 'post-truth' era we are now apparently living in.
The implications of post-truth, especially in relation to hate speech, are particularly interesting in British law. The crux of the issue lies in the difficulty of prosecution, given that freedom of speech is generally considered one of the bedrocks of modern democracy. Yet, as a society, we cannot ignore hate speech: the targeting of vulnerable communities poses a serious challenge to a stable democracy and the rule of law.
In the 21st century, technological developments are not only benefiting populist figures (such as Trump) but also encouraging inflammatory and extreme speech, which then spreads rapidly online. The so-called 'alt-right' movement has come under fire for peddling fake news stories on social media under the guise of rejecting a biased mainstream media agenda, but that has not stopped it. Facebook, Twitter et al. have been used to spread hateful messages, inflicting psychological and physical harm on minorities.
So where does the law stand? In the UK, freedom of expression is protected under Article 10 of the ECHR as a qualified right. Its protection is wide in scope, extending even to communications that 'offend, shock and disturb'. A number of key pieces of UK legislation nonetheless seek to sanction extreme bigotry. For example, under section 127 of the Communications Act 2003, it is illegal to send by means of a public electronic communications network a message or other matter that is grossly offensive. Such an offence can result in a custodial sentence, and a person can be found guilty under the Act irrespective of their intention.
Convictions under the Communications Act have been controversial precisely because of this disregard for intention. In 2010, 28-year-old Paul Chambers was prosecuted for posting on Twitter about "blowing up" Robin Hood Airport after his travel plans were disrupted by its closure. When Chambers was found guilty and ordered to pay a fine, public outcry ensued. In 2012, a High Court appeal quashed the conviction on the basis that the message was not menacing in character: it was 'banter', intended for a small group of recipients.
As this case demonstrates, context is crucial when determining what constitutes hate speech. Part 3 of the Public Order Act 1986 criminalises displaying or distributing threatening or insulting material where the intent is to stir up racial hatred. Mark Anthony Norwood, a member of the British National Party, was convicted under the Act for displaying a poster depicting the Twin Towers in flames alongside the words "Islam out of Britain - Protect the British People".
Norwood claimed his conviction violated his right to freedom of expression under Article 10 of the ECHR. However, his appeal to the European Court of Human Rights failed. In its judgment, the Court invoked the fundamental principle that no one may rely on the Convention to engage in activity aimed at the destruction of the rights and freedoms it sets forth (Article 17, ECHR). Simply put, the protection of one's freedom of expression is curtailed by the overriding need to safeguard a stable, open and inclusive society.
It is a triumph of common sense that Chambers was acquitted whilst Norwood's conviction was upheld. Those who worry that hate speech legislation undermines the democratic right to free speech tend to rely on slippery-slope arguments: in a post-truth era, they contend, criminalising certain uses of language could easily slide into tyranny. Yet, one could equally argue, with no sanctions on menacing language we risk eroding the right of marginalised individuals to live free from fear and harassment. The legal framework within the UK is well equipped to strike the right balance, especially following the sensible interpretation of the interplay between Article 10 and Article 17 set out in Norwood. The greater challenge is perhaps the availability of resources to police the internet effectively and provide a credible deterrent to the kind of inflammatory and extreme speech that circulated ever more widely online throughout 2016.