The State of Racial Slurs on Twitter


Last year, researchers at Demos conducted a study, which you can read here, looking into the use of racial slurs on Twitter. We wanted to contribute to the discussion on free speech, which often centres on racial, religious and ethnic slurs, by helping people understand not only how often these terms were used on Twitter, but also in what context. Crucially, although the areas overlap, this was not a study on hate speech in general - not only is it evidently possible to post a racist tweet without using a racial slur, but our research suggested that people often post messages containing racial slurs without intending to be understood as racist.

To this end we analysed 127,000 tweets, collected over a nine-day period in 2012, all containing a word from a list of slurs sourced from Wikipedia. We found that the most commonly used terms, in order of prevalence, were 'white boy', 'paki', 'whitey', 'pikey' and 'nigga'. The terms weren't evenly used: 'white boy' appeared in 49% of the tweets, and only the top two terms each accounted for more than 5% of the data. After collecting the tweets, we used machine learning and natural language processing technologies, developed in partnership with the University of Sussex, to examine every single one and work out how the slur it contained was being used.
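For readers curious about the shape of such a pipeline, here is a minimal, hypothetical sketch in Python: a keyword filter standing in for the collection step, and a toy scikit-learn classifier standing in for the context-labelling step. The terms, labels and training examples are placeholders, not the actual data or models used in the study.

```python
# Minimal sketch of the two-step method described above: (1) keep only tweets
# containing a term from a watch-list, (2) train a simple text classifier to
# label how the term is being used. The term list, labels and training examples
# are illustrative placeholders, not the Demos/Sussex data or tooling.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

TERMS = ["term_a", "term_b"]  # stand-ins for the slur list
pattern = re.compile(r"\b(" + "|".join(map(re.escape, TERMS)) + r")\b", re.I)

def contains_term(text: str) -> bool:
    """Step 1: keyword filter, mirroring the collection stage."""
    return bool(pattern.search(text))

# Step 2: a toy supervised classifier over hypothetical usage categories
# such as 'abusive', 'appropriated' and 'reporting others'.
train_texts = [
    "you are a term_a",                  # abusive
    "my term_a friends and I are out",   # appropriated / affectionate
    "someone just called him a term_a",  # reporting others
]
train_labels = ["abusive", "appropriated", "reporting"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

incoming = ["he called me a term_a today", "great weather today"]
for tweet in incoming:
    if contains_term(tweet):
        print(tweet, "->", clf.predict([tweet])[0])
```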

Earlier this month, interested to see whether these trends had changed, we repeated the collection step of this study using the same terms, and picked up 6.7 million tweets over two weeks - a 34-fold increase in the number of slurs used daily. This far outpaces the growth in the overall number of tweets, which has risen by only a fifth over the same period. We also saw a remarkable shift in the terms used. The five most commonly used racial slurs on Twitter, as of September 2015, are 'nigga', 'white boy', 'nigger', 'niger' and 'paki'. The first of these appears in an average of 417,000 tweets per day, almost 10 times the use of the most prevalent term in 2012. The imbalance in the distribution has also increased, with an overwhelming 84.5% of the tweets collected containing the word 'nigga', compared to only 1.7% containing 'white boy'.
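For anyone who wants to check the arithmetic behind these figures, here is a rough sketch of the daily-rate comparison. The per-term estimates are derived from the quoted percentages, so they will not exactly match the report's counted totals.

```python
# Back-of-the-envelope arithmetic behind the figures above: daily slur volumes
# in each collection window and the ratio between them.
tweets_2012, days_2012 = 127_000, 9
tweets_2015, days_2015 = 6_700_000, 14

daily_2012 = tweets_2012 / days_2012   # ~14,100 slur tweets per day in 2012
daily_2015 = tweets_2015 / days_2015   # ~479,000 slur tweets per day in 2015
print(f"2012: {daily_2012:,.0f}/day   2015: {daily_2015:,.0f}/day")
print(f"increase: ~{daily_2015 / daily_2012:.0f}-fold")  # ~34-fold

# Rough per-term estimates from the quoted shares of the 2015 collection.
print(f"'nigga': {0.845 * daily_2015:,.0f} tweets/day")      # ~404,000; in the same ballpark as the ~417,000 quoted
print(f"'white boy': {0.017 * daily_2015:,.0f} tweets/day")  # ~8,100
```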

These numbers may seem shocking - it's tempting to assume they indicate a looming swell of online nastiness. I'd be wary, however, of seeing this data as evidence of a rise in racism, online or off. In our report last year, we found that in the vast majority of cases, slurs were not being used as terms of abuse. Some were, of course, but overwhelmingly these words were used ironically, in jokes, to report what others had said, and were often 'appropriated' by the groups they were originally intended to harm. Context, as ever, is king.

What these latest findings show is a continuation of this theme. While some people are undoubtedly still using slurs to injure or offend, this surge in their use over the last few years is likely to tell us more about the changing meaning of language than it does about an increase in hate speech; these terms have in many instances simply become part of the everyday for users on Twitter, employed in messages of affection, humour and solidarity, rather than hate.

Now, every social platform has its own internal universe of slang terms, in-jokes, unreadable acronyms and memes, and the rules of discourse on Twitter are not those of general conversation. This doesn't, however, mean that discussions on social media exist in an online bubble, isolated from the world, and language, at large.

Twitter gives its users the opportunity to use and experiment with taboo subjects away from the judgemental eye of society. By encouraging people to communicate in writing more frequently than ever before, with little or no censorship, social media helps speed up the evolution and redefinition of language. Crucially, this experimentation and evolution is no longer taking place solely in local communities, but on a global platform which can be viewed, and contributed to, by anyone. Not only does this open up valuable opportunities for social research, it also allows people to repurpose hateful language to their own ends.

We already know that the internet can change the way we speak to each other - unless they were signing off a letter, no-one ever said 'lol' in the Eighties. Maybe it can change the way we think, and speak, about race.