In conjunction with leading social intelligence company Brandwatch, Ditch the Label analysed misogynistic language and neutral/constructive debate surrounding issues relating to misogyny as expressed across social media.
To better understand the current climate and how we can effectively campaign for greater gender equality, Ditch the Label worked in conjunction with social intelligence company Brandwatch to gain further insight into how misogynistic language is being used across social media, looking specifically at dialogue on Twitter. We analysed almost 19 million public Tweets from both the UK and US over a four-year period to get a broad understanding of the landscape.
Directly related to issues surrounding toxic masculinity (which we explored concurrently in the same report) is the issue of femininity and the ways in which women are viewed societally. Alongside its exploration of attitudes towards masculinity, the research identifies the current prevalence and perpetration of language that actively discriminates against, and is reductive towards, women and femininity.
Although this project sheds light on discriminatory language, it should not be viewed as an argument for online censorship; rather, the data points to the need for a nuanced approach, further open debate and awareness, and positive role models. While there are many signs of positive progress throughout, there are also key challenges to address for those of us hoping to facilitate social change.
Out of the 19 million tweets analysed, nearly 3 million were flagged as misogynistic insults. However, in contrast to results from prior Ditch the Label research, which found males to be the most likely perpetrators of bullying behaviours, females were found to be the largest perpetrators of misogynistic language on the social network, with 52% of all misogynistic tweets authored by women. This discovery warrants further exploration into the ways in which women engage with each other in both online and offline environments.
It seems we must not only challenge how men view and treat women, but also how women view and treat one another.
Shockingly, misogyny was in fact the only area in the study for which females were more likely than males to use pejorative language, breaking the trend seen across other constructs, wherein male authors were more highly engaged in insulting discussion. We found that females were most likely to use abusive language relating to animals (e.g.: bitch, cow, mare), promiscuity (e.g.: slut, slag, whore) and appearance (e.g.: "You're so ugly"), whereas male authors contributed the majority of language relating to sexual orientation, intelligence and anatomy.
The normalisation of misogynistic language may be a reflection that a subset of female authors no longer consciously consider such terms offensive. This theory is supported by the casual and common usage of insults such as "lil' bitch" by female authors, with self-reference/self-deprecation also common (e.g.: "I need to stop being such an emotional bitch"), as well as the personification of things (e.g.: "divorce is a bitch"). It appeared that no offence was intended by many of the authors of such comments, but they are nonetheless a reinforcement of negative generalisations and connotations based on gender. Even with the aforementioned desensitisation taken into consideration, a common theme of derision, references to promiscuity and the body, as well as the lack of 'male-equivalent' terms, supports a view of the language as undeniably misogynistic.
Comparing ratios of these query volumes also gives a detailed picture of how discourse varies between demographics and regions. For example, insults relating to female anatomy were significantly more likely to come from the UK than the US. Posts engaging in insulting discussion were twice as likely to be authored by students as posts engaging in neutral discussion.
Interestingly, the data raised awareness of other issues beyond misogyny, with influential authors noting conflicting Republican opinion towards Trump's comments on Obama and Mexicans. This highlights the extent to which highly visible issues bring other concerns around discrimination to the fore, suggesting a sense of community and alignment among active individuals seeking to raise awareness following high-profile cases. The report also revealed a strong correlation between racism and misogyny: states with high levels of misogynistic language usage also tended to exhibit less racial tolerance.
Positively, discussion of misogyny has grown consistently since June 2014 and has far overtaken the use of misogynistic language on the Twitter platform; neutral/constructive discussion of misogyny was around twice as visible as misogynistic insults on the network.
You can read the full report here.