How Facebook Is Rethinking Suicide Prevention Efforts

Those with an interest in online safeguarding and recurring debates about social media moderation might have found an interesting anecdote hidden in the latest Facebook earnings call.

When asked about content moderation efforts, in light of recent incidents of troubling videos being broadcast live on Facebook, Mark Zuckerberg used the opportunity to discuss the company's approach to suicide prevention:

"A lot of what we're trying to do here is not just about getting content off Facebook. Last week there was this case where someone was using Facebook Live to broadcast - or was thinking about suicide. And we saw that video and actually didn't take it down and helped get in touch with law enforcement who used that live video to communicate with that person and help save their life. So a lot of what we're trying to do is not just about taking the content down, but also about helping people when they're in need on the platform, and we take that very, very seriously."

As Zuckerberg implies, the instinctive approach for administrators dealing with disturbing content has tended to be to remove the content, and with it the line of communication, immediately. This reduces the number of users exposed to the material and minimises the risk of a PR crisis. However, it doesn't necessarily help the individual concerned or those already exposed to the content. One of the reasons for this is purely practical: for professionals to de-escalate an incident, they need a direct line of communication with the individual affected, and just as with emergency calls, keeping that person talking can often help emergency services to locate them.

This shift away from instinctively shutting down content reflects a more thoughtful, measured approach to safeguarding -- one which I wrote about in a postgraduate public health thesis in 2015. Media depictions of the internet have often been marked by hysteria over outlier horror stories that are about as indicative of digital dangers as rare shark attacks are of the dangers of going for a swim. This isn't to say that there aren't dangers, but excessive fear doesn't leave us well placed to make rational judgments about how to manage the opportunities and risks, let alone to prepare for threats should they arise.

If we consider moderation to involve deploying a finite set of resources, then to date internet administrators have tended to heavily favour what can be termed 'negative moderation' (negative as in subtractive): removing, blocking, and banning offensive content and users.

Although it might seem intuitive that this prevents users from being exposed to troubling content, blocking content too liberally can just as easily push offensive content or conversations to the shadowy fringes of the internet where troubling behaviour can be normalised. Just as adolescents might discuss risky behaviours after school or in the playground that they were not allowed to discuss in class, the conversations can still take place -- just in far murkier settings. A good school will recognise this and allow some issues to be discussed in the classroom so that tutors can monitor conversations and offer forms of 'positive moderation' through factual information, supportive resources, and balanced perspective.

Mark Zuckerberg's anecdote, which he shared despite not being directly asked about suicide prevention, suggests both that Facebook take the issue very seriously and that they will not be cowed by media sensationalism into the simplistic route of trying to block every potentially troubling piece of content, irrespective of whether this approach helps or harms users. These are encouraging signs.

Perhaps the simplest example of positive moderation is the practice of media organisations providing relevant factfiles and helplines at the end of content that features potentially upsetting themes (as recommended by Samaritans). In the case of social media and chat rooms, platforms can provide easily accessible links to authoritative content and resources, and can also educate users about how to respond to content they find concerning or disagreeable. For example, the youth peer-support platform TalkLife is working to train volunteers to provide peer support and to signpost users to resources.

Where removal is deemed necessary, rather than pretending the content never existed, administrators can provide follow-up support to those affected and pre-emptive educational material in case similar incidents recur. These things are difficult to do at scale, but they will become easier as platforms employ predictive algorithms and machine learning.
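To make the idea concrete, here is a minimal, hypothetical sketch in Python (standard library only) of what pairing moderation decisions with supportive resources might look like. The keyword check is merely a placeholder for the kind of predictive model a real platform would use, and none of the names, thresholds, or data structures reflect Facebook's actual systems.

# A toy illustration of 'positive moderation': rather than only removing
# flagged content, the platform attaches supportive resources and queues
# follow-up for people already exposed to it. Everything here is hypothetical;
# the keyword check stands in for a trained machine-learning classifier.

from dataclasses import dataclass, field
from typing import List

CONCERNING_TERMS = {"self-harm", "suicide", "hopeless"}  # illustrative only

SUPPORT_RESOURCES = [
    "Samaritans (UK): 116 123",
    "Befrienders Worldwide: https://www.befrienders.org",
]

@dataclass
class ModerationAction:
    remove: bool
    attach_resources: List[str] = field(default_factory=list)
    follow_up_users: List[str] = field(default_factory=list)

def risk_score(text: str) -> float:
    """Placeholder for a predictive model: crude keyword matching."""
    hits = sum(term in text.lower() for term in CONCERNING_TERMS)
    return min(1.0, hits / 2)

def moderate(text: str, exposed_users: List[str]) -> ModerationAction:
    score = risk_score(text)
    if score >= 1.0:
        # Highest risk: remove the content, but also support those already exposed.
        return ModerationAction(True, SUPPORT_RESOURCES, list(exposed_users))
    if score > 0:
        # Lower risk: keep the conversation visible and add resources alongside it.
        return ModerationAction(False, SUPPORT_RESOURCES)
    return ModerationAction(False)

if __name__ == "__main__":
    print(moderate("I've been feeling hopeless lately", ["user_42", "user_77"]))

In practice the keyword check would be replaced by a trained classifier and the follow-up list routed to trained responders, but the shape of the decision -- support attached, rather than content simply erased -- is the point being made above.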

In his 1958 inaugural lecture at the University of Oxford, political philosopher Isaiah Berlin introduced the concept of negative liberty to describe freedom from imprisonment and coercion. Negative moderation is the digital antithesis of negative liberty; it interferes with the behaviour of some, but on its own it's a crude method that can harm as many as it helps. A mature philosophy of the internet must employ a balanced approach towards digital content if it is to allow individuals to express themselves creatively, to learn and grow intellectually, and to access help when needed.
