History may well decide that this week has been a pivotal moment in the development of digital services for well-being. The introduction of the Samaritans Radar App on 29th October aroused a great deal of controversy (see #samaritansradar), perhaps especially among those who have experienced mental health difficulties, and it highlights the challenges facing future digital health interventions. Superficially it could seem like a beneficial addition to the App world; however, a deeper analysis raises many concerns about data privacy at the intersection of healthcare and open, digital forums such as Twitter.
As we face a future in which our lives and homes become filled with smart, connected devices, tracking almost every aspect of our being, the controversy over the Radar App could not have come at a better moment. But whatever the failings that some perceive regarding this App, we should be grateful that the Samaritans have, if inadvertently, helped us understand the interface between well-being and social media rather better.
The Samaritans Radar App monitors the tweets of those you follow and uses an algorithm to identify tweets that might suggest a person is experiencing suicidal thoughts. It "will flag potentially worrying tweets that you may have missed, giving you the option to reach out to those who may need your support" (www.samaritansradar.org). The App also gives ideas of how to support a person who has been flagged, and one can see how this well-meaning attempt to support those in difficulty might aid their recovery, and how its use might further break down the barriers of stigma and shame.
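The Samaritans have not published how their algorithm works, but for illustration only, a simple keyword-matching filter of the broad kind such a tool might use could be sketched as follows. The phrase list and matching logic here are assumptions, not the App's actual method:

```python
# Hypothetical sketch of keyword-based tweet flagging.
# The real Radar algorithm is not public; this phrase list is illustrative only.
WORRYING_PHRASES = [
    "want to die",
    "hate myself",
    "can't go on",
    "no reason to live",
]

def flag_worrying_tweets(tweets):
    """Return the subset of tweets containing any watched phrase (case-insensitive)."""
    flagged = []
    for tweet in tweets:
        text = tweet.lower()
        if any(phrase in text for phrase in WORRYING_PHRASES):
            flagged.append(tweet)
    return flagged
```

Even this toy version makes one source of controversy visible: a literal matcher cannot distinguish genuine despair from song lyrics, jokes or quotation, so false positives (and missed cries for help phrased differently) are inevitable.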
However, it also goes straight to the heart of the debates concerning privacy and surveillance in the digital age, and the risk that future digital services will not promote self-management but will, at best, foster an intrusive nannying that does not improve well-being.
The App also highlights how much we still need to learn about our use of social media sites such as Twitter in relation to our mental well-being. For example, even though use of sites such as Twitter means that our communications are public, this does not mean that we expect them to be registered by everyone, even if we understand there is a risk of that. Although Twitter is public in one sense, there is also an expectation of a certain level of privacy; private conversations can occur in such places, as they would in a street or bar. But even if we 'know' that a tweet, unlike a direct message, will be viewed by followers, how do we understand the meaning of any particular comment? This applies as much to a tweet deemed evidence of trolling as to one from someone who may express despair or suicidal thoughts.
One approach to this puzzling question is to recognise that smartphones, permanently connected and capable of rapid media creation, can become agents of musings, daydreams or even, perhaps, prayer; we do not expect our daydreams to be evaluated as an indicator of future action. Indeed, the Apps Environment report from Ofcom published this year revealed a surprisingly high perceived level of privacy with certain Apps. The report showed that we perceive apps as more private and secure than web browsers, and may be less conscious that they are connected to the online world or even public. An App might therefore be experienced as a more private space than other online spaces.
So, curiously, we may tweet something highly personal, as we might daydream, and not expect it to stimulate a response. A response to such a tweet could therefore be very unwelcome, or even quite intrusive. And of course, a response from someone you don't know might feel even more so, even if it is from a health practitioner or doctor. So, if algorithms could accurately predict those most at risk, would a response feel more like being confronted unexpectedly in the street? Such responses might even inhibit rather than support the expression of how one truly felt, at least for some.
The Samaritans have been a critical, under-recognised part of our country's mental health services, often supporting people out of hours, or as NHS services are restructured. Their progress in the use of new technologies has been courageous and exemplary, such that a person in crisis who cannot pick up the phone can email them for support. This new App is in line with such developments. And although controversial, the Radar App is advancing our understanding of what works and what doesn't in relation to social media. This comes at a time when many of the large social media companies want to develop and move into the health space. Trackers and monitoring will have their place, but we might feel there is quite a difference between, to borrow from Gershwin, someone 'to watch over me', and someone who watches but then acts.