10/07/2018 11:00 BST | Updated 10/07/2018 11:00 BST

The Ethical Implications Of Killer Technology

While we know that technology can bring us much-needed social and scientific advancements, we must always keep in mind the dangers that, if left unchecked, it can wreak upon our humanity


When working in Haiti after the 2010 earthquake, I discovered that Twitter had been instrumental in saving lives in at least three separate emergencies at the Petionville Club IDP Camp: helping to find a rabies vaccine, to locate an oxygen tank, and to alert emergency teams to what one nurse described as “the most horrific car accident” she had ever seen. While mindful of the many uses of social media in these sorts of crises, as well as in my work as a researcher and writer, I have become aware that for each benefit of social media there are corresponding abuses, some of them extreme.

Nothing demonstrates this more starkly than this past week, when the state government of Tripura in northeast India was forced to cut off Internet access after a spate of WhatsApp-inspired murders. These three killings, although separate incidents, all followed the same pattern: individuals were targeted because of fake news reports spread through WhatsApp. India has had problems in recent years with the lynching of Muslims, and WhatsApp messages have been circulated to “warn” people of child traffickers in Bengaluru, Karnataka, where a 26-year-old migrant construction worker, out walking, was lynched on suspicion that he was a kidnapper. Other WhatsApp messages show staged kidnapping videos accompanied by claims that the perpetrators are near. This, of course, results in the formation of angry local mobs whose most likely object of ire is the non-local, usually a worker from another Indian state. WhatsApp has also been linked to violence in the UK, with the recent case of the “Ice city boyz” group, in which a dispute over who was the “least gay” ended in the stabbing of Paul Akinnuoye in Blackheath.

While we have heard about deadly accidents involving people taking selfies, and about the man who died in last year’s gaming marathon, logically nobody would make the leap to believe that apps actually kill people. Nor do we think that apps can force anyone to be dishonest or violent, any more than we believe that cars are responsible for drink drivers. Yet there seems to be a pattern in recent years whereby our society wishes to displace the responsibilities of the individual onto the “evils” of technology. Certainly, much app development has focused on benign niche markets, such as helping the visually impaired with daily tasks or offering creative tools for artists on the go.

Yet perfectly intelligent people in the tech sector are worried about the possible malevolence that artificial intelligence presents to humans. Stephen Hawking, in an interview with The Times last year, spoke at length about how survival within civilisation necessitates aggression, stating that it is “hard-wired into our genes by Darwinian evolution.” Hawking framed the links between human aggression and the encroachment of technology as posing a risk that may end civilisation, emphasising, “We need to control this inherited instinct by our logic and reason.” Even Apple co-founder Steve Wozniak, along with over 3,978 AI and robotics researchers, signed the 2015 open letter “Research Priorities for Robust and Beneficial Artificial Intelligence,” with its attached document of recommended research, warning that artificial intelligence potentially poses a greater danger than nuclear weaponry. Neither Hawking nor those in AI are suggesting that technology can in and of itself commit crimes against humanity, but they are scratching the surface of the dangers that can arise when the humans who create technology are not bound to any sort of legal or professional ethics after they unleash their creations onto the world.

We have seen the dangers of unchecked technology destroy human life. The killer “machine-learning algorithm” that steers the U.S. drone programme has, according to The Intercept, killed thousands of innocent people, and UK police have recently acknowledged that their facial recognition software is wrong 90 percent of the time. These are not minor details when human life depends upon accuracy. Yet, months later, UK police are still using the same faulty technology and are now under fire from the United Nations. Meanwhile, the drone strikes in the Middle East not only continue, but there is now new technology, an Unmanned Aerial System (UAS) with a learning algorithm, that allows the drone itself to decide who, exactly, should be killed. This translates to killing machines with little to no human oversight.

While the greatest cyber crimes that the police deal with today usually involve the Darknet, file sharing, and crimes against children, many of these crimes can be resolved by contacting Internet service providers (ISPs) or by simply having web hosting services shut down malicious sites. Still, the sorts of crimes conducted over heavily encrypted apps like WhatsApp, or the glitches of AI gone awry, are next to impossible to contain, even with a speciality force set up to surveil 45 million social media accounts.

What are our options for resisting the technological flattening out of our human rights and the devastation to humanity that can and does occur when humans abuse technology? The Italian philosopher Giorgio Agamben writes of the expansion of the biopolitical imperative of the state, what he terms “bare life,” as it moves from the margins to the centre of the state’s power, where exceptional acts of “emergency” increasingly become the rule. The world post-9/11 has shown us how technology invigorates this reading: technology drove the wars in Afghanistan and Iraq, which resulted in over 1.5 million dead. In a twist to his work in this field, Agamben in 2004 turned down visiting professorships at New York University and UCLA when he learned that the US government was engaging in biometrics, the recording of fingerprints and retina scans for visitors from certain countries. While we know that technology can bring us much-needed social and scientific advancements, we must always keep in mind the dangers that, if left unchecked, it can wreak upon our humanity. We must push our leaders to engage in responsible debates surrounding the ethics of technology today.