Artificial Intelligence (AI): the technology we love to hate. We see daily stories highlighting its dangers - AI is going to destroy humanity, or robots will take all of our jobs. With Elon Musk and Stephen Hawking claiming that AI is a threat to humanity, it's no wonder everyone is panicking. But the rise of artificial intelligence is not a new concept.
Mary Shelley fascinated readers with her idea of Dr Frankenstein creating intelligent life from corpses. The notion that people could create beings that think and feel as humans do is by no means a modern invention. As society and technology develop, the concept is increasingly becoming a reality. But the fear that AI will turn on humanity and become like HAL 9000 or the Terminator may be a tad dramatic.
Instead of focusing our attention on the scaremongering of AI sceptics, we should explore how AI and machine learning are helping our planet and ultimately benefitting humanity.
There is a great deal of disagreement about just how many jobs AI will automate in the future - estimates range from 9% to 47%. However, the overriding priority should be to make certain jobs safer, easier and less unpleasant by using robots and AI algorithms.
A good example is currently being trialled in Spain, where Sadako Technologies has developed an AI waste-sorting system that separates recyclable material from landfill rubbish. A human takes twice as long to do the same job.
Another potential application for AI is within the prison system, where voice recognition could identify anomalous speech patterns during phone calls, flagging the use of code to facilitate crime.
So, are the robots going to take all of our jobs? Probably not. In fact, it's more likely that AI will help humans with their jobs or make them easier. Last month the UK government reported that robots could be used to perform a multitude of benign and "dangerous" jobs, which would save the UK economy £630bn. Instead of assuming AI will destroy us, perhaps the opposite is true.
Just as the discovery of nuclear power led to the creation of global institutions to regulate its application, so AI will need a similar policy-making and policing infrastructure. We cannot afford to lose the benefits of AI because of its potential for nefarious purposes.
Will AI dumb down the act of living?
AI can spot patterns that humans often miss. Patterns of behaviour are distinctive, and when analytics are overlaid, they can yield remarkably accurate predictions. This principle is already well recognised in medicine.
For example, Andrew Beck from Harvard Medical School conducted research on breast cancer. He ran patient details through an algorithm to see if the AI could identify whether a biopsy was cancerous, determining the course of treatment. The only extra information he gave the system was the patients' life expectancy. The AI could identify 11 signs that a biopsy was cancerous - three more than the human medical community had established. So, perhaps AI can save lives too.
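To make the idea concrete, here is a minimal, purely illustrative sketch of how a classifier can learn to weight tissue features and flag a biopsy as cancerous. The synthetic data, the feature count and the choice of logistic regression are all assumptions for the sake of the example; this is not the model Beck's team actually used.

```python
# Toy sketch only: logistic regression on hypothetical, synthetic
# "tissue feature" measurements. Not the real study's model or data.
import math
import random

random.seed(0)

def make_sample(cancerous):
    """Generate three synthetic feature values; cancerous samples
    are drawn from a slightly higher distribution."""
    base = 0.7 if cancerous else 0.3
    return [random.gauss(base, 0.1) for _ in range(3)], cancerous

data = [make_sample(i % 2) for i in range(200)]

# Train logistic regression weights with plain gradient descent.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(500):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))       # predicted probability
        err = p - y                      # gradient of log-loss w.r.t. z
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def predict(x):
    """Return True if the model flags the sample as cancerous."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) > 0.5

accuracy = sum(predict(x) == bool(y) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the principle the article describes: the model discovers which feature patterns separate the two classes, rather than being told which signs matter in advance.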
The Orwellian era
If algorithms are constantly monitoring us, we will have no privacy. Was Orwell right? Will we be stripped of our right to privacy and have a Big Brother watching us 24/7? Not necessarily. If we handle our personal data correctly, we can extract its value without having our privacy taken away.
In his lecture, Kenneth Cukier highlighted the need to start treating our personal data as an asset, like money. There is economic value in our data, but we are not currently utilising it. The banking system is a good example: we treat our money as an asset, put it into the system, and its value grows through use while it circulates back into society. We should think of our personal data the same way. It would, of course, be anonymised, but the principle is that its value can be fully exploited in order to benefit us. As long as anonymity remains intact, our privacy will not be violated and we can use our data to improve society.
In conclusion, Kenneth Cukier defined a word of his own coinage: homointellicentrism, which he describes as "to view the universe as revolving around the human mind as the centre of what is knowable and what constitutes reality." In short, humans created this technology and are therefore in control of it. We need to programme AI accordingly and, if we do, it's unlikely that we'll end up as slaves to the robots.