In 2019, a photo-altering app took the world by storm. In mere seconds, you could make yourself look really old and a new viral sensation emerged.
Now, 4 years later, FaceApp has been downloaded more than 500 million times.
Seems harmless enough, until you start to read the small print. Did you know FaceApp is owned by Russian software developer Wireless Lab? Did you know by using the app, you hand over all rights to the use of your likeness?
FaceApp’s terms of service grant the company a “perpetual” licence “to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your user content and any name, username or likeness provided in… all media formats and channels now known or later developed without compensation to you”.
Basically, “you give us all of this for free, and we give you a digital picture of yourself looking old. And you’re welcome.”
The app doesn’t stop there, though. It also collects other data stored on your device, such as the websites you visit. At an individual level, the implications of this are severe. Clearly, the risk of identity theft would be high if that sort of data – your face – were to fall into the wrong hands.
But the real dangers go much further than that.
Numerous tech companies are using facial image and video data to “train” facial recognition algorithms.
This technology is already so sophisticated that tiny modules can scan a person’s facial points to interpret imperceptible micro-expressions and eye movements, then recognise from them human emotions, moods, and even intentions. Our faces can reveal a lot about us, from whether we are ill to how a bit of information makes us feel - happy, sad, angry, and fearful.
A study published by Stanford University in 2017 found that AI could even deduce someone’s sexuality just by scanning photos of their face. At the moment, several states in the USA and multiple countries around the world are actively working to legalise discrimination against the LGBTQIA community.
Were they to use facial recognition software to determine whether or not someone is gay, and then violate their fundamental human rights on that basis, we would be entering a new normal that subjugates humanity to artificial intelligence.
The implications and risks of this technology - from law enforcement to politics, business to medicine - are huge and have already arrived.
We learned only this month that a company called Clearview AI legally downloaded billions of photos posted to Facebook without users’ consent, then used those images to train its own facial recognition software, which it is now selling to law enforcement agencies across the United States.
How far away are we from a government or company claiming it can read from someone’s face their “likelihood to commit a crime”, à la Tom Cruise’s film Minority Report? They could then decide to arrest someone because they “fit a description” that includes propensities determined via their data. Or even pre-arrest someone before they commit a crime.
This technology would also inevitably further entrench the massive racial disparities that already plague the real world. A federal study in the USA recently found that Black, Asian, and Native American people were up to 100 times more likely to be misidentified by facial recognition systems than white people. Not only are these technologies performing poorly, but giving them primacy over human decision-making is extremely dangerous.
Facial recognition technology is a major threat to people’s civil liberties and freedoms.
What happens when the burden of proof shifts to where law enforcement doesn’t have to prove you’re guilty, but you have to prove you’re innocent? It could turn the entire justice system on its head, and not in a good way.
There are early strides being made in the European Union and the White House to begin defining and defending people’s rights in the era of artificial intelligence. As has been seen with the fight to rein in the power of social media companies, however, the pace of technological development far outpaces the machinations of deliberative democracy.
The broader risk to society from artificial intelligence like facial recognition has never been greater, and the need to challenge these technologies must move faster to not only keep up but stay ahead of the worst-case scenarios. After all, shouldn’t your face belong to you?
Kyle Taylor is the Founder and Director of Fair Vote UK, which published whistleblower evidence of Vote Leave’s lawbreaking in the EU referendum and supported Chris Wylie’s whistleblower revelations around Cambridge Analytica’s global data theft and misappropriation. He is a leading campaigner on digital democracy reform and platform regulation in the UK and internationally. Kyle was the Campaign Director and Chief of Staff to a UK government minister and has worked on half a dozen election campaigns in the UK and the USA, including the 2016 US Presidential Campaign and the 2020 US Georgia Senate Runoff, which Democrats won by less than 56,000 votes.
He is the author of new book The Little Black Book of Social Media (£9.99).