Police Facial Recognition Is An Authoritarian And Oppressive Surveillance Tool

Adding real-time facial recognition to our surveillance state’s already worryingly militaristic arsenal would not be good for the health of our democracy

Dancers at a Caribbean carnival on a west London street, peaceful protesters at a lawful demonstration against an arms fair, and citizens and veterans paying their respects to the war dead on Remembrance Sunday – these people have all been targeted by the police’s new authoritarian surveillance tool invading our public spaces: automated facial recognition.

Automated facial recognition is an artificial intelligence (AI) system that runs alongside a surveillance camera on the street, recognising people’s faces in real time and matching them against watch-lists created by the police.
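
To make the concept concrete: systems like this typically reduce each face a camera sees to a list of numbers (an “embedding”) and compare it against the embeddings of faces on the watch-list, flagging anyone whose similarity score crosses a threshold. The short Python sketch below illustrates the idea only – the embedding model, vector size, threshold and matching rule are all illustrative assumptions, not details of any police system:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for a real face-embedding model: in a deployed system each
# 128-number vector would be produced by a neural network from an image.
rng = np.random.default_rng(0)
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(5)}

# Simulate a camera frame: a noisy view of one watch-listed face.
camera_face = watchlist["person_3"] + rng.normal(scale=0.4, size=128)

# An assumed similarity threshold. Set it too low and innocent
# passers-by are flagged; set it too high and wanted faces are missed.
THRESHOLD = 0.6

for name, reference in watchlist.items():
    score = cosine_similarity(camera_face, reference)
    if score > THRESHOLD:
        print(f"ALERT: camera face matched {name} (score {score:.2f})")
```

Every “match” such a system reports is just a score crossing a threshold – which is why the question of how often those scores point at the wrong person matters so much.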

If the concept of this dystopian and authoritarian policing tool turning us all into walking ID cards weren’t troubling enough on its own, there are huge problems with both the technology itself and the police’s intrusive and oppressive use of it.

First of all, the police’s facial recognition technology itself is dangerously inaccurate, with our report published today revealing misidentification rates of up to 98%. If you think this isn’t worth worrying about, bear in mind that on the basis of an incorrect match the police have the power to stop you in the street and require you to identify yourself, in order to prove you aren’t the person their computer tells them you are.
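
The arithmetic behind numbers like that is worth spelling out. When genuine watch-list members are a tiny fraction of the crowd being scanned, even a system with a seemingly low per-face error rate produces alerts that are overwhelmingly wrong. The figures in this back-of-the-envelope Python sketch are illustrative assumptions, not numbers from our report:

```python
# Illustrative base-rate arithmetic: all figures below are assumptions.
crowd = 100_000             # faces scanned at a large public event
on_watchlist = 20           # genuine watch-list members in that crowd
true_positive_rate = 0.90   # chance a wanted face is correctly flagged
false_positive_rate = 0.01  # chance an innocent face is wrongly flagged

true_alerts = on_watchlist * true_positive_rate               # ~18
false_alerts = (crowd - on_watchlist) * false_positive_rate   # ~1,000

share_wrong = false_alerts / (true_alerts + false_alerts)
print(f"{share_wrong:.0%} of alerts point at innocent people")  # ~98%
```

On those assumed numbers, roughly 98 in every 100 people flagged would be innocent – which is why a “1% error rate” sales pitch and a 98% misidentification rate in practice are not a contradiction.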

That the police think these embarrassing inaccuracy rates are acceptable – claiming that “no facial recognition system is 100% accurate” – is even more worrying: South Wales Police has already planned future deployments, and the Metropolitan Police is expanding its use of facial recognition throughout 2018.

This is even more alarming in light of multiple studies showing that many high-profile and widely used facial recognition systems have much higher misidentification rates for people of colour and for women (particularly women of colour).

I witnessed the Metropolitan Police use automated facial recognition at Notting Hill Carnival last year, and while watching for only five minutes I saw the system wrongly identify two innocent women walking down the street as men on the police’s ‘watch-list’.

The Metropolitan Police also targeted people known to have mental health issues, who weren’t wanted for any crime, at Remembrance Sunday in 2017, while South Wales Police targeted peaceful and lawful demonstrators in Cardiff in March.

South Wales Police have admitted that they keep the images of innocent people wrongly identified by their facial recognition cameras for a year. That means every innocent person wrongly identified at these events – over 2,400 people in South Wales Police’s case – has their image on a police database, and is completely unaware of it.

National police databases are brimming with people’s images – 19 million at the last count – hundreds of thousands of which are of innocent people. Some 12.5 million of those images have been made into facial biometrics, available for scanning by facial recognition software. Despite a court ruling in 2012 that the retention of innocent people’s images was “unlawful”, the Home Office has refused to delete them, claiming it’s “too expensive”. Meanwhile, we have discovered that the Home Office has funded South Wales Police’s use of automated facial recognition with £2.6 million.

There is no legal basis for the police’s use of automated facial recognition: there is no law permitting or governing its use, and no guidelines for the police to follow. The police seem to think they have free rein to do whatever they want with whatever new tech they can get their hands on. It’s likely that the police’s use of this intrusive and oppressive technology is incompatible with the UK’s human rights laws, as it poses a significant threat to people’s rights to privacy and freedom of expression.

The UK already has one of the world’s largest CCTV networks. Adding real-time facial recognition to our surveillance state’s already worryingly militaristic arsenal would fundamentally change policing in the UK, and with it the health of our democracy. Innocent citizens being constantly tracked, located and identified – or, given the error rates above, more likely misidentified as criminals – by an artificially intelligent camera system conjures up images of futuristic dystopian societies that even Orwell would be proud of.

UK police facial recognition is lawless, undemocratic, and dangerously inaccurate. Police must stop using it now.

If you or anyone you know has been affected by police use of automated facial recognition at public events or in public spaces, please get in touch with us: info@bigbrotherwatch.org.uk

Big Brother Watch’s campaign against UK police use of real-time facial recognition has the support of David Lammy MP, as well as 15 rights and race equality groups, including Liberty, the Football Supporters’ Federation, the Runnymede Trust, the Institute of Race Relations, the Police Action Lawyers Group, ARTICLE 19, and Index on Censorship.
