We Must Protect Our Rights From Automated Decisions

We mustn’t ignore the great potential of new technologies to improve our society – but nor must we turn a blind eye to the risks they pose

Imagine a future where algorithms or even supercharged artificial intelligence (AI) make decisions in multiple aspects of your life: your job, your education, your welfare, and even your health.

In this future, the police use algorithms and AI systems to predict where crime will be committed and automated alerts trigger police to despatch vehicles. Police vehicles are fitted with automated number plate and face recognition cameras that identify people on watch lists. Police can ‘stop and scan’ people to verify their identity, using on-the-spot fingerprint scanners to check against crime and immigration databases. Following arrests, an algorithm can assess information held about suspects and decide whether they should be kept in custody.

Meanwhile, the intelligence agencies use automated programs to sift through billions of communications intercepted from entire populations, at home and abroad, every day. Their programs automatically read, listen to, and watch private conversations and web browsing, advising who should be subjected to more intense surveillance.

That future is here and now.

As the trend for automation gathers pace, our lives – and indeed our freedoms – risk being increasingly governed by machines. New technologies hold great promise in a range of sectors, from science to transport to health – but when automation is used to make decisions about citizens’ basic rights, the risks are grave.

European law gives us the vital right not to be subjected to automated decisions. Yet our Government is abandoning that right in the Data Protection Bill currently going through parliament – opening the door to employers, authorities and even the police using machines to make life-altering decisions about us.

Is it right to rely on machines to decide who is eligible for a job; who is entitled to welfare; or even who is innocent or guilty?

Unsurprisingly, automated computer programs don’t tend to deliver humane solutions.

An automated benefits system in the US denied a million applications over a three-year period – a 54% increase on the previous three years. It often blamed its own mistakes on claimants’ “failure to co-operate”. One such claimant was a woman suffering from ovarian cancer. Without welfare, she could no longer pay for her medication, her transport to medical appointments, or even her rent. She died the day before she won her appeal against the system.

There are rapid advances in the use of automated systems in the jobs market – yet Google’s job advertisement algorithm has been found to show prestigious, high-paying jobs to men more often than to women. Algorithms are now common at the hiring stage, scanning CVs and cover letters for keywords, and some employers even use ‘chatbots’ to screen candidates. Many applicants are rejected by these automated systems before they ever come into contact with a human.
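
To make the bluntness concrete, here is a minimal sketch of how keyword-based CV screening can work – the keywords, sample CVs and the screen() helper are all hypothetical, not any vendor’s actual system:

```python
# Minimal sketch of keyword-based CV screening.
# The REQUIRED keywords, sample CVs and screen() are hypothetical.
REQUIRED = {"python", "agile", "stakeholder"}

def screen(cv_text: str) -> bool:
    """Pass only if every required keyword appears - no human involved."""
    words = set(cv_text.lower().split())
    return REQUIRED <= words

print(screen("Led agile delivery; Python and stakeholder management."))  # True
print(screen("Twenty years of programming experience."))                 # False
```

Note that the second candidate – plainly experienced – is rejected for phrasing alone; that is the kind of blunt filtering described above.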

Some algorithms purport to track and rate an employee’s performance, even making firing decisions – supposedly beneficial because they “eliminate human emotional volatility”.

The automation of these processes – and the online nous required to navigate their blunt interfaces – makes life harder for older jobseekers, and for anyone whose job doesn’t require a digital skillset.

Perhaps even more chilling than being hired and fired by machines is the very real prospect of being policed by machines.

Automated identity checkpoints have recently crept onto our streets with the controversial introduction of automated facial recognition cameras. The Metropolitan Police and South Wales Police are currently using the technology with watch lists of people they want to keep an eye on – from petty criminals to people with mental health problems. The Met has used facial recognition cameras at Notting Hill Carnival for the last two years – despite similar technology showing a disturbing tendency to misidentify black faces.

The Met now also uses predictive policing, and has experimented with a commercial product called PredPol, as have forces in Kent, Greater Manchester, the West Midlands and West Yorkshire. The algorithm predicts crime hotspots and alerts police to despatch resources. However, multiple studies have found that PredPol can reinforce biased policing, typically in areas with large racial minority populations. This happens when crime statistics that reflect over-policing are fed back in to predict where crime will occur, producing self-fulfilling – and discriminatory – prophecies.
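
To see why the prophecy is self-fulfilling, consider a minimal simulation sketch – the figures and the allocation rule are hypothetical illustrations, not PredPol’s actual model. Two areas have identical underlying crime, but the one that starts out over-patrolled keeps ‘generating’ more recorded crime, so the predictor keeps sending officers back:

```python
# Minimal sketch of a predictive-policing feedback loop.
# All figures and the allocation rule are hypothetical illustrations.
true_crime = {"A": 10, "B": 10}  # identical underlying crime per week
patrols = {"A": 8, "B": 2}       # area A starts out over-policed

for week in range(1, 6):
    # Recorded crime rises with patrol presence: officers only record
    # what they are there to see.
    recorded = {a: true_crime[a] * patrols[a] / 10 for a in patrols}

    # The 'predictor' allocates next week's 10 patrols in proportion to
    # last week's recorded crime, so the initial skew never corrects.
    total = sum(recorded.values())
    patrols = {a: round(10 * recorded[a] / total) for a in recorded}

    print(f"week {week}: recorded={recorded}, next patrols={patrols}")
```

Despite identical true crime rates, area A is recorded as four times more criminal every single week – the data simply confirms the bias it was built on.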

Durham Police has taken an even greater leap towards automation, using artificial intelligence to help decide whether to keep suspects in custody. Its algorithm assesses information about suspects and estimates their risk of reoffending. A similar program used by US authorities was found to over-estimate the risk posed by black defendants more often than by white defendants – despite data on race not even being used. Durham Police recently removed one of the postcode fields from its tool, acknowledging the risk of discriminating against the poor.
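
A minimal sketch shows how that can happen even when race or income is never an input – the postcodes, records and scoring rule here are hypothetical, not Durham’s actual tool:

```python
# Minimal sketch of proxy discrimination (all data hypothetical).
# The scorer never sees race or income, but postcode carries both.
history = [  # (postcode, reoffended?) from past cases
    ("X1", 1), ("X1", 1), ("X1", 0),
    ("Y2", 0), ("Y2", 0), ("Y2", 1),
]

def risk(postcode: str) -> float:
    """Score a suspect by the reoffending rate of their postcode."""
    outcomes = [r for p, r in history if p == postcode]
    return sum(outcomes) / len(outcomes)

# Two otherwise identical suspects get very different scores purely
# by address - and if X1 is a poor, over-policed area, historic bias
# rides into the 'objective' score unseen.
print(risk("X1"))  # ~0.67
print(risk("Y2"))  # ~0.33
```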

We mustn’t ignore the great potential of new technologies to improve our society – but nor must we turn a blind eye to the risks they pose. By automating important decisions about people’s lives, we risk encoding discrimination in place of human fairness and shielding bad decisions with a veil of ‘objectivity’.

Substituting unaccountable machines for human decision-makers allows us to dodge the pressing moral and political dilemmas our society faces.

Automated processing may well support our decisions, but where the stakes are high, it should never replace human decisions.

That is why we’re calling on MPs to amend the Data Protection Bill to uphold the vital EU right not to be subjected to automated decisions where our fundamental rights are at stake. The advent of new technologies forces us to ask existential questions about our relationship with machines, and on one thing we’re clear – our human rights must always be protected by human decisions.
