2020 Is The Year Britain Woke Up To The Politics Of Data

We must stop thinking that algorithms can do things better than humans, Women Leading in AI founder Ivana Bartoletti writes.
Nearly 280,000 students saw their A-level results downgraded after the introduction of Ofqual's controversial awarding model. Protesters say the model privileges private schools and downgrades students from less privileged backgrounds.
NurPhoto via Getty Images

Following the outbreak of coronavirus at the beginning of the year, our first glimpse into the power of data in the pandemic came in the form of the much-maligned digital contact tracing app.

It openly put large tech companies, rather than epidemiologists, in charge of deciding what was best for people, and citizens rightly worried about what would happen to their most valuable and personal information.

Would Covid data today become surveillance tomorrow? An important concern, especially after the Cambridge Analytica scandal, when the manipulation of research data showcased the danger of unaccountable uses of personal information.

After the app, a new data debate was sparked, this time around the A-level results fiasco.

In grading the students, the Ofqual algorithm ended up discriminating against those coming from poorer backgrounds.

Was that avoidable? Yes. Algorithms perpetuate and amplify the status quo, magnifying its inequalities and scaling up existing bias.

This is what happened: because the model relied on teachers’ assessments only for smaller groups, and smaller classes are far more common in private schools, private school kids benefited.

Similarly, by taking a school’s previous results into account, the algorithm ended up grading the school rather than the student, reducing each candidate to a number within an institution instead of an individual with aspirations, dreams and unique experiences of life.
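
To make the mechanics concrete, here is a minimal sketch of that kind of awarding logic. It is not Ofqual’s actual model: the cohort threshold, the ranking rule and every name in it are assumptions made purely for illustration.

```python
# A minimal, illustrative sketch of cohort-size-dependent grading.
# This is NOT Ofqual's actual model: the threshold, the ranking rule
# and all names here are assumptions made for illustration only.

SMALL_COHORT = 15  # hypothetical cutoff below which teacher grades are kept

def award_grade(teacher_grade: float, class_rank: int, cohort_size: int,
                historic_grades: list[float]) -> float:
    """Return a grade for one student on a 1-9 scale."""
    if cohort_size <= SMALL_COHORT:
        # Small classes (more common in private schools) keep the
        # teacher's assessment, which tended to be generous.
        return teacher_grade
    # Large classes (more common in state schools) are fitted to the
    # school's historic distribution: the student's rank in class picks
    # a grade from past cohorts, regardless of individual attainment.
    historic = sorted(historic_grades, reverse=True)
    position = min(class_rank - 1, len(historic) - 1)
    return historic[position]

# A strong student ranked 3rd in a large state-school class is capped
# by what 3rd-ranked students at that school achieved in the past.
print(award_grade(teacher_grade=9, class_rank=3, cohort_size=30,
                  historic_grades=[8, 7, 6, 6, 5, 5, 4, 4, 3]))  # -> 6
```

The point the sketch makes is that a student in a large class is graded by their school’s past, not by their own work.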

Students reacted firmly, and rightly so. “Fuck the algorithm” quickly became the new slogan, in a new and dystopian version of the traditional demo with chants and flags. The government performed a U-turn and apologised.

So, what have we learnt from that episode?

Several things – in my view, some good and some far less positive.

First of all, an algorithm cannot be blamed for anything. An algorithm is a set of instructions: it ingests data and churns out a result based on the parameters it is given.

No doubt the data it is fed is biased, and how could it not be? Data is simply a picture of the world as it is now, and so it presents the same inequalities, racism, sexism and social immobility as the reality out there.

Fed that data, the algorithm will simply replicate things as they are. It can be argued that the data can be amended and tricks deployed to make it less biased. While this is certainly true, it won’t necessarily fix the problem, because bias can emerge for many other reasons – the size of the groups in the A-levels case is one example of what is called discrimination by proxy.

Another example is postcodes. Well-intentioned employers could, for example, decide to remove gender or race when automating HR processes, and yet a postcode may still lead to bias. Postcodes represent more than just a postal address – they may indicate social background and act as a proxy for race.
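
Here is a toy sketch of that proxy effect, using invented data and a deliberately crude stand-in for an automated HR screen: even with the race and gender columns removed, historic outcomes keyed to postcode reproduce the old pattern.

```python
# Toy illustration of proxy discrimination; all data here is invented.
# The "model" below is just a lookup of historic hire rates by postcode,
# standing in for whatever an automated HR screen might learn.

from collections import defaultdict

# Hypothetical historic hiring records: (postcode, hired). The protected
# attributes have already been stripped out, but in this invented data
# they correlate with postcode, so their effect survives the removal.
history = [
    ("E1", 0), ("E1", 0), ("E1", 1), ("E1", 0),      # mostly rejected
    ("SW7", 1), ("SW7", 1), ("SW7", 0), ("SW7", 1),  # mostly hired
]

hires = defaultdict(list)
for postcode, hired in history:
    hires[postcode].append(hired)

def score(postcode: str) -> float:
    """Score a new applicant by the historic hire rate of their postcode."""
    past = hires[postcode]
    return sum(past) / len(past)

# Two identical CVs get different scores purely because of postcode:
print(score("E1"))   # 0.25
print(score("SW7"))  # 0.75
```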

“Without accountability for algorithms in the public domain, we are facing the risk of locking people out of essential services and access to primary needs.”

But even if we were able to identify all the possible technological and mathematical fixes to bias, would we be able to trust algorithms? It’s up for debate. And that is because the use of automation and the choice of a technological artefact are political choices.

Take facial recognition. There is an obvious, huge problem with bias: facial recognition software is still markedly less accurate at recognising Black women. But if we were able to achieve the perfect tool, would we still want to be watched, spied on and recognised wherever we go?

Surveillance will end up being wrapped around the most vulnerable, with the rich able to purchase their freedom.

So, the first lesson we have learned is that any artefact is biased, data is not neutral, and even the most perfect tool can be deployed in a biased way. This means that we must stop thinking that there is something sacred in data, or that algorithms will necessarily do things better than humans. Machines scale up what we humans are trying to eliminate, and that includes racism and social inequality.

The second lesson we have learned is that we were able to fight the A-levels debacle because we knew there was an algorithm behind it. However, the reality is that algorithms are increasingly replacing human decision-making, and we, the public, know so little about them.

Bank loans, insurance premiums, public sector repayments: increasingly, algorithms churn out results which can lead to a scaling up of inequality. A bank could extend less credit to a woman because historic data shows women have less earning power, or to a Black person from a certain postcode because that postcode has traditionally been associated with higher crime rates.

But what if we don’t know that a loan denial or a lower position on a waiting list is the outcome of an automated decision with some obscure parameters set by a group of people?

Without accountability, we risk locking people out of essential services and out of access to the primary needs that enable people to progress in life: housing, health and education – exactly the areas that function as levers of equal opportunity.

The third lesson we have learned is that all this is going on with little say from us. Predictions driving policy making, legal decisions made by computation, adverts informed by our online activities, voting suggestions and information ads served on the basis of patterns of behaviour – all this is the reality, the here and now. Not only are we being softwared out, democracy is too.

The time to react is now. We need clear rules, a licensing agency for algorithms that make life-impacting decisions, and liability safe harbours for products that meet due-diligence standards. We vitally need redress for individuals affected by unjust decisions made by algorithms.

The greatest lesson of the A-levels fiasco must be that the time to act is now.

Ivana Bartoletti is a privacy law professional, founder of Women Leading in AI and author of An Artificial Revolution: On Power, Politics and AI
