Understanding Automated Decisions: Holding The Algorithms To Account

What we need to see are technologies that model automated decision-making on human expertise

Between 1985 and 1987, a software failure in the Therac-25 radiation therapy machine caused at least six known accidents, several of which were fatal. The software, developed by Atomic Energy of Canada Limited (AECL) as part of the machine, was intended to automate the complex human judgements required when setting up and delivering a dose of radiation.

Granted, this isn’t the cheeriest way to open an article, but the Therac-25 case is a 30-year-old example of a liability problem in technology that still won’t go away.

So, who was liable for the Therac-25 accidents? The hospitals that implemented the machine? AECL, who repeatedly affirmed the safety of their system? Or was it the fault of the software developers?

An investigation into the accidents reported multiple failings – from the design and development of the machine and its software to AECL’s assessment of fault events in the system. The Therac-25 case exemplifies the challenge the law faces in getting to grips with the nuances of new technology and technological error.

Today, the rapid rise of artificial intelligence (AI) presents a different, but no less complex, challenge. As automated decision-making is relied upon to make complex judgements in finance, law and healthcare, the need for transparency becomes greater. But how do we make algorithms accountable?

Many AI algorithms are vastly complicated, and their intellectual property is closely guarded, with vendors either unwilling or unable to reveal how their programmes work. Mark Zuckerberg’s recent attempt to spell out the inner workings of Facebook’s algorithms to Congress (without a great deal of success) serves as a perfect analogy for today’s AI industry. There are plenty of current examples of AI programmes making decisions where the end users have no idea how the conclusions were reached. Life-threatening or not, technologies that operate away from public view will only continue to fuel distrust.

As a member of the All Party Parliamentary Group on AI (APPG AI), I recently took part in the group’s discussion on the explainability of AI. This debate was triggered in part by GDPR – which will provide people with clarity and control over the way in which their data is processed. The legislation is prompting a rethink from certain areas of the tech sector about the kinds of technologies that can be successful.

For me, what we need to see are technologies that model automated decision-making on human expertise – rather than on the opaque, data-driven matrices that underpin machine learning. This kind of top-down, expert-led approach is particularly important in regulated industries, where there is a real need for technology that can provide auditable decisions. The biggest benefit of having an AI platform governed by human-authored rules is that a subject matter expert can provide much-needed clarity about how each decision was reached.
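To make that idea concrete, here is a minimal sketch of what a rule-governed, auditable decision could look like. It is purely illustrative: the loan-screening scenario, the rule names and the thresholds are all hypothetical and not drawn from any real platform. The point is simply that every outcome can be traced back to a human-readable rule that a subject matter expert wrote and can explain.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    """A human-authored rule: a plain-English description plus a predicate."""
    name: str
    description: str                      # written by the subject matter expert
    applies: Callable[[dict], bool]       # returns True when the rule fires

@dataclass
class Decision:
    outcome: str
    audit_trail: list = field(default_factory=list)  # every rule consulted, and whether it fired

def decide_loan(application: dict, rules: list) -> Decision:
    """Approve unless any expert rule rejects; record each rule consulted."""
    decision = Decision(outcome="approved")
    for rule in rules:
        fired = rule.applies(application)
        decision.audit_trail.append(
            {"rule": rule.name, "description": rule.description, "fired": fired}
        )
        if fired:
            decision.outcome = "rejected"
    return decision

# Hypothetical rules a credit-risk expert might encode directly.
RULES = [
    Rule("income_floor",
         "Reject if declared income is below £18,000",
         lambda a: a["income"] < 18_000),
    Rule("recent_default",
         "Reject if a default was recorded in the last 12 months",
         lambda a: a["months_since_default"] is not None
                   and a["months_since_default"] < 12),
]

result = decide_loan({"income": 25_000, "months_since_default": 6}, RULES)
print(result.outcome)               # "rejected"
for entry in result.audit_trail:    # each step reads back in plain English
    print(entry)
```

Contrast this with a trained statistical model, where the same rejection would be the product of thousands of learned weights: the decision might be just as accurate, but no expert could point to the line of reasoning behind it in the way the audit trail above allows.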

The surest way to clear the air in this rather murky debate, and thus clear the path for progress, is to focus on bringing transparency to all AI technologies. Currently, AI is being used to make decisions that affect a huge number of people, yet few experts can actually explain the reasoning behind those decisions. It’s unbalanced, and as AI continues to grow that balance needs to be redressed and those algorithms must be held to account.
