AI-Powered Decision-Making In Business Must Be Transparent And Keep Humans 'In The Loop'

Many well-known companies are already employing automated decision-making systems, but few of those systems can provide reasoning for the decisions they make, or records of those decisions alongside the reasons

The CEO of Cognitive Reasoning platform Rainbird (www.rainbird.ai) shares his views on the future of AI decision-making for business, in light of recent EU legislation and a UK government report.

Management consultancy McKinsey & Company predicts that over the next 10 years, Knowledge Work Automation will be worth $5-7 trillion to the global economy. An important component of this automation will be AI-powered Cognitive Reasoning. Discussions around the future of AI-powered decision-making are largely about transparency, traceability, and the role of people in the decision-making process.

The UK government's Robotics and Artificial Intelligence report made reference to the topic of decision-making transparency, with specific mention of the new EU General Data Protection Regulation (GDPR), which comes into effect in 2018. Amongst other things, the regulation will give individuals subject to automated decisions a 'right to explanation'. That is, they will be entitled to know how and why a certain decision about them was made.

Many well-known companies are already employing automated decision-making systems. However, few of those systems can provide either reasoning for the decisions they make or records of those decisions alongside the reasons. The new regulation means that any organisation doing business in the EU will need to record the decisions it makes and the reasoning behind them. The need to provide increased transparency and traceability of automated decisions will have a major impact on technologies already employed in business and industry: systems that cannot currently provide reasons for their decisions will fall short of the EU regulation.

What is automated decision-making?

Before discussing the future of AI in more detail, it's important to define the term 'automated decision-making'. Put simply, it's a decision made by a machine instead of a human. Automated decision support, on the other hand, is a decision made by a machine but verified by a human. For example, in the financial services sector, a computer can select certain products and services based on a customer's profile, but it is the adviser who ultimately chooses what to recommend.
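
To make the distinction concrete, here is a minimal sketch in Python. The product names, thresholds, and function names are entirely hypothetical: the automated path acts on the machine's output directly, while the decision-support path hands the same proposal to a human adviser for verification.

```python
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    income: float
    risk_appetite: str  # e.g. "low", "medium", "high"

def machine_recommendation(profile: CustomerProfile) -> str:
    """Select a product from the customer's profile (toy rule only)."""
    if profile.risk_appetite == "low" or profile.income < 30_000:
        return "fixed-rate savings"
    return "equity fund"

def automated_decision(profile: CustomerProfile) -> str:
    # Fully automated: the machine's output *is* the decision.
    return machine_recommendation(profile)

def decision_support(profile: CustomerProfile, adviser_approves) -> str:
    # Decision support: the machine proposes, the human adviser disposes.
    proposal = machine_recommendation(profile)
    return proposal if adviser_approves(proposal) else "escalate to adviser"

profile = CustomerProfile(income=25_000, risk_appetite="low")
print(automated_decision(profile))                # fixed-rate savings
print(decision_support(profile, lambda p: True))  # fixed-rate savings
```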

Most people already come into daily contact with AI-based decision-making. Satellite navigation systems, especially those with real-time traffic, use it to recommend routes. Online service providers like Netflix use it to suggest movies you might like, based on previous choices. Even applications for personal loans are at least semi-automated, in contrast to the days when a decision was based largely on your personal relationship with the bank.

As with all business-wide systems, AI-supported decision-making needs to be scalable and reliable, so that companies can assume across-the-board consistency, eliminating subjective variances in the decisions made by employees. In other words, a scalable, reliable automated decision-making system will allow all employees to consistently make the same quality of decisions, irrespective of individual ability and experience.

This level of consistency, together with the ability to record and subsequently retrieve and audit why certain decisions were made, will have significant benefits for companies and individuals in terms of accountability and liability. Trust in AI-based systems is paramount, both for operators and for the customers and employees subject to decisions made by automated processes. If automated decision-making systems are transparent, trust will follow: operators will be able to interrogate a machine's decision and see the reasoning behind it. This is a very important aspect of deploying AI; people do not trust a black box. They want the reassurance of being able to see inside the process.
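
One way to make decisions retrievable and auditable in this sense is to store each one alongside the reasoning that produced it. The following is a minimal sketch under assumed names; the record fields and reasons shown are illustrative only, not any particular product's schema.

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    subject_id: str       # whom the decision is about
    decision: str         # what was decided
    reasoning: list[str]  # the steps that led to it
    timestamp: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc))

class AuditLog:
    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> None:
        self._records.append(rec)

    def explain(self, subject_id: str) -> list[DecisionRecord]:
        """Retrieve every decision made about a subject, with its reasoning."""
        return [r for r in self._records if r.subject_id == subject_id]

log = AuditLog()
log.record(DecisionRecord(
    subject_id="customer-42",
    decision="loan declined",
    reasoning=["income below threshold", "existing debt above 40% of income"]))
for rec in log.explain("customer-42"):
    print(rec.decision, "because:", "; ".join(rec.reasoning))
```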

Ensuring transparent AI implementation

Developers of AI systems must ensure people are considered at all stages of automated decision-making. Otherwise, employees and users can be overwhelmed and disempowered by new processes, especially if they feel their jobs are under threat or that they have not contributed to the decisions being reached.

In summary, there are several areas to address if AI decision-making systems are to be deployed effectively. Firstly, consider your data: is it personal data, such as addresses, contact details or vehicle registrations? By law, storing personal data requires some degree of transparency. Also, look closely at whom you are making decisions about. Is there a 'lurking variable' that could skew the data and lead to an inappropriate decision? If necessary, position decision-making systems as decision-support systems, assisting with and augmenting people's choices rather than overruling or replacing them. Allowing humans to make and influence decisions alongside the technology will go a long way towards acceptance of AI. Finally, implement technology that can justify the decisions it makes, as sketched below. A transparent system that also allows retrieval of decision history is preferable to a black box that prevents scrutiny.
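
As an illustration of that last point, this sketch, with made-up rules and thresholds, returns a human-readable chain of the rules that fired alongside the decision itself, so an operator can see inside the process rather than face a black box.

```python
def assess_application(facts: dict) -> tuple[str, list[str]]:
    """Toy rule-based assessment that justifies its own output."""
    fired: list[str] = []  # every rule that fires is recorded as a reason
    if facts.get("income", 0) >= 3 * facts.get("repayment", 0):
        fired.append("income covers repayments at least three times over")
    if facts.get("years_at_address", 0) >= 2:
        fired.append("stable address history of two years or more")
    decision = "recommend approval" if len(fired) == 2 else "refer to human"
    return decision, fired

decision, reasons = assess_application(
    {"income": 45_000, "repayment": 9_000, "years_at_address": 3})
print(decision)  # recommend approval
for reason in reasons:
    print(" -", reason)
```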

Legislation surrounding the use and archiving of data is evolving to protect consumers and others subject to automated decision-making processes. The technology is now available for companies to meet those requirements and, at the same time, to meet their own needs for enterprise-wide reliability and consistency when making decisions that contribute directly to costs and revenues.
