In recent years, the notion of true artificial intelligence (AI) has moved beyond scientific abstraction. From driverless cars to healthcare, the technology is more prevalent in our lives than ever before. It is therefore of the utmost importance that not only those driving progress in the field, but also those responsible for regulating it, are able to grapple with the ethical implications of its development.
A number of weeks ago, the European Commission announced plans to investigate the ethics of AI through the launch of a new expert group. Tasked with assessing the benefits of the technology and its potential impact on the future of work, the group’s ultimate goal will be to make informed policy recommendations that facilitate responsible deployment.
By the close of 2018, the group is expected to have drafted a thorough set of guidelines for the ethical advancement of the field across Europe. Specifically, its work will focus on fairness, transparency, the role of AI in the workplace, democracy, and whether the technology infringes upon the Charter of Fundamental Rights.
Tech firms globally are currently pulling away from regulators in the race to shape the future of AI as it becomes more deeply embedded in our daily lives. And despite some of the world’s most prosperous and influential corporations already endeavouring to weave the technology into their business models, huge swathes of the media and the general public remain uncertain about its societal repercussions.
Seemingly, we have reached an impasse. Until alliances of futurists, civil-rights activists, social scientists, regulators and the public at large can agree upon a universally accepted set of ethical standards or a code of conduct to govern the technology, it is difficult to see how society as a whole will ever welcome it with open arms.
Reaching such an agreement relies upon close cooperation between technologists and political institutions, the free exchange of information, a consistently open dialogue and a concerted effort by the tech community not to simply sideline policy-makers altogether.
Two years ago, significant progress was made in this area. A number of Silicon Valley’s biggest names founded the Partnership on Artificial Intelligence to Benefit People and Society, with the aim of establishing recognised ethical standards in the field. The intention was that this organisation would operate alongside similar industry-funded groups, including the likes of OpenAI, the AI Now Institute, doteveryone and the Center for Democracy & Technology.
Fortunately, progress is being made, but it must now be maintained and accelerated. Neither leaders in the field nor regulators can afford to shy away from confronting the elephants in the room.
Advancements are made through unity, not secrecy, meaning that collaboration between our institutions, the tech industry and cross-sections of society must be the norm, not the exception.
Yes, for businesses the technology may only serve as a means of assessing job applications or an individual’s suitability for a loan, but for humanity it also has the power to cure life-threatening diseases and revolutionise age-old industries in dire need of innovation.
With the technology’s societal, political and economic impact for years to come hinging on the careful calculation of its ethical parameters, concrete agreement upon them today will be fundamental to its long-term success tomorrow.