Theresa May's Target Of Two Hours To Remove Extremist Content Slammed As 'Unreasonable' By Experts

Relying completely on algorithms isn't the answer just yet.

A proposal from Theresa May for internet companies to remove online extremist content within two hours of it being posted has been criticised as “unreasonable” by experts.

The Prime Minister is to broach the subject of online terrorist propaganda in her keynote speech to the United Nations General Assembly in New York on Wednesday.

May, who has previously accused big internet companies of giving terrorist ideology “the safe space it needs to breed”, is joining with France and Italy in demanding that technology companies go “further and faster”.

Tech companies currently rely on a combination of thousands of human reviewers, known as labellers, and machine-learning systems to flag and then remove illegal or extremist content.
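
To illustrate roughly how such a pipeline fits together (this is a minimal sketch, not a description of any company's actual system; the data, model and threshold here are all invented), a classifier can be trained on examples labelled by human reviewers and then used to score new posts:

```python
# Hypothetical sketch: human labellers supply training examples, a
# classifier learns from them, and new posts are scored for review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled data standing in for the work of human labellers
# (1 = extremist/illegal, 0 = benign); real systems train on millions
# of examples across many languages and media types.
posts = [
    "join our cause and carry out an attack",
    "great recipe for banana bread",
    "instructions for building an explosive device",
    "match highlights from tonight's game",
]
labels = [1, 0, 1, 0]

vectoriser = TfidfVectorizer()
features = vectoriser.fit_transform(posts)
model = LogisticRegression().fit(features, labels)

def flag_for_review(post, threshold=0.5):
    """Return True if the classifier thinks the post needs attention."""
    score = model.predict_proba(vectoriser.transform([post]))[0, 1]
    return score >= threshold
```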

May has previously accused internet giants of giving terrorist ideology a 'safe space it needs to breed'
Brendan McDermid / Reuters

But a number of experts in artificial intelligence and machine learning have cast doubt on May’s takedown target.

University of Cambridge professor Zoubin Ghahramani called the current system “challenging”.

“Although I am sure the current algorithms can be improved and made faster,” he told HuffPost UK, “it’s unreasonable to expect that they will be able to remove all such content within a few hours.”

Despite tech companies hiring thousands of content labellers, politicians have continued to call on artificial intelligence and machine learning to provide a more robust solution to the problem.

However, machine learning expert Shimon Whiteson from Oxford University is not convinced that artificial intelligence alone can solve the problem.

“The ability of computers to understand natural language content remains quite limited,” he says. “So I am skeptical that any tech company can reliably identify extremist content in an automatic way, regardless of how much time they have to do it.

“In addition, any automatic filter they put in place can probably be quickly foiled by those determined to evade it.”

Facebook recently announced it was hiring 3,000 extra moderators to help sift through content that breaches its terms of use
Ralph Orlowski / Reuters

Humans will still provide part of the answer, according to UCL’s Dr Emine Yilmaz, who believes a combination of machine learning and trained experts could help tech companies reach the two-hour target.

“I think that with the current technology it is doable,” she says. “Not right away as it would take us some time. Right now the algorithms are smart enough and capable of doing really interesting tasks with high accuracy.

“I would expect them to make some errors, and this is a problem where recall is very important because you wouldn’t want to miss any content, so your algorithm should identify all content.”
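
Recall, the measure Yilmaz refers to, is the share of genuinely extremist posts that the system actually catches: true positives divided by true positives plus false negatives. With invented numbers:

```python
# Recall = true positives / (true positives + false negatives).
# Invented figures: of 1,000 genuinely extremist posts, the system
# flags 940 and misses 60.
true_positives = 940
false_negatives = 60
recall = true_positives / (true_positives + false_negatives)
print(f"recall = {recall:.0%}")  # recall = 94%, so 60 posts slip through
```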

She believes the biggest hurdle to overcome in meeting this target is creating a system that combines both speed and accuracy.

“The main bottleneck here is the checking,” she says. “Making sure that machine learning is not making big errors.”

To do this, companies will need to hire even more human experts who can double-check the decisions being made by machine learning algorithms.
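
One common pattern for this kind of double-checking, offered here as a sketch rather than any company's actual workflow, is confidence-based routing: the system acts automatically only on its most confident calls and queues borderline cases for a human moderator. The thresholds below are assumptions:

```python
# Hypothetical confidence-based routing: auto-remove only very confident
# predictions; send borderline cases to human moderators for review.
AUTO_REMOVE = 0.98   # assumed threshold for automatic takedown
NEEDS_REVIEW = 0.50  # assumed threshold for queueing a human check

def route(score):
    """Decide what happens to a post given the model's extremism score."""
    if score >= AUTO_REMOVE:
        return "remove"        # high confidence: take down immediately
    if score >= NEEDS_REVIEW:
        return "human_review"  # uncertain: a moderator double-checks
    return "keep"              # low score: leave the post up
```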

Even though this slows down the process, she believes that, given the resources Facebook and Google have, the two-hour target is still “doable”.

Earlier this week a leading think tank argued that companies like Google, Facebook and Twitter should be fined if they fail to remove extremist and terrorist content, warning that their progress in stopping it has been “glacial”.

Policy Exchange said a regulator should have the power to punish the UK subsidiaries of tech giants that inadvertently host terrorist material, such as propaganda and instructions on how to carry out attacks, just as Ofcom can fine broadcasters.

Policy Exchange’s report follows the Parsons Green attack last week, when an improvised explosive device was set off on a packed rush hour train. The device was reportedly built with help from online instructions.
