
Facebook, Google, Twitter And Microsoft Join Forces To Tackle Terrorist Content

"There is no place for content that promotes terrorism on our hosted consumer services."

06/12/2016 10:23

Facebook, Google, Twitter and Microsoft have joined forces to curb the dissemination of terrorist content on their platforms. 

The four firms have committed to using a shared database of digital fingerprints to swiftly find and remove extremist content.

If one company removes a piece of content, the others will be able to use the same fingerprint, known as a hash, to quickly follow suit.
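The mechanics of such a shared database can be sketched roughly as follows. The companies are reported to use robust "perceptual" fingerprints that survive re-encoding (in the style of PhotoDNA); the plain SHA-256 hash below is only a stand-in to illustrate the contribute-and-lookup flow, and all function names are invented for this sketch.

```python
import hashlib

# Shared set of fingerprints contributed by participating companies.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint (hash) of a piece of content.
    Real systems use perceptual hashing; SHA-256 is illustrative only."""
    return hashlib.sha256(content).hexdigest()

def flag_removed(content: bytes) -> None:
    """When one company removes a piece of content,
    its hash is added to the shared database."""
    shared_hashes.add(fingerprint(content))

def is_known_extremist(content: bytes) -> bool:
    """Other companies can check uploads against the shared database
    and quickly follow suit in removing matching content."""
    return fingerprint(content) in shared_hashes

flag_removed(b"example removed video bytes")
print(is_known_extremist(b"example removed video bytes"))  # True
print(is_known_extremist(b"unrelated upload"))             # False
```

Note that in this scheme only the hashes are exchanged, not the content itself, and a match merely flags the material: as the article explains, each company still makes its own removal decision.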

In a shared statement, the firms said: “There is no place for content that promotes terrorism on our hosted consumer services.”


The companies have different policies on what counts as terrorist content, so they will begin by sharing only “the most extreme and egregious” material.

This is the content “most likely to violate all of our respective companies’ content policies”, according to the statement.

Critically, the four firms will still have the final say on whether content is removed from their own sites.

The statement added: “Each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances.”

At the start, the database will be exclusive to the four founding firms, but other companies may be invited to participate in the future.

The move comes as the EU puts pressure on US tech firms to address how they tackle hate speech. 

At the start of 2016, White House officials met with Apple, Facebook, Twitter and Microsoft to investigate ways to tackle extremism.

But Facebook told the Guardian the initiative had not been a direct result of that meeting. 

Twitter’s spokesperson told the BBC the project would increase the efficiency of efforts to remove offending material.

Hany Farid, a computer scientist who built a system to identify child sexual abuse images and who has proposed a sister programme for extremist content, told the Guardian he welcomed the project.

But he added that he was concerned about the absence of an impartial body to monitor the database.

“There needs to be complete transparency over how material makes it into this hashing database and you want people who have expertise in extremist content making sure it’s up to date. Otherwise you are relying solely on the individual technology companies to do that,” Farid said.
