Twitter: We Don't Alert Authorities To Terrorist Tweets

Twitter does not pro-actively alert authorities to terrorist content posted by users, a senior executive has said.

Nick Pickles, UK public policy manager at the microblogging site, suggested such material was often already visible to authorities because Twitter is a public platform.

It also emerged that Google is planning to introduce a scheme which could see those who put extremist-related entries into its search engine being shown anti-radicalisation links as part of attempts to highlight "counter-narratives".

Social media firms have come under scrutiny after Islamic State built up a vast online propaganda machine.

Officials estimate there are more than 50,000 Twitter accounts used by supporters of the terror group, also known as Daesh, Isil or Isis.

Appearing before the Commons Home Affairs Committee, Mr Pickles and representatives from Google and Facebook were asked about the thresholds they apply on notifying authorities about terrorist material identified by staff or users.

Labour MP Chuka Umunna asked: "What is the threshold beyond which you decide ... that you must pro-actively notify the law enforcement agencies?"

Dr Anthony House, of Google, and Simon Milner, of Facebook, said their threshold was "threat to life".

Mr Pickles said: "We don't pro-actively notify. One of the things... because Twitter's public, that content is available so often it's been seen already.

"Then law enforcement have established criteria to come to us and then request information."

He said proposals to introduce a legal requirement on sites in the US were not supported by authorities there.

"One of the reasons is, if we are taking down tens of thousands of accounts, that's a huge amount of information and we are not in a position to judge credibility of those threats," he said.

"So actually you may end up in a position where you swamp law enforcement with unwanted information."

Dr House disclosed details of upcoming projects to provide a "counter-narrative".

One pilot programme will aim to ensure that "when people put potentially damaging search terms into our search engine they also find this counter-narrative".

The committee heard that Twitter has removed tens of thousands of accounts in relation to violent extremism in the last year.

Chairman Keith Vaz asked how many people are in the sites' "hit squads" that monitor content.

He was told Twitter, which has 320 million users worldwide, has "more than 100" staff, while the Facebook and Google executives did not give a number.

Twitter was at the centre of controversy last year when it emerged it tips off users about requests for their account information "unless we're prohibited".

Mr Pickles stressed that decisions on whether to notify account holders were "context specific" and insisted the company works with authorities to ensure it does not disrupt investigations.

He said: "By our policy we allow ourselves to not notify a user where it is counter-productive.

"In the case of an ongoing counter-terrorism investigation, that would be a circumstance where we would not seek to provide user notification that a request for data had been made."

Mr Pickles said the site's policies "clearly prohibit" encouraging and promoting terrorism.

All three emphasised their companies' commitment to combating IS's online activities.

Dr House said: "It's of fundamental importance. We don't want our platform to be an unsafe place."

Mr Milner said Facebook has become a "hostile place" for IS, adding: "Keeping people safe is our number one priority. Isis is part of that but it's absolutely not the only extremist organisation or behaviour that we care about."

Mr Pickles said the issue is "taken seriously across the top ranks of the company".
