Google is planning a system in which people who enter extremist-related terms into its search engine are shown anti-radicalisation links, as part of attempts to highlight "counter-narratives".
The world's largest search engine is to run schemes showing the counter-narratives to anyone googling "potentially damaging" terms, to combat the huge online propaganda machine of groups such as the so-called Islamic State (IS), also called Isil, Isis or Daesh.
Google's Dr House told the MPs: “We are working on counter-narratives around the world. This year one of the things we’re looking at is we are running two pilot programmes.
MPs spoke to representatives of Google, Facebook and Twitter
“One is to make sure these types of views are more discoverable. The other is to make sure when people put potentially damaging search terms into our search engine they also find these counter narratives.”
Social media firms have come under scrutiny as IS has used their platforms to spread its message to impressionable young men in the West, thousands of whom have travelled to territory held by the group.
Nick Pickles, UK public policy manager at Twitter, told the committee the site does not pro-actively alert authorities to terrorist content posted by users.
Officials estimate there are more than 50,000 Twitter accounts used by supporters of IS.
Mr Pickles, Dr House and Facebook's Simon Milner were asked about the thresholds they apply on notifying authorities about terrorist material identified by their staff or users.
Labour MP Chuka Umunna asked: "What is the threshold beyond which you decide ... that you must pro-actively notify the law enforcement agencies?"
Dr House and Mr Milner said their threshold was "threat to life".
Mr Pickles said: "We don't pro-actively notify. One of the things... because Twitter's public, that content is available so often it's been seen already.
"Then law enforcement has established criteria to come to us and then request information."
He said proposals to introduce a legal requirement on sites in the US were not supported by authorities there.
"One of the reasons is, if we are taking down tens of thousands of accounts, that's a huge amount of information and we are not in a position to judge credibility of those threats," he said.
"So actually you may end up in a position where you swamp law enforcement with unwanted information."
The committee heard that Twitter has removed tens of thousands of accounts in relation to violent extremism in the last year.
Chairman Keith Vaz asked how many people are in the sites' "hit squads" that monitor content.
He was told Twitter, which has 320m users worldwide, has "more than 100" staff, while the Facebook and Google executives did not give a number.
Twitter was at the centre of controversy last year when it emerged it tips off users about requests for their account information "unless we're prohibited".
Mr Pickles stressed that decisions on whether to notify account holders were "context specific" and insisted they work with authorities to ensure they do not disrupt investigations.
He said: "By our policy we allow ourselves to not notify a user where it is counter-productive.
"In the case of an ongoing counter-terrorism investigation, that would be a circumstance where we would not seek to provide user notification that a request for data had been made."
Mr Pickles said the site's policies "clearly prohibit" encouraging and promoting terrorism.
All three emphasised their companies' commitment to combating IS's online activities.
Dr House said: "It's of fundamental importance. We don't want our platform to be an unsafe place."
Mr Milner said Facebook has become a "hostile place" for IS, adding: "Keeping people safe is our number one priority. Isis is part of that but it's absolutely not the only extremist organisation or behaviour that we care about."
Mr Pickles said the issue is "taken seriously across the top ranks of the company".