How Will Technology Defeat Post-Truth Politics?

Democracy, fragile as it is, is held together not only through the body of laws that define democratic governance but also through the activities of a free press, which can hold power to account.

But what happens when the news media (particularly online) becomes a dizzying tsunami of stories - some true, some manipulating the truth and some completely fabricated? And what happens when political figures become perpetrators in the dissemination of lies? For example, here is Donald Trump's litany of outright lies (or post-truths, as they have now been euphemistically labelled).

Fact checkers must play a greater role in the post-truth landscape by vetting news for inaccuracies. But manual fact checking is far from optimal. Not only is it subject to its own biases, but it is also unable to keep up with the sheer volume of content posted on the web every minute.

So can computers detect fact from fiction? Unfortunately, there isn't an AI algorithm for that just yet, but software can aid the process of fact checking.

There are two different approaches to determining whether a story is true or false. The first is to study the metadata surrounding a story, which is often a proxy for how reliable the content of the story is. The metadata in question could be anything from the publication in which a story appears to the way in which the story is shared or engaged with on social media.

This approach can end up being too simplistic. For example, New York Magazine's Brian Feldman put together a Google Chrome extension which produces a pop-up when users visit websites that are deemed unreliable (based on a list of websites compiled by Melissa Zimdars, a communication and media professor from Merrimack College in Massachusetts). This solution, however, lacks rigour and will end up with too much factually correct news being labelled as fake.
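
To see why, here is a rough sketch of the metadata-only approach in Python: a story is flagged purely because its source domain appears on a hand-compiled blocklist. The domain names below are invented placeholders, not entries from Zimdars' actual list.

```python
from urllib.parse import urlparse

# Hypothetical, hand-compiled list of domains deemed unreliable.
# A real list (like the one behind the Chrome extension) would be far
# longer and would need constant maintenance.
UNRELIABLE_DOMAINS = {
    "example-fake-news.com",
    "totally-real-stories.net",
}

def flag_by_source(url: str) -> bool:
    """Return True if the story's source domain is on the blocklist."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]  # treat www.example.com and example.com alike
    return domain in UNRELIABLE_DOMAINS

print(flag_by_source("https://www.example-fake-news.com/shock-story"))    # True
print(flag_by_source("https://www.example-newspaper.com/budget-report"))  # False
```

The weakness is plain: every story on a listed domain is flagged regardless of what it actually says, while any fabricated story on an unlisted domain sails through.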

Recently, social media companies have also been under pressure to improve the quality of their content. Facebook, which came under fire over the spread of bogus news during the 2016 US presidential election, is now making changes to its news algorithms. In a recent blog post, Will Cathcart, VP of Product Management, announced "an improved system to determine what is trending". Trending topics are no longer extracted from a single viral post. Instead, the algorithm will also consider other factors such as the number of publishers also publishing about the same topic and the level of engagement on these posts. In the last week, Facebook has also started explicitly labelling a post as disputed when its content is deemed bogus.
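
Facebook has not published the algorithm itself, but the factors Cathcart describes can be illustrated with a toy scoring function: a topic only trends once several distinct publishers cover it, and its score blends publisher count with engagement. The weights and threshold below are invented for illustration, not Facebook's actual values.

```python
from dataclasses import dataclass

@dataclass
class Post:
    publisher: str
    topic: str
    engagements: int  # likes, shares and comments combined

def trending_score(posts: list[Post], topic: str,
                   min_publishers: int = 3,
                   publisher_weight: float = 100.0) -> float:
    """Toy trending score: require coverage by several publishers,
    then blend publisher count with total engagement."""
    relevant = [p for p in posts if p.topic == topic]
    publishers = {p.publisher for p in relevant}
    if len(publishers) < min_publishers:
        return 0.0  # a single viral post is no longer enough
    total_engagement = sum(p.engagements for p in relevant)
    return publisher_weight * len(publishers) + total_engagement

posts = [
    Post("Herald", "election", 5000),
    Post("Tribune", "election", 1200),
    Post("Gazette", "election", 800),
    Post("ViralBlog", "miracle-cure", 90000),  # one publisher, huge engagement
]
print(trending_score(posts, "election"))      # trends: covered by several publishers
print(trending_score(posts, "miracle-cure"))  # 0.0: only one publisher
```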

The second, more comprehensive approach to checking facts is more direct and studies the internal logic of a story. It involves extracting various claims from a story and matching them against known facts. In an ideal world, we would have a structured database of all the facts in the universe and could instantly parse the claims in a story into a structured, machine-readable format and validate them against that database.
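
As a minimal sketch of that ideal, assume claims have already been reduced to subject-predicate-object triples (in practice the hardest step) and look them up in a small in-memory fact store; the facts themselves are illustrative.

```python
# A toy "database of facts" as subject-predicate-object triples.
# A real system would query a large knowledge graph instead.
FACTS = {
    ("France", "capital", "Paris"),
    ("Water", "boiling_point_celsius", "100"),
}

def check_claim(subject: str, predicate: str, value: str) -> str:
    """Validate a structured claim against the fact store."""
    if (subject, predicate, value) in FACTS:
        return "supported"
    # If we hold a different value for the same subject and predicate,
    # the claim contradicts a known fact.
    if any(s == subject and p == predicate for s, p, _ in FACTS):
        return "contradicted"
    return "unknown"  # the database has nothing to say either way

print(check_claim("France", "capital", "Paris"))  # supported
print(check_claim("France", "capital", "Lyon"))   # contradicted
print(check_claim("Mars", "capital", "Olympus"))  # unknown
```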

We are far from such a comprehensive single database, and it is still difficult to translate human speech or text into a structured format, but many new organisations are sprouting up to address this challenge. Google Europe's new Digital News Initiative has funded many of these organisations from its €150 million innovation fund.

Factmata, which received a €50,000 grant from Google, is using natural language processing and machine intelligence to identify claims in text by extracting named entities, working out which economic statistics the claims relate to and verifying whether they are fact-checkable. Factmata then checks these claims against a knowledge database built on Freebase, the collaborative knowledge base whose data now feeds Google's Knowledge Graph.
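
Factmata's pipeline is not public, but its general shape - spot a sentence making a checkable statistical claim, work out which statistic it refers to, and compare the number against a reference value - can be sketched with a simple pattern matcher. The regular expression, statistic name and reference figure below are all invented for illustration.

```python
import re

# Hypothetical reference value; a real system would pull this from a
# knowledge base (such as one built on Freebase) or official statistics.
REFERENCE = {
    "unemployment_rate_percent": 4.9,
}

# Very crude pattern for claims like "unemployment is 42%".
CLAIM_PATTERN = re.compile(
    r"unemployment (?:rate )?(?:is|stands at) (\d+(?:\.\d+)?)\s*%",
    re.IGNORECASE,
)

def check_statistic_claim(sentence: str, tolerance: float = 0.5):
    """Return (claimed, reference, verdict) if the sentence contains a
    checkable unemployment-rate claim, otherwise None."""
    match = CLAIM_PATTERN.search(sentence)
    if not match:
        return None  # not fact-checkable by this pattern
    claimed = float(match.group(1))
    reference = REFERENCE["unemployment_rate_percent"]
    verdict = "consistent" if abs(claimed - reference) <= tolerance else "inconsistent"
    return claimed, reference, verdict

print(check_statistic_claim("The unemployment rate is 4.9% this quarter."))
print(check_statistic_claim("Unemployment stands at 42%."))
print(check_statistic_claim("The economy is doing fine."))
```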

Automated fact checking is in its infancy. Right now, only the simplest statements can be fact checked by computers. But in reality, not everything can be neatly divided into the simple binaries of right and wrong. Often, statements made by public figures rely on numerous assumptions and lie in the muddy grey area between fact and fiction.

But once machines can easily check the more basic claims made in the news, humans can focus on the more subjective job of helping people make rational judgements about, and interpretations of, what they see and read in the media.
