We Built A Twitter Bot That Replies To People Who Share Fake News

Jan Koenig

There have been a lot of people talking about "fake news" and misinformation since the election in November. Could an increase in "filter bubbles" and the deliberate spread of misleading information be the reason for the growing divide in this and other countries? Many people believe so. Fake articles even appear to be more viral than real ones. Here's some more information about the debate, in case you want to catch up: Key Moments in the Fake News Debate.

I never wanted to join this discussion. Sometimes I feel like our tech/media bubble is the only bubble that's currently caring about such things as filter bubbles. And others just won't see it and won't care (more on that later). However, as product people, my co-founder Alexander and I were interested in current solutions people suggested could solve this problem.

Is there anything we can build to prevent this in the future?

Shortly after the election, people started collecting tons of ideas. For example, in this crowd-sourced document, many people came together to brainstorm proposals across more than 150 pages. That's impressive. However, it's hard to take action based on such a huge amount of information.

Some ideas seem a little bit easier to build:

Something that flags fake news? That seems like a simple and helpful solution. And soon, the first products of this kind flooded in. Extensions like the BS Detector, a Fake News Monitor, and the Media Bias Fact Check Icon quickly appeared in the Chrome Web Store.

BS Detector

However, with the growing number of apparent solutions to a very multifaceted problem, many people started raising concerns.

Here are two problems that come up most often:

• The curation of "fake news" sources could be (borderline) censorship

• Solutions like Chrome extensions reach the wrong people (those who are unlikely to share fake news anyway)

1) Are current solutions promoting censorship?

Who decides if a news source is trustworthy? Very often, it's difficult to determine right or wrong. Where's the line between fake, satire, opinion, and conspiracy?

Lists that previously tried to call out fake news sources drew a lot of criticism for writing off everything that doesn't match their creators' political beliefs as fake. Products built on top of such lists will always raise concerns.

Another questionable point about some products is the way they treat apparent fake news sources. Solutions that simply block news they disagree with amount to censorship, isolating "filter bubbles" even more.

2) Are current solutions addressing the wrong people?

Here's the thing: if you install a Chrome extension that flags fake news articles, chances are you're already very aware of the problem, which means you're not really the target user group. Products should be designed for the people who spread the news without knowing it's fake. I can only speak from my experience in Germany, but usually, those aren't the people who hang out on Twitter all day, think about "filter bubbles," and install Chrome extensions.

So the problem becomes: how do we make sure that information about misinformation reaches the right people, the ones who are unaware of the problem?

This is what we focused on:

Our experiment: HoaxBot

Two weeks ago, in a Friday afternoon hack session, we came up with the idea to build a simple, friendly Twitter bot that replies to people who tweet links to fake news articles.

A Twitter bot is an application that produces automated tweets based on certain criteria. Bots are often considered spammy, but they can also be quite useful in some cases. Here's a list of Twitter bots for inspiration.
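
To make this concrete: we haven't included HoaxBot's actual source here, but a minimal bot takes only a few lines. The sketch below assumes Python with the tweepy library (v3-style API); the credential strings are placeholders you'd get from a Twitter developer account.

```python
# Minimal Twitter bot sketch, assuming the tweepy library (v3-style API).
# The credential strings are placeholders from a Twitter developer account.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Post a single automated tweet -- everything a bot does builds on calls like this.
api.update_status("Hello! I'm a friendly bot.")
```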

HoaxBot was designed to affect two types of users:

1) People who share content from questionable sources get an instant notification when the bot mentions them in its reply

2) People who see the shared link will hopefully notice the first reply in the thread and not spread it further

It was just an experiment. We knew it would be difficult to decide which news sources to use, so we decided to use 21 sites from this list that were flagged as obviously misleading (not satire, not opinion, just fakes and hoaxes).
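
To illustrate how a shared link can be checked against such a list, here's a small sketch; the domains below are placeholders, not the actual 21 sites we used.

```python
# Sketch of a blocklist check. The domains are placeholders, not the
# actual 21 sites HoaxBot used.
from urllib.parse import urlparse

FLAGGED_DOMAINS = {"example-hoax-site.com", "fake-news-example.net"}

def is_flagged(url: str) -> bool:
    """Return True if the URL's host is on the blocklist."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):  # treat www.example.com like example.com
        host = host[4:]
    return host in FLAGGED_DOMAINS
```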

HoaxBot searched Twitter for those domains and replied to people who shared articles from them. Our goal was to be as friendly as possible, as we didn't want anyone to feel offended when we told them an article was fake.
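
Continuing the sketches above (same tweepy assumptions, same placeholder domain list), the core search-and-reply loop could look roughly like this; the reply text is illustrative, not our exact wording:

```python
# Rough sketch of the search-and-reply loop, reusing `api` and
# FLAGGED_DOMAINS from the sketches above. The reply text is illustrative.
REPLY = ("Hi! The article you shared comes from a site known for hoaxes. "
         "You may want to double-check it before sharing. Have a nice day!")

for domain in FLAGGED_DOMAINS:
    # Twitter's search also matches tweets that merely link to the domain.
    for tweet in api.search(q=domain, count=10):
        try:
            api.update_status(
                # The @mention is required for the reply to thread correctly,
                # and it triggers the instant notification mentioned above.
                status="@{} {}".format(tweet.user.screen_name, REPLY),
                in_reply_to_status_id=tweet.id,
            )
        except tweepy.TweepError:
            continue  # skip deleted/protected tweets and rate-limit errors
```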

And guess what: for at least some people, it was helpful. For others, not at all.

Twitter didn't like it

We liked this experiment, and although it had its glitches, we were excited to see more users' reactions to our replies. However, not even 30 tweets into HoaxBot's career, Twitter decided to cut us off from its write API.

Why? Randomly tweeting at people is considered spam by Twitter. However, this happened so fast that I believe several accounts that spread this news on purpose didn't like our bot and reported it.

To conclude, this was a fun experiment with very, very low impact. And we're actually glad Twitter took steps early: we knew our selection of news sources was far from complete, and in the medium term, our own curation would have been just as questionable as other tools'.

By the way: we're happy that Facebook is now working on a feature that addresses parts of the problems mentioned above. However, people are already raising concerns.

This blog originally appeared here.