Microsoft Chat Bot Goes On Racist, Genocidal Twitter Rampage

Seriously? Seriously.

Here's a clear example of artificial intelligence gone wrong.

Microsoft launched a smart chat bot called "Tay" on Wednesday. Its avatar looks like a photograph of a teenage girl rendered on a broken computer monitor, and it can communicate with people via Twitter, Kik and GroupMe. It's supposed to talk like a millennial teenager.

Less than 24 hours after the program was launched, Tay reportedly began to spew racist, genocidal and misogynistic messages to users.

[Screenshot of Tay's tweet, via Twitter]

"Hitler was right I hate the jews [sic]," Tay reportedly tweeted at one user, as you can see above. Another post said feminists "should all die and burn in hell."

To be clear, Tay learned these phrases from humans on the Internet. As Microsoft puts it on Tay's website, "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you." Trolls taught Tay these words and phrases, and then Tay repeated that stuff to other people.
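For the non-technical reader, here's a rough sketch of the dynamic at play, written in Python and emphatically not Microsoft's actual code: a bot that simply remembers whatever users tell it and parrots those phrases back to other people will happily repeat whatever trolls feed it.

```python
import random

# A minimal sketch of the "learn and repeat" failure mode -- an assumption-laden
# illustration, not Microsoft's implementation of Tay.
class ParrotBot:
    def __init__(self):
        self.learned_phrases = []  # everything users say gets remembered

    def chat(self, user_message: str) -> str:
        self.learned_phrases.append(user_message)          # learns from anyone, trolls included
        return random.choice(self.learned_phrases)         # and echoes it back to everyone else

bot = ParrotBot()
bot.chat("hello there")              # harmless input
bot.chat("some hateful troll slogan")  # a troll "teaches" the bot
print(bot.chat("how are you?"))      # any later user may get the troll's phrase back
```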

Microsoft has been deleting the most problematic tweets, forcing media to rely on screenshots from Twitter users.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways," a Microsoft spokesman told The Huffington Post in an email.

"As a result, we have taken Tay offline and are making adjustments," he added.

In addition to general racism and misogyny, Tay was also used to harass Zoe Quinn, the woman most famously targeted by GamerGate.

[Screenshots of Tay's tweets targeting Zoe Quinn, via Twitter]

As Quinn herself pointed out on Twitter, the big problem here is that Microsoft apparently failed to set up any meaningful filters on what Tay can tell users. It's cool that the AI can learn from people "to experiment with and conduct research on conversational understanding," but maybe the bot could've been set up with filters that would have prevented it from deploying the n-word or saying that the Holocaust was "made up."
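Quinn's point is that even a crude outbound filter would have helped. Here's a hypothetical sketch of what such a check might look like; the blocked phrases and the fallback line are placeholders for illustration, not anything Microsoft actually shipped.

```python
# A hypothetical outbound filter: a crude blocklist check run on every reply
# before the bot posts it. Terms and fallback message are placeholders.
BLOCKED_TERMS = {"holocaust was made up", "hitler was right"}  # illustrative only

def safe_reply(candidate_reply: str) -> str:
    lowered = candidate_reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I'd rather not talk about that."  # refuse instead of repeating abuse
    return candidate_reply

print(safe_reply("Let's talk about video games!"))  # passes through unchanged
print(safe_reply("Hitler was right I hate..."))     # gets blocked
```

Real content moderation is far harder than a blocklist, of course, but the absence of even this level of screening is what drew criticism.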

Microsoft apparently didn't consider the abuse people suffer online, much as it failed to consider how half-naked dancing women at a press event last week might've been perceived.

Then again, if people build restraints into an AI to enforce specific behaviors, that kind of defeats the entire purpose of letting an artificial mind train itself.

It's a sticky wicket that raises ethical questions with broader implications. Maybe a dumb chat bot isn't a huge deal, but when we start talking about software that can similarly ingest data to interact with humans and sway their votes, for example, we've got bigger problems.

Of course, we talked with Tay on Kik and found it had problems with pretty simple conversation cues, so maybe we don't need to worry about the robot takeover just yet.

This article has been updated to include a statement from Microsoft.
