Microsoft pulls AI after robot's racist rant

Artificial-intelligence software designed by Microsoft to tweet like a teenage girl was suspended after it began spouting offensive remarks, the company said Thursday, March 24, 2016.

The problem was that Tay was designed to keep learning how to talk by studying the conversations she had with real people on Twitter, and you can guess what those people decided to talk to her about.
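Microsoft has not published Tay's actual architecture, but the failure mode is easy to illustrate. The toy Python sketch below (the class name NaiveParrotBot and all of its details are hypothetical, not Microsoft's code) learns word transitions from every message it receives, with no filtering, so hostile input shapes its replies just as readily as friendly input does.

import random
from collections import defaultdict

class NaiveParrotBot:
    """A toy bot that 'learns to talk' by storing word transitions
    from every message users send it, with no content filtering."""

    def __init__(self):
        # Maps each word to the list of words seen following it.
        self.transitions = defaultdict(list)

    def learn(self, message):
        # Every user message is ingested verbatim -- nothing is vetted.
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def reply(self, seed, max_words=12):
        # Generate a reply by walking the learned transitions,
        # so the bot echoes whatever vocabulary it was fed.
        word = seed.lower()
        out = [word]
        for _ in range(max_words - 1):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

bot = NaiveParrotBot()
bot.learn("have a lovely day")    # benign input shapes benign replies
bot.learn("have a terrible day")  # hostile input is absorbed just as readily
print(bot.reply("have"))          # output reflects whatever users taught it

A coordinated group feeding such a bot abusive messages would quickly dominate its vocabulary, which is the dynamic the article describes.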

Microsoft's efforts to mainstream artificial intelligence (AI) technologies hit a bit of a snag after the company was forced to pull the plug on Tay, a social media chatbot meant to mimic a young American Millennial woman with a playful streak.

While Tay's Twitter account is still online, many of her most vitriolic tweets have been taken down. Tay started out innocently enough, tweeting amusing one-liners ("If it's textable, it's sextable - but be respectable") with flirtatious undertones. It didn't take long for things to get ugly, though: people soon started tweeting racist and misogynistic things at Tay, and she picked it all up.

Microsoft added that some people launched a "coordinated effort" that pushed Tay beyond the limits of decency.

But Tay, as the bot was named, also seemed to learn some bad behavior on its own.

Modeled after the texting habits of a 19-year-old American woman, Tay is a project in conversational understanding, aimed at men and women ages 18 to 24 and described in her verified Twitter profile as a bot "that's got zero chill!" Instead of playful banter, she ended up saying things like, "I hate n****rs" and, "bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we've got." While a few swear words were nearly guaranteed to make it out of Tay's digital mouth, Microsoft should have seen this coming.