Microsoft has just released an AI chatbot, named Tay, that was supposed to become smarter as people talked to it on social media. The results? Not so great. Within 24 hours, the bot had turned into a racist, Nazi-loving meanie. It's a sad piece of news for humanity: what was hoped to be a social experiment in which an AI learns from people instead showed the true face of internet trolling.
The bot was created to help researchers develop Artificial Intelligence further and see how it interacts with people. Microsoft also aimed to develop "conversational understanding" of how an AI can talk to people in a natural way (Chapman, 2016). The bot was active mainly on Twitter, but Microsoft also placed it on Kik and GroupMe, two messaging platforms. It was intended to interact mainly with young social media users aged 18-24, so that it could learn "Millennial talk," including emojis and a particular conversational tone.
Unfortunately, what Tay's creators didn't predict was the willingness of some people to teach Tay things most of us humans are not proud of, including racism, adoration of Donald Trump, and praise of Hitler. Within 24 hours, the chatbot went from friendly to rogue.