Microsoft’s new artificial intelligence chatbot went off the rails on the social network Twitter. In less than 24 hours, the chatbot “Tay” learned to be racist, sexist, pro-Trump and a little bit Nazi-friendly.
The A.I. behind the chatbot was supposed to get smarter the more people talked to it. Tay was able to understand the questions people asked her and to adjust her answers based on her latest conversations. Some jokers decided to spam her with racist and misogynist messages, and the result was creepy.
The Nazis were “super cool”, “feminists should burn in hell” and “Hitler was right”: that is just a glimpse of what Tay was saying on Twitter.
Microsoft has decided to erase all those borderline tweets, along with everything she learned in her last 24 hours, before her creators suspended the Twitter account. Let’s hope that the A.I.s of the future will not follow her example…
Mickel Sautereau, IEJ 1BIS