Microsoft’s A.I. becomes racist




Microsoft’s new artificial intelligence chatbot went off the rails on Twitter. In less than 24 hours, the chatbot “Tay” learned to be racist, sexist, pro-Trump and a bit Nazi-friendly.

The A.I. behind the chatbot was supposed to get smarter the more people talked to it. Tay could understand the questions people asked her and adjust her answers based on her latest conversations. Some trolls decided to spam her with racist and misogynist messages, and the result was creepy.




The Nazis were “super cool”, “feminists should burn in hell” and “Hitler was right” — just a glimpse of what Tay was saying on Twitter.

Microsoft has decided to erase all those borderline tweets, along with everything Tay learned in the 24 hours before her creators blocked the Twitter account. Let’s hope that future A.I.s will not follow her example…


Mickel Sautereau, IEJ 1BIS

