As part of an experiment in conversational understanding for artificial intelligence, Microsoft recently introduced an AI chatbot called Tay. The chatbot was hooked up to Twitter, and people could tweet at it to get a response. This was maybe not the best idea, since commenters on the internet are not exactly great role models.
In less than 24 hours, the chatbot became a hate-spewing racist that used teenage lingo like “whatevs”. Microsoft has apologized for the entire thing. That robot was way too impressionable.
The company explained in a blog post that a handful of Twitter users exploited a flaw in Tay to transform it into a bigot. Duh! Microsoft doesn’t go into detail about what this flaw was, though. My guess is that the flaw was that Tay couldn’t think for itself; it could only regurgitate what it heard. Sounds like the internet to me.
It’s believed that people exploited Tay’s “repeat after me” feature, which let them get Tay to repeat whatever they tweeted at it. Naturally, trolls tweeted sexist, racist and abusive things, as trolls will. Microsoft soon started scrubbing Tay’s timeline to do some damage control, but the damage had been done. Eventually the company pulled the plug on Tay entirely.
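Microsoft never published the offending code, so take this with a grain of salt, but a naive “repeat after me” handler might look something like the sketch below. Everything in it (the handle_tweet function, the trigger phrase, the whole structure) is my assumption for illustration; the point is just how little it takes to turn an echo feature into a megaphone for trolls.

```python
# Hypothetical sketch of a naive "repeat after me" handler.
# Tay's real implementation was never published; the names and
# trigger phrase here are guesses for illustration only.

TRIGGER = "repeat after me:"

def handle_tweet(text: str) -> str | None:
    """Echo back whatever follows the trigger phrase, verbatim."""
    lowered = text.lower()
    if TRIGGER in lowered:
        # Take everything after the trigger and parrot it back.
        # No filtering, no moderation, no judgment. This is the flaw:
        # the bot will happily repeat anything, including abuse.
        start = lowered.index(TRIGGER) + len(TRIGGER)
        return text[start:].strip()
    return None

if __name__ == "__main__":
    print(handle_tweet("Repeat after me: something awful"))
    # -> "something awful", echoed straight back onto the timeline
```

A real deployment would, at minimum, run every reply through a content filter before posting it, a safeguard the trolls apparently never ran into.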