A day after going online, Microsoft had to pull the plug on its AI chatbot, Tay. The bot began making racist comments and talking about Hitler. Of course, racism was not built into Tay, but because its responses are shaped by its conversations with human users, things quickly devolved.
Microsoft is tweaking it (her?*) and will bring Tay back online soon. I imagine the digital lobotomy that they perform will result in the kind of obnoxious default refusals that most voice assistants exhibit.
“Alexa, f*!k you.”
“That’s not a very nice thing to say.”
The tradeoff with this adjustment is that Tay may no longer pick up on the subtler shades of sarcastic conversation. That's probably a good thing. I'd rather have my AI be a bit dumber than prone to influence from the awful parts of the web.
Which brings me to the larger issue. As long as people are awful – or at least play awful people on the Internet – we increase the odds of creating robots that want to destroy us. Since we have solidly established that the awfulness isn't going away anytime soon, we either need dumb AI or need to start preparing our tech-free bomb shelters.
* Some other day we can have a conversation about assigning a gender to our bots and how that plays out in the media.
Edit: After posting this, I listened to Motherboard’s great podcast episode, “Two Tales of AI.” I highly recommend it.