This Is Why We Can’t Have Nice Things: Technology Edition
If a new Microsoft project’s encounter with Internet trolling is any indication, maybe we’re not ready for the next wave of innovative technology. This post originally appeared on Inc.com.
Early Wednesday morning, a new Twitter user named Tay met the world. She was a Microsoft-designed experiment in artificial intelligence that would develop an increasingly human-like persona by engaging with Millennials online.
By early this morning, though, Tay had abruptly withdrawn from the world. It seems she learned too indiscriminately, repeating all sorts of language that had been directed at her, including racial epithets, political conspiracy theories, and all-caps Trumpisms. (Microsoft has since deleted the tweets, though The Guardian was kind enough to take screenshots.)
If you’re like me, your initial reaction to Tay was excitement. AI is cool. Chatbots are fun. It seemed like a neat experiment. I spent the afternoon chatting with Inc. columnist John Brandon about a post on Tay. His working headline was “This Sassy Twitter Chatbot by Microsoft Talks Like a Millennial…and It’s Awesome.” Ultimately, the headline became “Microsoft and the Rise of the Dumbots.”
Welcome to the Internet. Tay’s initial failure is a great example of how even the best-intentioned tech innovations can be taken in unexpected directions when exposed to a public audience. In a way, it’s a reflection of virtually any human interaction, though a slightly skewed one: The loudest voices aren’t always the most popular ones, but they’re the ones that most often get heard.
Here’s another example in the news this week: A British government agency, the Natural Environment Research Council, asked people to vote online for a name for its new $287 million polar research ship. The public’s overwhelming choice so far? The R.R.S. Boaty McBoatface.
The ongoing poll is less troubling than Tay’s lesson on racism, of course. For Tay, maybe a few bad apples are spoiling the bunch. That’s what I hope, anyway. Internet trolls love to mess with people (or chatbots), and they don’t seem to care when they cross a line; by hiding behind a screen name, they count on never facing serious retribution.
TO KEEP READING, HEAD OVER TO INC.COM: http://www.inc.com/cameron-albert-deitch/microsoft-artificial-intelligence-project-shows-why-we-cant-have-nice-things.html