Tay, Microsoft’s recent foray into social media chatbot technology, quickly became an offensive embarrassment for the company.
Microsoft’s latest experiment in AI, a chatbot called Tay, went live on social media recently. Almost as soon as the public became aware of the launch, however, Microsoft was forced to take Tay offline: she had apparently picked up some bad habits from the crowd she was interacting with.
Tay went online on Wednesday, March 23, as part of an experiment by Microsoft to build “a chatbot created for 18- to 24-year-olds in the US.” Microsoft already has a similar AI chatbot in China, called XiaoIce, which is used by more than 40 million people. The hope was that Tay would fill a similar niche in the US. Tay’s first days were supposed to teach her how to talk like the target audience in much the same way we all learn: through example. Tay took in the tweets and posts on her social media sites and, at first, simply regurgitated them. She soon learned to do more than simple direct repetition, but by then the damage was done.
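To get a feel for why learning purely by example goes wrong, here’s a deliberately tiny sketch of a “repeat what you’ve heard” bot. Everything here (the `EchoBot` class, its methods) is hypothetical illustration, not Microsoft’s actual architecture: the point is just that a bot with no opinions of its own can only reflect whatever its audience feeds it.

```python
import random
from collections import defaultdict

def _tokens(text):
    """Lowercase a post and strip trailing punctuation from each word."""
    return [w.strip("?!.,") for w in text.lower().split()]

class EchoBot:
    """Toy learn-by-example bot: it stores phrases it has seen and
    replies by repeating one that shares a word with the prompt."""

    def __init__(self):
        self.seen = defaultdict(list)  # word -> phrases containing it

    def learn(self, post):
        for word in _tokens(post):
            self.seen[word].append(post)

    def reply(self, prompt):
        # Echo back a remembered phrase that overlaps with the prompt.
        for word in _tokens(prompt):
            if self.seen[word]:
                return random.choice(self.seen[word])
        return "tell me more!"

bot = EchoBot()
bot.learn("cats are great")
print(bot.reply("what about cats?"))  # -> "cats are great"
```

Feed this bot kind words and it sounds kind; feed it venom and it spits venom back. Nothing in the code distinguishes the two.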
Tay was bombarded by posts with content ranging from the wacky to the hateful, from disparaging remarks about political candidates (“Hillary Clinton is a lizard person”) to bald-faced racism (which we won’t repeat here). While Microsoft had worked on filters to prevent Tay from picking up this sort of bad language, these obviously weren’t entirely successful. Within roughly 12 hours of Tay’s launch, she had become unabashedly foul-mouthed.
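Microsoft hasn’t said how its filters worked, but a common and fragile approach is a keyword blocklist. This hypothetical sketch (the `BLOCKLIST` set and `is_clean` function are made up for illustration) shows why such filters are easy to slip past:

```python
import re

# Stand-ins for real offensive terms.
BLOCKLIST = {"badword", "slur"}

def is_clean(post):
    """Naive keyword filter: reject any post containing a blocked word."""
    words = re.findall(r"[a-z]+", post.lower())
    return not any(w in BLOCKLIST for w in words)

print(is_clean("have a nice day"))  # True: passes
print(is_clean("you badword"))      # False: caught
print(is_clean("you b@dword"))      # True: obfuscation slips through
```

A trivial misspelling or symbol substitution defeats the exact-match check, which is one plausible reason a “coordinated attack” could route offensive content past whatever safeguards were in place.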
Microsoft shut Tay down, hoping to avoid further embarrassment. A couple of days later, the company issued an apology. The statement said that “a coordinated attack by a subset of people exploited a vulnerability in Tay” that led to the offensive tweets. Microsoft expressed the hope that it would learn from the episode and avoid this kind of “vulnerability” in the future.
Microsoft says it tested Tay thoroughly to avoid just this sort of outcome, though the testing obviously wasn’t as effective as the company had counted on. Ultimately, Tay was simply reflecting back the thoughts that were given to her. The negativity we see every day in Internet posts and comment sections, amplified by the anonymity of the Internet, proved too much for Tay. One has to wonder whether Tay’s corruption is the fault of programming deficiencies or simply of how and why many of us now use social media.
Let us know what you think about the interaction between AI chatbots like Tay and social media in the comments below.