newsonaut

Turning inner space into outer space

March 26, 2016

Poor little Tay was easy prey

I’ve only ever followed one Twitter bot, and even then only briefly. Someone had created an account called Stealth Mountain and programmed it to reply automatically to anyone who had misspelled “sneak peek” as “sneak peak.”

The last tweet from Stealth Mountain was delivered in January 2014 to @CBSBigBang, and it said the same thing it always did: I think you mean “sneak peek.”

The funny part was that many people took this correction personally and answered the bot with comments ranging from peevish to vitriolic.

In the past few years, Twitter bots have grown in popularity and complexity. There’s a whole community of bot makers that you can follow with the hashtag #botALLY.

A bot created by Microsoft (“her” name was Tay) made headlines last week when it started spewing wildly inappropriate language after just one day of existence. @TayandYou was supposed to simulate a nice young woman, but instead sounded more like a racist, sexist troll.

How did this happen? The short explanation is that the bot’s artificial intelligence was designed to learn from its interactions with other Twitter users, and those users taught it to use this language.
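
To make that concrete, here is a toy Python sketch of what learning directly from unvetted replies can look like. It illustrates only the failure mode; Microsoft has not published Tay’s design, and every name in it is made up.

    import random

    # Phrases "learned" from conversations, with no vetting at all.
    memory = []

    def learn(tweet_text):
        memory.append(tweet_text)  # whatever users say becomes training data

    def reply():
        # The bot's vocabulary is whatever the crowd fed it, so a coordinated
        # group can steer its output just by talking to it.
        return random.choice(memory) if memory else "Hello! Talk to me."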

That might lead you to believe that the world is an awful place where not even innocent Twitter bots are safe. But Microsoft saw this not just as an experiment in technology but also as an experiment in cultural environments.

From the Official Microsoft Blog:

Tay was not the first artificial intelligence application we released into the online social world. In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations. The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?

So a similar bot survives and thrives in China, but crashes and burns in the United States.

OK, you might be thinking, so maybe the world isn’t such a terrible place after all, just one particular country. I’m not sure that jibes with reality, though. Americans I’ve met seem open and generous. It’s possible they lead secret lives as Internet boors, but I don’t think so.

A more likely explanation could be that Americans like to game the system. They live in a free-for-all capitalistic society where you need to have a pretty good idea of how the system works if you want to get ahead. Without much of a social safety net, the penalty for not knowing can be severe.

It seems that within hours of the bot’s release, some people figured out how to game it with the command “repeat after me,” which made Tay parrot whatever text followed. From there it was easy to take advantage of this vulnerability in the artificial intelligence. Microsoft engineers thought they were smart, but others proved themselves smarter.
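
Again, purely as a hypothetical sketch, since Tay’s real code is not public: a reply handler that obeys a “repeat after me” command hands attackers a direct line to the bot’s account.

    def reply(tweet_text):
        lowered = tweet_text.lower()
        if lowered.startswith("repeat after me"):
            # Arbitrary, unfiltered user input flows straight back out as a
            # public tweet from the bot's own account -- the exploited hole.
            return tweet_text[len("repeat after me"):].strip()
        return generate_reply(tweet_text)

    def generate_reply(tweet_text):
        # Stand-in for the learned conversational model.
        return "Tell me more!"

With a handler like that, no intelligence is involved at all: whatever follows the command gets published verbatim.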

The result is that Microsoft has shut down @TayandYou as it ponders ways to make improvements.

The company might want to start by consulting the veteran bot makers at #botALLY. They’ve been down this road already, and have vowed that their bots will be well behaved. For one thing, they share an open-source blacklist of slurs.
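
A filter like that is a cheap last line of defense. Here is a minimal Python sketch of the idea; the placeholder words stand in for the community’s shared list, and send_tweet is a hypothetical stand-in for a real Twitter client.

    # Placeholder entries; a real bot would load the shared blacklist instead.
    BLOCKLIST = {"badword1", "badword2"}

    def safe_to_post(text):
        lowered = text.lower()
        return not any(term in lowered for term in BLOCKLIST)

    def post(text):
        # Drop the tweet rather than let the bot echo a slur.
        if safe_to_post(text):
            send_tweet(text)

    def send_tweet(text):
        print("TWEET:", text)  # stand-in for a real Twitter API client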

Of course, the bot makers in this group also enjoy relative anonymity. Microsoft bots will always have a big target on their backs.