Christmas trouble with virtual assistants

Looks like Luca Chittaro is not the only one having arguments with virtual assistants:

Microsoft recently brought into the world an online "Santa bot" — an
interactive chat program for kids. The idea was that you would add
"northpole@live.com" to your kid’s MSN Messenger and your kid could
talk to Santa Claus online. Sounds like fun.

[via Marc Andreessen’s blog]

The problem is that a reader of The Register reported a highly inappropriate response, involving sexual practices, to an innocuous phrase like "eat it". Microsoft promptly reacted to the complaint and changed the responses (you can read the whole story on The Register).

How are these things possible? Some time ago I played around with chatterbots (the textual engines behind virtual assistants, which interpret the sentences written by the user and try to give sensible answers), and I believe the trouble comes from the built-in set of responses. To show off their ability to mimic a human conversation, these libraries eagerly provide lots of witty answers. To see examples of such response sets, have a look at the ALICE pages.
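To make the mechanism concrete, here is a minimal sketch in Python of how such a response library typically works: patterns are matched against the user's input, and each pattern maps to a handful of canned replies. The patterns and answers below are invented for illustration; they are not taken from ALICE or from Microsoft's bot.

```python
import random
import re

# A toy response library in the spirit of AIML categories:
# each entry maps a pattern to a set of canned, "witty" replies.
# Patterns and replies here are made up for illustration.
RESPONSE_LIBRARY = [
    (re.compile(r"\bhello\b|\bhi\b", re.I),
     ["Hello there!", "Hi! What would you like to talk about?"]),
    (re.compile(r"\bwho are you\b", re.I),
     ["I'm just a humble chat program.", "A bot, of course."]),
    (re.compile(r".*"),  # catch-all when nothing else matches
     ["Interesting. Tell me more.", "Why do you say that?"]),
]

def reply(user_input: str) -> str:
    """Return a randomly chosen canned answer for the first matching pattern."""
    for pattern, answers in RESPONSE_LIBRARY:
        if pattern.search(user_input):
            return random.choice(answers)
    return "..."

if __name__ == "__main__":
    print(reply("Hello Santa"))  # e.g. "Hi! What would you like to talk about?"
```

The catch-all entry is where the trouble starts: whatever the library's authors put there gets served up for any sentence the bot does not understand.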

Tactically speaking, chatterbots need to keep conversation threads short: keeping track of the overall meaning of a long exchange is a task requiring a high degree of intelligence, so conversation killers are a good choice for such libraries. Other good choices (if "good" means "realistic") are sentences that shift the focus, moving the discussion to another subject.
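Continuing the sketch above, a fallback routine might pick between these two tactics. Again, the responses and the shift_probability parameter are hypothetical, purely to illustrate the trade-off.

```python
import random

# Two tactics for keeping conversation threads short.
# All responses below are made up for illustration.
CONVERSATION_KILLERS = [
    "You can find that on the product page.",
    "I'm afraid I can't help you with that.",
]
TOPIC_SHIFTS = [
    "By the way, have you seen the new catalogue?",
    "Speaking of which, what do you like to do on weekends?",
]

def fallback_reply(shift_probability: float = 0.3) -> str:
    """When no pattern matches, either close the thread or change the subject."""
    if random.random() < shift_probability:
        # Realistic, but risky: the new subject is out of the bot's control.
        return random.choice(TOPIC_SHIFTS)
    # Safe: politely ends the thread and points the user elsewhere.
    return random.choice(CONVERSATION_KILLERS)
```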

In a commercial implementation of a chatterbot, then, conversation killers are not bad at all. The modification to Ikea's assistant prompted by Luca's observation is an excellent example: Anna gives a quick, useful answer. I'd be perfectly content with the conversation and move on to the product page: task accomplished for the virtual assistant. But shifting the discussion to another subject can lead to disastrous consequences…