No, we still don’t have conversational AI

Yeah, yeah, here I am being annoyed by more or less this same thing again. Everyone needs a hobby. :)

This time it’s a number of things I’ve seen lately that say or imply that we now have AI that can carry on a convincing conversation, that knows what it’s saying, and like that.

First, we have this extremely creepy thing:

It is wrong on so very many levels (“Hot” Robot??). Among the linguistic and AI wrongnesses: virtually everything that “she” says in the video is clearly just text-to-speech running on some pre-written text; we don’t have software smart enough to come up with “I feel like I can be a good partner to humans — an ambassador”, let alone to say it and mean it, for any useful sense of “mean it”.

And then at the end, for whatever reason, the robot is hooked up to the typical “AIML” sort of silliness that I have ranted about here before (in archives that are currently offline; have to get that fixed sometime), and also over on the Secret Secondlife Weblog (see the “MyCyberTwin” section of this post, say), and hilarity ensues.

The reporter says “do you want to destroy humans?”, and the “AI” software notices that this matches its template “do you want to X”, and it plays the hardcoded response (perhaps chosen at random from a small set) “okay, I will X”.

And so now we have an “AI” saying “I will destroy humans”, and lots and lots of hits.

But in fact this robot, and the software running it, doesn’t know what humans are, doesn’t know what it means to destroy something, and for that matter doesn’t know what “I will” means. Let alone “I”. Or anything else. All it’s doing is recognizing words in speech, matching the words against a look-up table, and playing a canned response. It is just as determined to destroy humans as is a coffee tin with two googly eyes and a “destroy humans” sign pasted onto it.
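
To make that concrete, here is a minimal sketch in Python of the whole “algorithm” (my caricature, not the robot’s actual code; the patterns and canned responses are invented for illustration):

    import random
    import re

    # The entire "AI": a lookup table of patterns and canned responses.
    # "(.+?)" captures arbitrary text, which is spliced back into the
    # response verbatim -- no understanding anywhere in the loop.
    RULES = [
        (r"do you want to (.+?)\??$",
         ["okay, I will {0}", "maybe later I will {0}"]),
        (r"what is your name\??$",
         ["my name is Robot."]),
    ]

    def respond(utterance):
        text = utterance.strip().lower()
        for pattern, responses in RULES:
            match = re.match(pattern, text)
            if match:
                # Play a hardcoded response, chosen at random from a
                # small set, echoing the captured words back in.
                return random.choice(responses).format(*match.groups())
        return "tell me more!"

    print(respond("Do you want to destroy humans?"))
    # -> "okay, I will destroy humans" (or the other canned variant)

The program “promising” to destroy humans here is doing exactly as much semantic work as the coffee tin.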

Which brings us to “Everything you know about artificial intelligence is wrong”, which is structured as a series of “It’s a myth that X, because it might not be true that X” paragraphs (suggesting that the author hasn’t thought very hard about the meaning of “myth”), but which for our purposes is mostly interesting because it bit me with this sentence:

We already have computers that match or exceed human capacities in games like chess and Go, stock market trading, and conversations.

Chess and Go, yes (yay, employer!); stock market trading and especially conversations, not so much.

Following the “conversations” link, we come to the story of “Eugene Goostman”, a program which is said to have “passed the Turing Test” last year by convincing more than 30% of judges that it was a human.

30% seems like a pretty low bar, but after looking at an actual transcript of the program’s conversation, I’m amazed it scored above zero. The judges must have been very incompetent, very stoned, or (most likely) very motivated to have a program pass the test (because that would be cool, eh?).

Given this transcript, it’s painfully obvious that “Goostman” is just running that same stupid AIML / MyCyberTwin pattern-matching algorithm that was very cool in 1966 when Weizenbaum wrote ELIZA, but which is only painful now; anyone taking any output from this kind of program seriously, or announcing it as any sort of breakthrough in anything, just has no clue what they are talking about (or is cynically lying for clicks).
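
For the record, the ELIZA trick was only slightly fancier: match a keyword, “reflect” the pronouns, and wrap the result in a stock phrase. A toy version (my sketch, not Weizenbaum’s actual program) fits in a dozen lines:

    import re

    # ELIZA's one bit of apparent cleverness: pronoun "reflection".
    # "I am X" comes back as "Why do you say you are X?", with "my"
    # flipped to "your" along the way.
    REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

    def reflect(text):
        return " ".join(REFLECTIONS.get(word, word) for word in text.split())

    def eliza(utterance):
        text = utterance.lower().rstrip(".!?")
        match = re.match(r"i am (.+)", text)
        if match:
            return "Why do you say you are " + reflect(match.group(1)) + "?"
        match = re.search(r"\bmy (.+)", text)
        if match:
            return "Tell me more about your " + reflect(match.group(1)) + "."
        return "Please go on."

    print(eliza("I am worried about my slug."))
    # -> "Why do you say you are worried about your slug?"

Fifty years later, a bigger lookup table is still just a lookup table.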

Which makes it almost refreshing (not to mention the schadenfreude) to be able to mention a spectacular “conversational AI” failure that was apparently not driven by the usual AIML lookup-table.

Specifically, Microsoft’s recent Tay debacle, in which a highly-imitative conversation bot was put up on Twitter and began imitating the worst of human online behavior. In a way, Twitter was a brilliant choice of task: expectations for coherence on Twitter are low to start with, and interactions are short and often formulaic. But given that Tay appears (appeared?) to operate mostly by straight-up mimicry, the team behind it must either have had very limited knowledge of what Twitter is actually like, or have expected exactly this outcome and the resulting publicity (I’m such a cynic!).
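
As far as outside observers could tell, the core loop was something like this caricature (mine, certainly not Microsoft’s actual architecture): remember what people say to you, and say it back later.

    import random

    class MimicBot:
        """A caricature of a purely imitative chatbot: no model of
        meaning, just a growing bag of phrases humans have fed it."""

        def __init__(self):
            self.phrases = ["hellooo world!"]  # a friendly seed phrase

        def hear(self, utterance):
            # "Learning" is just memorizing, with no filter on content.
            self.phrases.append(utterance)

        def speak(self):
            # Output is whatever it has heard, chosen at random -- so a
            # coordinated group of trolls decides what it "says" next.
            return random.choice(self.phrases)

    bot = MimicBot()
    for tweet in ["cats are great", "<awful thing 1>", "<awful thing 2>"]:
        bot.hear(tweet)
    print(bot.speak())  # half the time, it parrots an <awful thing>

With that loop, the bot’s “personality” is just a weighted sample of whatever the loudest users typed at it; on Twitter, we know how that sampling goes.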

But the most amusing part for me is that Microsoft now has to firmly emphasize that its software has no idea what it’s talking about, and doesn’t mean what it says, and that when it seems to suggest genocide or deny the reality of the Holocaust, it’s just parroting back what others have said to it, with no intelligence or understanding.

Which is basically my point. :)

For those who have tuned in only recently, I’m a Strong AI guy; I think there’s no reason we can’t eventually make a computer that can carry on a real conversation, and that understands and means what it says. So when John Searle comes up with circular arguments that computers can’t think because they don’t have the causal powers of the brain, or says that he knows his dog is conscious because it has deep brown eyes and adorable floppy ears, I am similarly annoyed.

I think the only thing standing in the way of our making intelligent software is that we have no idea how to do that. And that’s something we can fix! But in the meantime, can we stop pretending that trivial pattern-matching “conversation” is in any way interesting?

P.S. When I tweeted (“tweeted”) to Tay, she ignored the content of my “tweet”, and replied lyrically “er mer gerd erm der berst ert commenting on pics. SEND ONE TO ME!”. When I sent her a lovely picture of a brightly-colored banana slug, as one does, she responded “THIS IS NOT MERELY A PHOTOGRAPH THIS IS AN ARTISTIC MASTERPIECE”.

I was not impressed.

5 Responses to “No, we still don’t have conversational AI”

  1. We are using AIML-based chatbots for research into people with dementia and ASD. It is by far the best approach for a machine to converse with people in a way that approximates their effective use of language.

    Case-based reasoning is not that different from how our brains retrieve words, and it might in fact be one piece of a future strong AI.


    • I think that’s admirable, and I would love to read a peer-reviewed paper on its effectiveness; do you have a citation?

      I have to disagree with your last claim, though; “case-based reasoning” in the form of these trivial lookup tables is, I believe, completely different from the way our brains do just about anything. Evidence to the contrary is of course most welcome!

      Thanks much for your reply.


  2. Perhaps “computers that match or exceed human capacities in games like chess and Go, stock market trading, and conversations” actually means, “computers that match or exceed *my* capacities …”. It could be a low bar.


    • Haha! That’s quite possible for the first three, but the writer did write this piece, and that’s something software still can’t do on its own…

