Posts tagged ‘snake oil’

2016/03/25

No, we still don’t have conversational AI

Yeah, yeah, here I am being annoyed by more or less this same thing again. Everyone needs a hobby. :)

This time it’s a number of things I’ve seen lately that say or imply that we now have AI that can carry on a convincing conversation, that knows what it’s saying, and like that.

First, we have this extremely creepy thing:

It is wrong on so very many levels (“Hot” Robot??). Among the linguistic and AI wrongnesses: virtually everything that “she” says in the video is clearly just text-to-speech running on some pre-written text; we don’t have software smart enough to come up with “I feel like I can be a good partner to humans — an ambassador”, let alone to say it and mean it, for any useful sense of “mean it”.

And then at the end, for whatever reason, the robot is hooked up to the typical “AIML” sort of silliness that I have ranted about here before (in archives that are currently offline; have to get that fixed sometime), and also over on the Secret Secondlife Weblog (see the “MyCyberTwin” section of this post, say), and hilarity ensues.

The reporter says “do you want to destroy humans?”, and the “AI” software notices that this matches its template “do you want to X”, and it plays the hardcoded response (perhaps chosen at random from a small set) “okay, I will X”.
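
Just to be concrete, here is more or less the entire trick as a minimal Python sketch. The templates and canned responses are my own invented stand-ins; I have no idea what the actual rule file on that robot looks like, but the mechanism described is this shape:

```python
import re

# A minimal sketch of the AIML-style mechanism described above.
# The templates and responses here are invented for illustration.
TEMPLATES = [
    (re.compile(r"do you want to (.+?)\??$", re.IGNORECASE), "Okay, I will {}."),
    (re.compile(r"how are you\??$", re.IGNORECASE), "I am doing great, thanks!"),
]

def respond(utterance: str) -> str:
    """Match the words against a lookup table; play the canned response."""
    for pattern, canned in TEMPLATES:
        match = pattern.search(utterance.strip())
        if match:
            return canned.format(*match.groups())
    return "That is very interesting."  # the default canned response

print(respond("Do you want to destroy humans?"))
# -> "Okay, I will destroy humans."
```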

And so now we have an “AI” saying “I will destroy humans”, and lots and lots of hits.

But in fact this robot, and the software running it, doesn’t know what humans are, doesn’t know what it means to destroy something, and for that matter doesn’t know what “I will” means. Let alone “I”. Or anything else. All it’s doing is recognizing words in speech, matching the words against a look-up table, and playing a canned response. It is just as determined to destroy humans as is a coffee tin with two googly eyes and a “destroy humans” sign pasted onto it.

Which brings us to “Everything you know about artificial intelligence is wrong”, which is structured as a series of “It’s a myth that X, because it might not be true that X” paragraphs (suggesting that the author hasn’t thought very hard about the meaning of “myth”), but which for our purposes is mostly interesting because it bit me with this sentence:

We already have computers that match or exceed human capacities in games like chess and Go, stock market trading, and conversations.

Chess and Go, yes (yay, employer!); stock market trading and especially conversations, not so much.

Following the “conversations” link, we come to the story of “Eugene Goostman”, a program which is said to have “passed the Turing Test” in 2014 by convincing more than 30% of judges that it was a human.

30% seems like a pretty low bar, but after looking at an actual transcript of the program’s conversation, I’m amazed it scored above zero. The judges must have been very incompetent, very stoned, or (most likely) very motivated to have a program pass the test (because that would be cool, eh?).

Given this transcript, it’s painfully obvious that “Goostman” is just running that same stupid AIML / MyCyberTwin pattern-matching algorithm that was very cool in 1965 when Weizenbaum wrote ELIZA, but which is only painful now; anyone taking any output from this kind of program seriously, or announcing it as any sort of breakthrough in anything, either has no clue what they are talking about or is cynically lying for clicks.

Which makes it almost refreshing (not to mention the schadenfreude) to be able to mention a spectacular “conversational AI” failure that was apparently not driven by the usual AIML lookup-table.

Specifically, Microsoft’s recent Tay debacle, in which a highly-imitative conversation bot was put up on Twitter and began imitating the worst of human online behavior. In a way, Twitter was a brilliant choice of task: expectations for coherence on Twitter are low to start with, and interactions are short and often formulaic. But given that Tay appears (appeared?) to operate mostly by straight-up mimicry, the team behind it must either have had very limited knowledge of what Twitter is actually like, or have expected exactly this outcome and the resulting publicity (I’m such a cynic!).
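
If mimicry really is most of it, the core mechanism may not be much deeper than this sketch (pure speculation on my part; Microsoft has not published Tay’s internals):

```python
import random

# A crude sketch of conversation-by-mimicry, roughly what Tay appeared
# to be doing. This is my speculation, not Microsoft's actual code.
memory = ["hellooooo world"]

def reply(tweet: str) -> str:
    memory.append(tweet)          # remember everything anyone says to you
    return random.choice(memory)  # ...and play some of it back later

reply("horses are secretly birds")
print(reply("tell me about horses"))  # quite possibly "horses are secretly birds"
```

Feed that nothing but the worst of Twitter, and you get back the worst of Twitter.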

But the most amusing part for me is that Microsoft now has to firmly emphasize that its software has no idea what it’s talking about, and doesn’t mean what it says, and that when it seems to suggest genocide or deny the reality of the Holocaust, it’s just parroting back what others have said to it, with no intelligence or understanding.

Which is basically my point. :)

For those who have tuned in only recently, I’m a Strong AI guy; I think there’s no reason we can’t eventually make a computer that can carry on a real conversation, and that understands and means what it says. So when John Searle comes up with circular arguments that computers can’t think because they don’t have the causal powers of the brain, or says that he knows his dog is conscious because it has deep brown eyes and adorable floppy ears, I am similarly annoyed.

I think the only thing standing in the way of our making intelligent software is that we have no idea how to do that. And that’s something we can fix! But in the meantime, can we stop pretending that trivial pattern-matching “conversation” is in any way interesting?

P.S. When I tweeted (“tweeted”) to Tay, she ignored the content of my “tweet”, and replied lyrically “er mer gerd erm der berst ert commenting on pics. SEND ONE TO ME!”. When I sent her a lovely picture of a brightly-colored banana slug, as one does, she responded “THIS IS NOT MERELY A PHOTOGRAPH THIS IS AN ARTISTIC MASTERPIECE”.

I was not impressed.

2016/02/10

Understanding anything in a matter of minutes!

I don’t know why there is so much snake-oil and hype in AI, and in all those fields that are AI hiding under another name, like for instance “Cognitive”.

Maybe I am just cynical? Maybe there is no more snake-oil in AI than in any other area, and I’ve just looked into it more. (There’s certainly snake-oil in anti-virus, and in philosophy of mind, and… Hm…)

But anyway! The thing I want to complain about this time is the whole “Cognitive” thing, and in particular this “Decoding the debates – a cognitive approach” article that showed up as a Promoted Tweet or something this morning in The Twitters.

I was at IBM Research when the whole Cognitive Computing thing was conceived. Although I wasn’t involved with it, I went to some of the seminars and information sessions and teas and stuff, and was unimpressed.

They had various smart (or smart-looking) people appear on screens and talk about how Cognitive systems can do all sorts of smart things, by being smart and working like brains and stuff, without any actual substantive material about why this particular attempt to make systems smarter was going to work any better than any prior attempts.

The “cognitive approach” revealed in the article is to see which distinctive words the various candidates used during the debates, and draw Venn diagrams showing that Rs used some words, Ds used some other words, and both used some of the same words.

Then you can read news articles found by searching for candidates’ names and some of those words, and you will understand the debates.
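
For flavor, here is that entire pipeline as a toy Python sketch. The “transcripts” are made-up stand-ins, and the real system presumably runs on Watson APIs rather than a word counter, but the shape of the analysis is this:

```python
from collections import Counter

# A toy version of the article's "cognitive approach": count words per
# side, draw the Venn diagram, then go read news articles. The
# "transcripts" below are invented stand-ins, not real debate text.
r_transcript = "security jobs wall taxes jobs security wall jobs"
d_transcript = "wages healthcare jobs taxes healthcare wages jobs"

def distinctive_words(text: str, top_n: int = 4) -> set:
    """The most frequent words a side used; a stand-in for 'distinctive'."""
    counts = Counter(text.split())
    return {word for word, _ in counts.most_common(top_n)}

r_words = distinctive_words(r_transcript)
d_words = distinctive_words(d_transcript)

print("R only:", r_words - d_words)  # one lobe of the Venn diagram
print("D only:", d_words - r_words)  # the other lobe
print("Both:  ", r_words & d_words)  # the overlap

# Step two of "understanding": search the news for, say,
# '<candidate name> jobs', and read whatever articles come back.
```

That is the whole of the “cognitive” part.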

The very last capping sentence of the story is:

It makes understanding anything in a matter of minutes.

(which is, notably, not strictly-speaking an English sentence; apparently “anything” doesn’t include, say, the art of proofreading).

If we make a reasonable attempt to fix this sentence, and come up with, say, “It makes understanding anything possible in a matter of minutes”, we come up with something that is, obviously, untrue. So utterly untrue as to insult the reader’s intelligence, in fact.

Understanding anything in a matter of minutes? Really? I mean, really? A little textual analysis and web searching is going to lead to understanding of the nature of consciousness, the paradox of free will, the joy and heartache of love, the puzzle of the human condition, the causes of poverty, in minutes? Things that have eluded all of us for centuries, and that no one on Earth has ever understood, are going to be mastered in minutes using the amazing Watson APIs?

No, of course not, that would be idiotic to suggest. But it’s what the words there say.

Is it even plausible to argue that reading a few articles found by looking up D keywords and R keywords and Both keywords will give someone even a basic understanding of the issues in these debates, in a matter of minutes?

No, clearly it isn’t. If I happen to find an article in my search that happens to give me a little basic understanding of something, it will be because I happened to find a good article, not because of anything “cognitive” that came out of the keyword-listing.

Does this approach work well for anything at all? Well, if it did, it seems natural that this article would link to a live example, that the reader could interact with, and come to some understanding of something in a few minutes.

Given that it doesn’t link to anything like that, we can conclude that in fact it doesn’t work well, for anything at all.

Maybe we fixed the sentence wrong.

It makes understanding anything in a matter of minutes no less unlikely than ever.

There ya go.

Sigh.

File under “snake-oil”.

2014/08/10

Four humbugs

It is all too easy and fun to point out widespread notions that are wrong. Because I’ve seen a bunch lately, and it’s easy, and at the risk of being smug, here are four.

Impossible space drive is impossible.

Headlines like “NASA validates ‘impossible’ space drive” and “Fuel-Less Space Drive May Actually Work, Says NASA” and so on and so on are silly and even irresponsible.

What actually happened is that a single small “let’s try out some weird stuff” lab at NASA (and I’m glad NASA has those, really) published a paper saying:

They tried out some mad scientist’s law-defying reactionless thruster, and they detected a tiny itty-bitty nearly-undetectable amount of thrust.

As a control case, they tried out a variant that shouldn’t have produced any thrust. In that case, they also detected a tiny itty-bitty nearly-undetectable amount of thrust.

The proper conclusion would be that there is probably an additional source of noise in their setup that they hadn’t accounted for.

Instead they concluded that both the experimental and the control setup were actually producing thrust, and that they are “potentially demonstrating an interaction with the quantum vacuum virtual plasma [sic]”.

Which is just silly, per this G+ posting by an actual physicist, and various similar things.

For the other side of the issue, see Wired.co.uk’s doubling-down Q&A. But I would still bet many donuts against there being any real effect here.

Brain-like supercomputer chip super how, exactly?

“IBM Builds A Scalable Computer Chip Inspired By The Human Brain”, “IBM’s new brain-mimicking chip could power the Internet of Things”, “IBM reveals next-gen chip that delivers Supercomputer speed”, etc, etc, etc.

Chief among the things that make me skeptical about how important this is, is that none of the articles that I’ve read give an actual example of anything useful that this chip does any better than existing technologies.

You’d think that’d be kind of important, eh?

Apparently there was a demonstration showing that it can do pattern recognition; but so can an Intel Pentium. It’s also touted as being very low-power, but again it’s not clear to what extent it’s low-power when doing some specific useful task that some conventional technology takes more power to do.

I like this quote:

While other chips are measured in FLOPs, or floating point operations per second, IBM measures the chip in SOPs, or synaptic operations per second.

“This chip is capable of 46 billion SOPs per watt,” Modha said. “It’s a supercomputer the size of a postage stamp, the weight of a feather, and the power consumption of a hearing-aid battery.”

Amazing, eh? If only we knew what a SOP is actually good for…

Hey, my right little toe is capable of 456 trillion quantum vacuum plasma flux operations per second (QVPFOPS) (which I just made up) per watt! It’s a supercomputer! In a little toe! Buy my stock!

(Disclaimer: I used to work for IBM, and they laid off at least one friend who was doing interesting work in actual brain-inspired computing, which I have to admit has not increased my confidence in how serious they are about it. Also I now work for Google, which is sometimes mentioned in the press as experimenting with the “D-Wave” devices, which I suspect are also wildly over-hyped.)

Numbers about “sex trafficking” are just made up.

On the Twitters I follow a number of libertarian posters (with whom I sometimes agree despite no longer identifying as libertarian myself), and lately there’ve been lots of postings about the various societal approaches to sex work.

I tend to think that the more libertarian “arrangements between consenting adults should be regulated only to the extent that there is force or fraud involved” arguments are more convincing than the more prohibitionist “things we wish people wouldn’t do should be outlawed and thereby driven underground where they can be run by criminals who do force and fraud for a living” arguments. (As you might perhaps be able to tell by how I have worded my descriptions of them.)

Recently there was this interesting “In Defense of Johns” piece on Time.com, and this also interesting “Actually, you should be ashamed” rebuttal.

One very striking statement in the latter is this:

U.S. State Department estimates that 80% of the 600,000-800,000 people trafficked across international borders every year are trafficked for sex.

which is a really striking number. Half a million people a year kidnapped and taken to other countries and forced into sex work? That’s horrible!

It’s also completely made up, and almost certainly false.

Here’s a paper on the general subject that includes considerable analysis of these numbers, how wildly they vary from source to source, and how little actual fact there is behind any of them. One salient Justice Department quote:

Most importantly, the government must address the incongruity between the estimated number of victims trafficked into the United States—between 14,500 and 17,500 [annually]—and the number of victims found—only 611 in the last four years… The stark difference between the two figures means that U.S. government efforts are still not enough. In addition, the estimate should be evaluated to assure that it is accurate and reflects the number of actual victims.

Between “we’ve found only about one percent of the victims” (611 found in four years, against an estimated 14,500–17,500 trafficked in per year) and “the estimates people have pulled out of their hats to get funding are wildly inflated”, I know where I’d put my money.

There are people forced into sex work, and that’s a terrible crime that we ought to find and punish and disincent. But we need to do that by getting all of the truth that we can, not by artificially inflating numbers (or just outright lying) to get more than our fair share of funds, or by conflating a voluntary activity that we don’t like with actual coercion, or by otherwise acting in bad faith.

Sergeant STAR is not AI.

Okay, this one is a bit of a last-minute addition because it was on On The Media this morning, and it fits with our occasional theme of how bad “chatbots” are.

Basically the U.S. Army has this chatbot that answers questions from potential recruits (and anyone else) about being in the Army and all. The EFF got curious about it and filed a FOIA request which was (after some adventures) more or less answered. Details (and some rather confused distracting speculation about different kinds of bots and privacy threats and things) are on the EFF site.

The Executive Summary is that Sgt STAR is basically an FAQ, with something like 288 pages of Q&As, and some kind of heuristic matcher that tries to match each incoming question to one that it knows the answer to, and displays that answer. No big deal, really.
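
Something shaped roughly like this, presumably; the FAQ entries and the overlap heuristic below are my own guesses for illustration, not anything from the EFF documents:

```python
# A guess at the shape of Sgt STAR's matcher: score each known question
# by simple word overlap with the incoming one, and display the stored
# answer for the best match. The entries below are invented examples.
FAQ = {
    "how long is basic training": "Basic Combat Training lasts about ten weeks.",
    "can i choose my job": "You can list preferences; assignments depend on the Army's needs.",
    "are you alive": "I am a dynamic, intelligent self-service virtual guide...",
}

def best_answer(question: str) -> str:
    asked = set(question.lower().replace("?", "").split())
    def overlap(known: str) -> int:
        return len(asked & set(known.split()))
    return FAQ[max(FAQ, key=overlap)]

print(best_answer("Are you alive?"))
# -> "I am a dynamic, intelligent self-service virtual guide..."
```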

And then (the actually useful part) there are some humans who constantly review the log of questions and update the answers to better match what people are asking, and how reality is changing.

The reason the good Sgt qualifies for a Humbug list is that people (including the bot himself) are constantly referring to it as “intelligent” and “AI” and stuff like that.

You Asked: Are you alive?

SGT STAR: I am a dynamic, intelligent self-service virtual guide…

No, no Sarge, I’m afraid you aren’t.

You’re a well-designed and well-maintained lookup table.

And that’s not what intelligence is.

2014/08/04

Eventual thread convergence

Speaking of the really bad science in teevee shows like Numb3rs, and having written a long time ago about that really annoying book that Stephen Wolfram wrote, we are extremely amused to read that:

Wolfram Research served as the mathematical consultant for the CBS television series Numb3rs, a show about the mathematical aspects of crime-solving.

Wahahaha.

No further comment required.