I don’t know why there is so much snake-oil and hype in AI, and in all those fields that are AI hiding under another name, like, for instance, “Cognitive”.
Maybe I am just cynical? Maybe there is no more snake-oil in AI than in any other area, and I’ve just looked into it more. (There’s certainly snake-oil in anti-virus, and in philosophy of mind, and… Hm…)
But anyway! The thing I want to complain about this time is the whole “Cognitive” thing, and in particular this “Decoding the debates – a cognitive approach” article that showed up as a Promoted Tweet or something this morning in The Twitters.
I was at IBM Research when the whole Cognitive Computing thing was conceived. Although I wasn’t involved with it, I went to some of the seminars and information sessions and teas and stuff, and was unimpressed.
They had various smart (or smart-looking) people appear on screens and talk about how Cognitive systems can do all sorts of smart things, by being smart and working like brains and stuff, without any actual substantive material about why this particular attempt to make systems smarter was going to work any better than any prior attempts.
The “cognitive approach” revealed in the article is to see which distinctive words the various candidates used during the debates, and to draw Venn diagrams showing that Rs used some words, Ds used some other words, and both used some of the same words.
Then you can read news articles found by searching for candidates’ names and some of those words, and you will understand the debates.
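To be fair about what’s being claimed, the whole “cognitive” method as described amounts to something like the following sketch (illustrative Python only; the snippets standing in for transcripts are made up, the stopword list is ad hoc, and this is my reconstruction of the described approach, not anything from the actual Watson APIs):

```python
from collections import Counter
import re

# A tiny ad-hoc stopword list; a real attempt would use a proper one.
STOPWORDS = {"the", "a", "and", "to", "of", "we", "in", "is", "that", "our"}

def distinctive_words(text, top_n=5):
    """Return the top_n most frequent non-stopword tokens in text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {w for w, _ in counts.most_common(top_n)}

# Hypothetical snippets standing in for debate transcripts.
d_text = "jobs healthcare climate jobs education climate healthcare"
r_text = "jobs taxes border security taxes jobs immigration"

d_words = distinctive_words(d_text)
r_words = distinctive_words(r_text)

# The three regions of the Venn diagram.
print("D only:", d_words - r_words)
print("R only:", r_words - d_words)
print("Both:  ", d_words & r_words)
```

That’s it: word counts and set arithmetic. Nothing about that is “cognitive” in any interesting sense.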
The very last capping sentence of the story is:
It makes understanding anything in a matter of minutes.
(which is, notably, not strictly-speaking an English sentence; apparently “anything” doesn’t include, say, the art of proofreading).
If we make a reasonable attempt to fix this sentence, and come up with, say, “It makes understanding anything possible in a matter of minutes”, we come up with something that is, obviously, untrue. So utterly untrue as to insult the reader’s intelligence, in fact.
Understanding anything in a matter of minutes? Really? I mean, really? A little textual analysis and web searching is going to lead to understanding of the nature of consciousness, the paradox of free will, the joy and heartache of love, the puzzle of the human condition, the causes of poverty, in minutes? Things that have eluded all of us for centuries, and that no one on Earth has ever understood, are going to be mastered in minutes using the amazing Watson APIs?
No, of course not, that would be idiotic to suggest. But it’s what the words there say.
Is it even plausible to argue that reading a few articles found by looking up D keywords and R keywords and Both keywords will give someone a basic understanding of the issues in these debates, in a matter of minutes?
No, clearly it isn’t. If I happen to find an article in my search that happens to give me a little basic understanding of something, it will be because I happened to find a good article, not because of anything “cognitive” that came out of the keyword-listing.
Does this approach work well for anything at all? Well, if it did, it seems natural that this article would link to a live example, that the reader could interact with, and come to some understanding of something in a few minutes.
Given that it doesn’t link to anything like that, we can conclude that in fact it doesn’t work well, for anything at all.
Maybe we fixed the sentence wrong.
It makes understanding anything in a matter of minutes no less unlikely than ever.
There ya go.
File under “snake-oil”.