
2023/03/10

Chomsky declares: LLMs icky!

Friend Steve wrote us today about this New York Times opinion piece, “Noam Chomsky: The False Promise of ChatGPT” (this link may be free for everyone for some time or something). Despite the title, it’s by Chomsky, Roberts, and Watumull.

Steve commented inter alia on the authors’ apparent claim that ChatGPT can say that the apple you’re holding will fall if you open your hand, but unlike humans it can’t explain the fact. The trouble with the argument is that, as anyone who’s actually used ChatGPT can tell you, it will happily explain the fact, go into the history of the notion of gravity, talk about other things people have thought about it over time, and explain various situations in which the apple wouldn’t fall, given the slightest provocation.

My reply, lightly edited:

I am pretty unimpressed with the article as a piece of science or philosophy; fine as a nice polemic by a greybeard I suppose. :)

I’m amused at how LLMs are “lumbering” and “gorged”, while human minds are “elegant” and even “efficient”. I doubt there is any useful sense in which these adjectives are anything more than bigger words for “icky” and “nice” in this context.

Chomsky brings in the innateness of language, because of course he does, but I’m not at all clear how it’s relevant. Even if humans do have innate language scaffolding, and LLMs don’t have the same kind, it’s far too early to say that they don’t have any; and even if they didn’t, so what? Does the ability to learn a wider variety of languages than humans can mean that LLMs don’t really understand, or can’t really think, or are harmful or dangerous? None of that makes sense to me; it seems just an even longer way of saying that they’re icky.

He (well, they, there being multiple non-Chomsky authors) claims that LLMs don’t have the ability to say “what is not the case and what could and could not be the case.” And I can’t imagine what they think they mean by that. As with the flaw you point out in the apple example, it’s simply wrong, and suggests that they haven’t really used an LLM much. ChatGPT (let alone a less heavily defanged system) will expound at length about what is not the case and what could and could not be the case, given any halfway decent prompt to do so. They may intend something deeper here than they actually say, but I don’t know what it could be (beyond that they can’t do it non-ickily).

“Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round.” Um, what? There are certainly humans who believe each of these things. They can’t just be saying that humans can’t conjecture that the earth is flat “rationally,” because so what? That’s exactly as true of an LLM. If they mean that the same LLM can make one of those claims one minute and the other the next, whereas humans can’t hold two contradictory beliefs at the same time, I’d like to introduce them to some humans. :)

Similarly for whatever it is they are trying to say about moral reasoning. The suggestion seems to be that, simultaneously, ChatGPT is icky because it cannot stay within moral boundaries, and also icky because it stays stubbornly within anodyne moral boundaries. As pretty much throughout the piece, stuff that humans do all the time is cited as a reason ChatGPT isn’t as good as humans.

Tay (Microsoft’s short-lived Twitter chatbot) became toxic by listening to people, therefore it’s not like people? It had to be heavily censored to keep it from talking trash, therefore it’s not like people? Um?

It might be interesting to try to tease a set of actual significant truth-claims out of this article, and see which ones are arguably true. But I’m not sure that’s the point really.

As far as I can tell, this piece is just a longer and nicely phrased version of “Boo, LLMs! Icky!”

But maybe that’s just me. :)