So I’m still listening to that Learning Company Course on Consciousness. It’s interesting, and it keeps my mind awake, although so far there’s nothing that’ll be new to anyone who has an undergraduate degree in the area (raises hand), or who’s just done a lot of reading.
In the lecture on Consciousness and Physics, I was somewhat disappointed that the professor repeated, with a straight face and without pointing out the difficulties, Roger Penrose’s incompleteness-theorem-based argument that thinking things can’t be (just) carrying out algorithms (and therefore that computers, say, can’t think). I think that argument is pretty much entirely wrong, and not even in very subtle ways, so I will rail against it here (I originally typed that I would “inveigle” against it here, and was proud of myself for spelling it right, even though it’s entirely the wrong word).
Gödel’s Incompleteness Theorem says (quite surprisingly and counterintuitively, which is why it’s famous) that any formal system (that is, any set of symbols, plus rules for manipulating them to derive some from others, where the axioms and rules can be effectively listed; that’s part of what “formal” means) that is at least powerful enough to represent arithmetic (like, say, the digits and plus and equal signs and stuff), and that is consistent (that is, there’s no statement S such that you can prove both S and its negation in the system), is incomplete, where incomplete means that there is at least one statement that is true in the system but that cannot be proven in the system. If you want to prove that true statement, you have to add it as an axiom. (And the resulting system is then unable to prove some other true statement, so it’s still incomplete.)
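For concreteness, here’s the usual modern statement of the theorem, in my notation rather than the course’s:

```latex
% First Incompleteness Theorem, standard form (notation mine).
% If T is an effectively axiomatized, consistent formal system that
% interprets basic arithmetic, then there is a sentence G_T with:
\[
  \mathbb{N} \models G_T \quad\text{but}\quad T \nvdash G_T .
\]
```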
(This immediately seems counterintuitive, because after all what could “true in a formal system” mean, apart from “provable in that system”? It turns out those really are different, and realizing that was one of the more interesting things about learning about the Incompleteness Theorem in the first place.)
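The gap comes from the textbook construction, which is worth sketching (this is the standard move, not anything specific to the course): Gödel arithmetizes the system’s own syntax and builds a sentence that talks about its own provability.

```latex
% The Godel sentence of T, via arithmetization of T's syntax:
\[
  G_T \;\longleftrightarrow\; \text{``} G_T \text{ is not provable in } T \text{''}
\]
% If T is consistent, it cannot prove G_T; but "G_T is not provable"
% is exactly what G_T asserts, so G_T is true, and yet unprovable in T.
```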
Now Roger Penrose, who is really good at mathematics and physics and Minkowski spaces and geometric tilings and all, but as far as I can tell not all that good at philosophy, famously argues that the Incompleteness Theorem shows that we humans cannot be merely executing algorithms when we think: things that execute algorithms can be modeled as formal systems, and (the argument goes) we humans can do arithmetic, and we are consistent, and yet there is no statement that is true but that we humans can’t prove.
Or, as Professor Robinson puts it in the Course Guidebook here, “[w]e are able to reflect on our own problem-solving maneuvers without importing into the system some set of axioms, otherwise unknown to us, in order for us to make sense of what we are thinking.”
Now this is awfully sloppy. The Incompleteness Theorem isn’t some broad fuzzy statement that a formal system can’t “reflect upon” its own “problem-solving maneuvers” without adding extra axioms; it’s that it can’t prove every single true statement without doing so.
Kind of an important difference.
To pummel this completely into the ground: for this argument to hold any water, three things would have to be true (the logical shape is sketched just after the list):
Human thought is at least powerful enough to do arithmetic.
Human thought is consistent.
Human thought is able to prove every true statement implied by its beliefs.
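Schematically, the argument is a modus tollens on the theorem (my reconstruction of its shape, not a quote from Penrose or the course):

```latex
% H = the formal system that human thought would be, if it were one.
% Premises 1 and 2, fed through Godel, contradict premise 3:
\[
  \bigl(1\colon H \text{ does arithmetic}\bigr) \wedge
  \bigl(2\colon H \text{ is consistent}\bigr)
  \;\Rightarrow\;
  \neg\bigl(3\colon H \text{ proves every true statement}\bigr)
\]
% So if 1, 2, and 3 all held of us, we couldn't be any such H.
```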
Not only are these not all true, I think it’s pretty clear that none of them are true. In detail:
Human thought is not powerful enough to do arithmetic. Now how can I say that? We do arithmetic all the time! But for the purposes of the Incompleteness Theorem, we can’t just be able to do some arithmetic; we have to be able to do it all. I can add pretty big numbers together, and I can (with some outside assistance) prove some moderately complex theorems. But that’s a (literally) vanishingly small fraction of what the Incompleteness Theorem requires. I can’t add the vast majority of trillion-digit numbers, and I would bet money that Penrose can’t either. The usual proof of the Incompleteness Theorem relies on utterly enormous proofs, proofs which, if written down, would probably require a book larger than the Earth. I can’t generate, or verify, or even begin to understand, a proof that big (and neither can Penrose, or any other human).
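Just to put a number on “can’t”: a quick back-of-the-envelope calculation, where the one-digit-per-second rate is an assumption of mine for illustration, not a figure from the course.

```python
# How long would it take a human to add two trillion-digit numbers?
# The one-digit-per-second rate is an assumed (and generous) figure.
digits = 10**12                        # a trillion digits
seconds_per_year = 60 * 60 * 24 * 365
years = digits / seconds_per_year
print(f"about {years:,.0f} years")     # ~31,710 years for a single pass
```

And that’s one addition, out of the infinitely many the Theorem quantifies over.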
One response to all this is that while humans can’t do all arithmetic, we can do some bounded subset of it; some arithmetic that doesn’t extend to numbers bigger than N, for some finite N, for instance. And that’s possible, but as far as I know the Incompleteness Theorem has never been proven for bounded arithmetics; that is, it tells us nothing whatever about systems that can do only bounded arithmetic. (Indeed, the full theory of a finite structure is decidable, so Gödel-style incompleteness can’t even get started there.) So if that’s what we are, the theorem tells us nothing about ourselves.
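To make “bounded” concrete, here’s a toy sketch; the forall/exists helpers are invented for illustration, but the underlying point is standard: when the universe of numbers is finite, every quantified statement about it can be settled by brute-force checking, leaving no room for a true-but-unprovable sentence.

```python
# Toy "bounded arithmetic": the numbers are just 0..N-1, and every
# quantified statement about them is decidable by exhaustive search.
N = 100

def forall(pred):
    """True iff pred(x) holds for every x in 0..N-1."""
    return all(pred(x) for x in range(N))

def exists(pred):
    """True iff pred(x) holds for some x in 0..N-1."""
    return any(pred(x) for x in range(N))

# "Every number has a successor": false once the universe is bounded.
print(forall(lambda x: exists(lambda y: y == x + 1)))      # False

# "Addition is commutative": true, and verified in finite time.
print(forall(lambda x: forall(lambda y: x + y == y + x)))  # True
```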
Another response would be to say that while we can’t actually do all of arithmetic (due to not living long enough to add random trillion-digit numbers, to losing interest long before understanding the Earth-sized proof, etc.), we can do it all in principle. This has the nice feature of sounding good; the problem with it is that it doesn’t actually mean anything without further elaboration, and I don’t know of any further elaboration that saves the argument. (For instance, you might end up proving that something that isn’t really much like a human at all, and that can’t in fact exist, isn’t just executing an algorithm; but how interesting is that?)
Secondly, human thought is not consistent. This will be blindingly obvious to some people :) but apparently not to all professional philosophers. I am quite certain that I have at least one false belief; and just to avoid amusing paradoxes I will add that I am quite certain that I have at least one false belief other than this one. I am quite certain that I believe at least one pair of things that contradict each other. And I am quite certain that you, and Roger Penrose, do also.
Anyone who thinks that all of their beliefs are consistent, and that none contradicts any other, is someone that I wouldn’t want in a position of power or responsibility; such persons are dangerous. Human thought, and belief, are very very fallible, and it’s important not to forget that.
And last, humans cannot prove all of the true statements implied by their beliefs. Another shocker, eh? In fact, except possibly for some really heads-down mathematical theorists with no lives, I’d say that humans cannot prove the vast majority of the true statements implied by their beliefs, or for that matter the vast majority of those beliefs themselves. Really, how many of the things that you believe could you prove, in the mathematical sense of “derive from first principles via the basic rules of mathematics and logic”? That’s not really the way that human belief works at all.
So there we are. The Incompleteness Theorem fails to prove that human thought is “non-algorithmic,” because humans are not the kind of thing the Theorem talks about at all (consistent, arithmetic-capable systems); and even if we were, we manifestly can’t do the very thing the Theorem says such systems can’t do, namely prove every true statement.
Now I could give Penrose the benefit of the doubt here, and suspect that I’m just missing some important part of his argument, and my criticisms miss the point. But that’s what I used to think about John Searle, until he came right out and said that the Problem of Other Minds isn’t really a problem because his dog has adorable floppy ears and deep brown eyes.
So until I get a good argument to the contrary (and candidate good arguments are eagerly solicited from my loyal readers), I’m going to assume that Penrose is just wrong here, perhaps for interesting psychological reasons. I will continue to wonder why people bother talking about his argument as though it might be true, to be generally mystified by consciousness, and to know nothing to speak of about what sorts of things might, or might not, have it.