Posts tagged ‘consciousness’

2015/12/14

Consciousness and the Verbal Bias

I have been privileged lately to be part of a little informal group that meets (twice now, I think) in a Chinese restaurant on the West Side and (more often but more diffusely) in email, and talks about the mysteries (or otherwise) of consciousness (whatever that is).

There is me, and Steve who used to have a weblog a million years ago, and some smart folks from Columbia University. We are all amateurs (although Steve threatens to lure in a professional philosopher in some capacity), but that may be a Good Thing.

(Extremely long-time readers of this weblog in its various incarnations, if there are any, may recall the ancient Problems of Consciousness pages that Steve and I did. Highly related!)

Here is a pile of books.

As we’ve talked about these things, I have for some reason found myself increasingly attracted to the “we are just passengers” approach to consciousness. Not so much as something to believe, but as something to think about.

The idea behind this approach (which may or may not be the same thing as “epiphenomenalism”) is that while subjective experience reflects what happens in the objective (or “physical” or “natural” or whathaveyou) world, it does not influence it in any way.

Subjective experience (and therefore “us”, if we identify with our subjective experiences) is just a passenger, an observer, and to the extent that we feel like we are making decisions and carrying them out, we are just mistaken. Either we are so constituted that we always (or almost always) decide to do that thing that our bodies were going to do anyway, or (perhaps more likely) we actually “decide” what to do a few milliseconds after our bodies do it; we make up stories quickly and retroactively to explain why we “decided” to do that.

What if this were true? We’d no longer have to worry about how the subjective realm has effects on the objective (it doesn’t). We may still have to worry about how subjective experience finds out what is happening in the objective world, but that’s always been the easier part; we can probably even say that well it just does, which is roughly what we say to someone who worries about why masses experience gravitational attraction.

We don’t really have to worry about Other Minds any more, either!  Or rather, we will be nicely justified in giving up on that entirely!  No way I’m going to be able to determine anything like objectively whether you, or any other physical system, has subjective experience, since subjective experience causes no discernible (or indiscernible) effects in the objective world; so I don’t need to feel guilty about not actually knowing, but just being content with whatever working hypothesis seems to result in the best parties and so on.

One puzzle that seems to remain in the We Are Just Passengers (perhaps better called the I Am Just A Passenger) theory, is why it should be the case that some of the things that my body says seem to reflect so accurately what I subjectively experience. If I have no effect on the objective world, why should this part of the objective world (the things my body says, writes in weblogs, etc) in fact correspond so well to how it feels to be me?

I started out thinking that it would be really interesting to see a theory about that: that would explain why objective biological bodies would tend to commit speech-acts that describe subjective experience, without that explanation including actual subjective experience anywhere in the causal chain.

Then like yesterday or something I had what may or may not be an insight: if a version of the Passenger theory can hold that I make my “decisions” by quickly rationalizing to myself the things that I (“subconsciously”) observe my body doing, why can’t it also hold that some significant part of what I “experience” is similarly made up just after the fact, as I retroactively experience (or remember experiencing) things corresponding to what I perceive my body saying (where “saying” here includes whatever unvoiced but subvocalized inner narration-acts occur)?

That is, how certain are we (am I) that we (where each “we” identifies with our individual subjective experience) actually cause the speech-acts that our bodies carry out? I don’t see any reason we should be particularly infallible about that, at least any more than we should be infallible about causing our bodies to do other things, like buying chicken instead of turkey, or going to the opera.

We have a bias toward verbal behaviors, I will suggest, and tend to assume (without, I will suggest, any really very good reason) that verbal behaviors reflect the contents of subjective experience more or better than other kinds of behaviors do.

Of the various studies that have been done suggesting that our bodies start to do things before “we” have actually “decided” to do them, I recall (without actually going back and looking, because yolo) that the experimenters more or less assumed that verbal behaviors reflected the activity of subjective consciousness, whereas other body behaviors (and neural firings and so on) reflected mere physical stuff.  But why assume that?

In the extremely surreal and fascinating phenomenon of blindsight, a person will (for instance) claim verbally not to be able to see anything to the right, but will pretty reliably catch (or avoid) a ball tossed from the right side.

This is pretty much universally described as a case where our bodies can react to something (the ball from the right) that we don’t have conscious awareness of.

But why is this the right description? One thing the body does (the catching or avoiding) indicates awareness of the ball, and another thing the body does (the saying “no, I can’t see anything on that side”) indicates a lack of awareness.

Why do we assume that the verbal act reflects the contents of subjective awareness, and the other behavior doesn’t?

If someone couldn’t speak, but could catch a ball, we would generally not hesitate to say that they had subjective awareness of the ball.

But if the person does speak, and says things about their subjective awareness, we take that saying as overriding the non-verbal behaviors.

Could we be wrong?

The two main other kinds of things that might be happening are: that the person has subjective awareness of the ball, but for some reason the speech parts of their body insist on denying the fact; or that there are two subjective consciousnesses here, and one (associated with the speech behaviors) is not aware of the ball, but the other (associated with the catching or avoiding) is.

The first of these seems weird because we aren’t used to thinking about verbal behaviors (at least from people) happening without consciousness. The second seems weird because we aren’t used to thinking (outside of split-brain cases) of two consciousnesses associated with the same person.

(Would it be terribly frustrating to be the non-verbal consciousness in the second case? Aware of the ball, catching the ball, experiencing those things, but unable to speak when asked about it, and unable to stop the bizarrely traitorous speech organs from denying it. Or maybe that consciousness is more deeply non-verbal, and doesn’t understand and/or doesn’t have any particular desire to respond to, the questions being asked.)

Hm, where did all of that get us? I think I’ve written down pretty much what I wanted to capture: the idea that even our own speech acts might be, not things uniquely caused by our subjective consciousnesses, but simply more things that happen in the world that might or might not have any particular causal connection to subjectivity.  And, perhaps as a consequence, that when we are developing theories about what other consciousnesses there might be out there in the world, we should watch ourselves for unwarranted bias toward speech acts over other behaviors.

Perhaps we can develop some good theory about why speech acts are in fact special in these ways, but I don’t have one at the moment, and I don’t know if anyone else has seen a need for one and written down any words in that direction.  (If you do, please let me know!)

And in the meantime, perhaps not assuming that speech-acts are special can help us reach some interesting places we would not otherwise have reached, or avoid some puzzles that would otherwise have puzzled us.

 

2011/10/07

Friday, October 7, 2011

I’m taking the day off at random; it is very nice! We don’t have Monday off, so it’s a mere three-day weekend, but still.

Hos, Boobies, an’ Orgasms

So we signed up for HBO for a month, mostly so we could watch this George Harrison Special that they had. The program on HBOHD right before it was “Making it in America”, which seems to be a satirical comedy about naked people having uncomfortable-looking sex. Then the program right after it was “Cathouse” something, a hard-hitting documentary about what it’s like to be an escort, including numerous scenes of nudity and/or sex. And then after that was “Katie Someone on Sex Toys”, which featured sex, as one might expect from the name, and also a blonde ditzy stark-naked narrator with large artificial breasts (as one might also expect from the name I suppose, since Katie Someone is apparently a relatively well-known porn star).

Surprisingly the George Harrison Special didn’t include any gratuitous nudity or sex, at least not that I noticed. But the other programs suggest a certain theme, or one might even say obsession.

Weird.

They Aren’t Like Us, You Know

So (maybe all subsections should start with “So”) I finished listening to that Learning Company course on consciousness, and it was indeed pretty basic. It was also disappointingly simplistic on the whole “what kinds of things have consciousness, and how could we possibly know?” question.

The professor says things like “to be conscious is to be the subject of sensation” as if it actually told us anything very interesting. He also talks about how impossible it would be for fish to become aware of, or know anything about, water, as though that was anything more than the flimsiest of metaphors (flimsy because it falls apart as soon as one asks whether humans could ever know anything about air).

And in general he has very definite, but apparently completely unsupported, opinions about what is or might be conscious. He is always talking about what he is “inclined” or “strongly inclined” to “say”; but I find myself rather definitely uninterested in what he is inclined to say: I want to know what is true.

He is for instance inclined to say that there is something that it is like to be an amoeba, but that there isn’t anything that it is like to be a “machine” (where “machine” is not further defined). My suspicion is that that inclination comes from a sort of unthinking “carbon compounds good, silicon compounds bad” meme with nothing very interesting behind it, or at least he gives us no reason to think otherwise.

The question of what things might be conscious is rather a different question from the question of how we might come to know that a thing is conscious. We come to know that other humans are conscious because we observe a strong correlation between our own actions and our own consciousness, and probably-justifiably conclude that people that take similar actions have similar consciousnesses. (One thing the professor gets right is pointing out that the claim that we each have only one datapoint, ourselves, on the subject is wrong; actually we each have a huge number of datapoints: all of our conscious actions.)

But it’s important to distinguish between things that we can come to know are conscious (other humans, probably other relatively high-level animals, possibly unicellular microorganisms although I would take some convincing), things that we can come to know aren’t conscious (not sure what if anything is in that set, although the professor seems to think that “machines” are in it), and things that we can’t come to know are conscious (or at least can’t come to know it in the same way), but still might be conscious for all we know.

Even if I bought the argument that an amoeba’s actions are more like mine than the actions of any machine could ever be (which I think is in fact utterly false), that would still not be any reason to think that no machine could ever be conscious. It would just be reason to think that I could not come to know that any machine was conscious by way of the behaves-like-me argument.

Conflating the truth of a thing with one’s ability to find out that truth is the height of arrogance, not to mention silly.

“Are there any apples in the box?”

“Nope.”

“How do you know?”

“The box is closed, and I can’t see inside. Must be empty.”

I don’t know if there’s something that it’s like to be an amoeba, or a tree, or a jackhammer, or Deep Blue. I think the whole question is deep and mysterious and fascinating. Pretending to answer it by just examining one’s pretheoretic inclinations to say things is completely unsatisfactory, and I’m disappointed in this professor for doing not much more than that.

One possible reaction to all this is to say oh, well, phht, it may be philosophically fun to speculate that maybe there’s something it’s like to be a tree, but really there isn’t, and it has no practical interest. I think it was Nagel or someone who pointed out that some extremely alien Martians might examine us and their more practical citizens might say the same thing about us, and they would be factually wrong, since in fact there is something that it is like to be one of us.

And I’d like to not be factually wrong, when feasible.

Blue What?

So I am really liking this wireless Bluetooth headset thing! It’s ummm this one. I bought it to work with the iPad, which it does very nicely, and it turns out that the Windows 7 laptop here also has Bluetooth, and also works nicely with it.

My only complaint is that when anything of interest happens (the signal momentarily dropping or reconnecting, one accidentally trying to turn the headset volume higher or lower than it goes, etc) it makes a LOUD BEEPING NOISE in one’s ear, which seems uncalled-for. Also switching it from the iPad to the laptop and back requires a bit more messing-around than I’d like, but maybe I just haven’t found the right buttons to push yet.

So anyway I can now listen to sounds being produced by either the iPad or the laptop without having to untangle wires, keep my head carefully within N inches of the device, or worry about having the things rudely ripped from my ears by passing cats or the corners of things.

It is very modern and shiny!

There was some other witty section title I was going to use

But I have forgotten it. :) I have also been playing Glitch, which is fun and silly, and some WoW, and always Second Life. And watching Buffy the Vampire Slayer episodes on Netflix on the iPad! Which is also fun and silly. :)

2011/09/30

Non-algorithmic pixie dust

So I’m still listening to that Learning Company Course on Consciousness. It’s interesting, keeps my mind awake, although so far there’s nothing that’ll be new to anyone who has an undergraduate degree in the area (raises hand), or who’s just done a lot of reading.

In the lecture on Consciousness and Physics, I was somewhat disappointed that the professor repeated Roger Penrose’s incompleteness-theory-based argument that thinking things can’t be (just) carrying out algorithms (and therefore that computers, say, can’t think), with a straight face, and without pointing out the difficulties. I think that argument is pretty much entirely wrong, and not even in very subtle ways, so I will rail against it here (I originally typed that I would “inveigle” against it here, and was proud of myself for spelling it right, even though it’s entirely the wrong word).

Gödel’s Incompleteness Theorem says (quite surprisingly and counterintuitively, which is why it’s famous) that any formal system (that is, any set of symbols and rules about how you can manipulate them to derive some from others) that is at least powerful enough to represent arithmetic (like, say, the digits and plus and equal signs and stuff), and that is consistent (that is, there’s no statement S where you can both prove that S is true, and prove that it’s false, in the system), is incomplete, where incomplete means that there is at least one statement that is true in the system that cannot be proven in the system. If you want to prove that true statement, you have to add it as an axiom. (And the resulting system is now unable to prove some other true statement, so it’s still incomplete.)

(This immediately seems counterintuitive, because after all what could “true in a formal system” mean, apart from “provable in that system”? It turns out those really are different, and realizing that was one of the more interesting things about learning about the Incompleteness Theorem in the first place.)
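For the record, here is the theorem in roughly its standard modern form (my paraphrase, not the course’s; one technical footnote is that Gödel’s original proof assumed slightly more than consistency, a condition called ω-consistency, which Rosser later showed how to drop):

```latex
% Godel's First Incompleteness Theorem (semantic form, paraphrased):
% if T is an effectively axiomatized, consistent formal theory that
% interprets basic arithmetic, then there is a sentence G_T such that
T \nvdash G_T
\qquad \text{and} \qquad
T \nvdash \lnot G_T ,
% and yet G_T is true in the standard model of arithmetic, because
% G_T is constructed so that it in effect says
% "G_T is not provable in T".
```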

Now Roger Penrose, who is really good at mathematics and physics and Minkowski spaces and geometric tilings and all, but as far as I can tell not all that good at philosophy, famously argues that the Incompleteness Theorem shows that we humans cannot be merely executing algorithms when we think, because things that execute algorithms can be modeled as formal systems, and (the argument goes) we humans can do arithmetic, and we are consistent, and yet there is no statement that is true but that we humans can’t prove.

Or, as Professor Robinson puts it in the Course Guidebook here, “[w]e are able to reflect on our own problem-solving maneuvers without importing into the system some set of axioms, otherwise unknown to us, in order for us to make sense of what we are thinking.”

Now this is awfully sloppy. The Incompleteness Theorem isn’t some broad fuzzy statement that a formal system can’t “reflect upon” its own “problem-solving maneuvers” without adding extra axioms; it’s that it can’t prove every single true statement without doing so.

Kind of an important difference.

To pummel this completely into the ground, for this argument to hold any water, three things would have to be true:

1. Human thought is at least powerful enough to do arithmetic.

2. Human thought is consistent.

3. Human thought is able to prove every true statement implied by its beliefs.

Not only are these not all true, I think it’s pretty clear that none of them are true. In detail:

First, human thought is not powerful enough to do arithmetic. Now how can I say that? We do arithmetic all the time! But for the purposes of the Incompleteness Theorem, we have to not just be able to do some arithmetic, we have to be able to do it all. I can add pretty big numbers together, I can (with some outside assistance) prove some moderately complex theorems. But that’s a (literally) vanishingly small fraction of what the Incompleteness Theorem requires. I can’t add the vast majority of trillion-digit numbers, and I would bet money that Penrose can’t either. The usual proof of the Incompleteness Theorem relies on utterly enormous proofs, proofs which if written down would probably require a book larger than the Earth. I can’t generate, or verify, or even begin to understand, a proof that big (and neither can Penrose, or any other human).

One response to this is that while humans can’t do all arithmetic, we can do some bounded subset of it; some arithmetic that doesn’t extend to numbers bigger than N for some finite N, for instance. And that’s possible, but the Incompleteness Theorem as far as I know has never been proven for bounded arithmetics; that is, it tells us nothing whatever about systems that are only powerful enough to do bounded arithmetic. So if that’s what we are, the theorem tells us nothing about ourselves.
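The bounded-arithmetic point can be made concrete with a toy sketch (my own illustration, not anything from the post or from Penrose): over a finite domain {0, …, N}, any claim whose quantifiers are all bounded by N can be decided by brute-force enumeration, so every such sentence gets settled one way or the other, and there is no room for a true-but-unprovable one.

```python
# Toy illustration: a "bounded arithmetic" over the finite domain
# {0, ..., N}. Because every quantifier ranges over finitely many
# values, exhaustive checking decides every sentence, so this little
# system is complete and Godel's theorem does not apply to it.

N = 20
domain = range(N + 1)

def forall(pred):
    """Check a universally quantified claim by trying every element."""
    return all(pred(x) for x in domain)

def exists(pred):
    """Check an existentially quantified claim by trying every element."""
    return any(pred(x) for x in domain)

# "Every x in the domain has some y in the domain with x + y = N":
# decided by enumeration, and true (take y = N - x).
claim1 = forall(lambda x: exists(lambda y: x + y == N))

# "Some x in the domain satisfies x * x = N + 1": decided by
# enumeration, and false for N = 20, since 21 is not a perfect square.
claim2 = exists(lambda x: x * x == N + 1)

print(claim1)  # True
print(claim2)  # False
```

The trick stops working the moment a quantifier is allowed to range over all of the natural numbers: the enumeration never terminates, and that unbounded case is exactly where incompleteness lives.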

Another response would be to say that while we can’t actually do all of arithmetic (due to not living long enough to add random trillion-digit numbers, to losing interest long before understanding the Earth-sized theorem, etc), we can do it all in principle. This has the nice feature of sounding good; the problem with it is that it doesn’t actually mean anything without further elaboration, and I don’t know of any further elaboration that saves the argument. (For instance, you might end up proving that something that really isn’t anything much like a human at all, and can’t in fact exist, isn’t just executing an algorithm, but how interesting is that?)

Secondly, human thought is not consistent. This will be blindingly obvious to some people :) but apparently not to all professional philosophers. I am quite certain that I have at least one false belief; and just to avoid amusing paradoxes I will add that I am quite certain that I have at least one false belief other than this one. I am quite certain that I believe at least one pair of things that contradict each other. And I am quite certain that you, and Roger Penrose, do also.

Anyone who thinks that all of their beliefs are consistent, and that none contradicts any other, is someone that I wouldn’t want in a position of power or responsibility; such persons are dangerous. Human thought, and belief, are very very fallible, and it’s important not to forget that.

And last, humans cannot prove all of the true statements implied by their beliefs. Another shocker, eh? In fact except possibly for some really heads-down mathematical theorists with no lives, I’d say that humans cannot prove the vast majority of the true statements implied by their beliefs, or for that matter the vast majority of the beliefs themselves. Really, how many of the things that you believe could you prove, in the mathematical sense of “derive from first principles via the basic rules of mathematics and logic”? That’s not really the way that human belief works at all.

So there we are. The Incompleteness Theorem fails to prove that human thought is “non-algorithmic” because humans are not the kind of thing that the Theorem talks about at all (consistent and arithmetic-capable systems), and for that matter, even if we were, we can’t do what the Theorem says that that kind of thing can’t do.

Now I could give Penrose the benefit of the doubt here, and suspect that I’m just missing some important part of his argument, and my criticisms miss the point. But that’s what I used to think about John Searle, until he came right out and said that the Problem of Other Minds isn’t really a problem because his dog has adorable floppy ears and deep brown eyes.

So until I get a good argument to the contrary (and candidate good arguments are eagerly solicited from my loyal readers), I’m going to assume that Penrose is just wrong here, perhaps for interesting psychological reasons, and continue both to wonder why people bother talking about his argument as though it might be true, and to be generally mystified by consciousness, and not even know anything to speak of about what sorts of things might, or might not, have it.

2011/09/18

Home again

So Dad passed away a little after midnight, on Wednesday (the 14th). He was the best Dad ever, and his passing was as gentle and as undemanding on his loved ones as one would have expected. I was there, down in Florida with him and Stepmom. I think maybe he was waiting for me to show up; one of the last things he said to me (along with some discussion of Linda Ronstadt and Alan Watts) was “you got here just in time”.

And now I am back home, and while my usual witty and ironic commentary here might be a bit subdued for awhile, I figure Dad is the one who gave me at least the good parts of the wit and irony, and I shouldn’t let it be suppressed on his behalf. I have been remembering all sorts of things about him (pretty much unreservedly positive, because I am the luckiest son ever), and some of them may get written down here eventually, but probably not right now.

It makes you think, in a more serious and concrete way, about consciousness and death and what might happen to the one after the other. (I’m listening to a course that touches on the subject, but I think it’s going to stay pretty abstract and theoretical.)

As far as I know there’s no particular reason to think that the usual suspects have it at all correct; they are just, layers of complication aside, taking a bunch of very old guesses far too seriously. Those guesses might be right, but they’re no more likely to be than any of millions of similar guesses that didn’t happen to get written down.

It could be that nothing happens, that consciousness just goes out at death. That would be awfully boring, though, and it’s not clear there’s much more to say about it.

There’s a theory, which somewhat hearteningly I can’t find on the Interwebs at the moment, that consciousness, mysterious and amorphous as it is, quantum-tunnels among the available possible world-lines, and always finds one in which life continues. So although other people may experience a world in which one dies, one’s own consciousness avoids those, and one is always, in one’s own world-line, immortal.

It’s not clear what it would mean for that to be true or false; it’s not obviously falsifiable, at least from here. But that’s okay.

Dad lives, in various senses and to various degrees, in the state-spaces of various brains, mine and Stepmom’s and lots of others. Does that mean anything about his consciousness? No idea; consciousness is hard.

A friend told me, a week or two ago, that when her father died, a friend had a dream that he was waiting in line, all excited because he was waiting to find out what he was going to do next.

I like that thought.

I know Dad would want to be doing something interesting. I’ll be sure to arrange that at least the parts of him here enriching my own state-space are.

Thanks, Dad, for everything.