Posts tagged ‘philosophy’


Consciousness and the Verbal Bias

I have been privileged lately to be part of a little informal group that meets (twice now, I think) in a Chinese restaurant on the West Side and (more often but more diffusely) in email, and talks about the mysteries (or otherwise) of consciousness (whatever that is).

There is me, and Steve who used to have a weblog a million years ago, and some smart folks from Columbia University. We are all amateurs (although Steve threatens to lure in a professional philosopher in some capacity), but that may be a Good Thing.

(Extremely long-time readers of this weblog in its various incarnations, if there are any, may recall the ancient Problems of Consciousness pages that Steve and I did. Highly related!)


Here is a pile of books.

As we’ve talked about these things, I have for some reason found myself increasingly attracted to the “we are just passengers” approach to consciousness. Not so much as something to believe, but as something to think about.

The idea behind this approach (which may or may not be the same thing as “epiphenomenalism”) is that while subjective experience reflects what happens in the objective (or “physical” or “natural” or whathaveyou) world, it does not influence it in any way.

Subjective experience (and therefore “us”, if we identify with our subjective experiences) is just a passenger, an observer, and to the extent that we feel like we are making decisions and carrying them out, we are just mistaken. Either we are so constituted that we always (or almost always) decide to do that thing that our bodies were going to do anyway, or (perhaps more likely) we actually “decide” what to do a few milliseconds after our bodies do it; we make up stories quickly and retroactively to explain why we “decided” to do that.

What if this were true? We’d no longer have to worry about how the subjective realm has effects on the objective (it doesn’t). We may still have to worry about how subjective experience finds out what is happening in the objective world, but that’s always been the easier part; we can probably even say that well it just does, which is roughly what we say to someone who worries about why masses experience gravitational attraction.

We don’t really have to worry about Other Minds any more, either!  Or rather, we will be nicely justified in giving up on that entirely!  No way I’m going to be able to determine anything like objectively whether you, or any other physical system, has subjective experience, since subjective experience causes no discernible (or indiscernible) effects in the objective world; so I don’t need to feel guilty about not actually knowing, but just being content with whatever working hypothesis seems to result in the best parties and so on.

One puzzle that seems to remain in the We Are Just Passengers (perhaps better called the I Am Just A Passenger) theory, is why it should be the case that some of the things that my body says seem to reflect so accurately what I subjectively experience. If I have no effect on the objective world, why should this part of the objective world (the things my body says, writes in weblogs, etc) in fact correspond so well to how it feels to be me?

I started out thinking that it would be really interesting to see a theory about that: that would explain why objective biological bodies would tend to commit speech-acts that describe subjective experience, without that explanation including actual subjective experience anywhere in the causal chain.

Then like yesterday or something I had what may or may not be an insight: if a version of the Passenger theory can hold that I make my “decisions” by quickly rationalizing to myself the things that I (“subconsciously”) observe my body doing, why can’t it also hold that some significant part of what I “experience” is similarly made up just after the fact, as I retroactively experience (or remember experiencing) things corresponding to what I perceive my body saying (where “saying” here includes whatever unvoiced but subvocalized inner narration-acts occur)?

That is, how certain are we (am I) that we (where each “we” identifies with our individual subjective experience) actually cause the speech-acts that our bodies carry out? I don’t see any reason we should be particularly infallible about that, at least any more than we should be infallible about causing our bodies to do other things, like buying chicken instead of turkey, or going to the opera.

We have a bias toward verbal behaviors, I will suggest, and tend to assume (without any really very good reason) that verbal behaviors reflect the contents of subjective experience more or better than other kinds of behaviors do.

Of the various studies that have been done suggesting that our bodies start to do things before “we” have actually “decided” to do them, I recall (without actually going back and looking, because yolo) that the experimenters more or less assumed that verbal behaviors reflected the activity of subjective consciousness, whereas other body behaviors (and neural firings and so on) reflected mere physical stuff.  But why assume that?

In the extremely surreal and fascinating phenomenon of blindsight, a person will (for instance) claim verbally not to be able to see anything to the right, but will pretty reliably catch (or avoid) a ball tossed from the right side.

This is pretty much universally described as a case where our bodies can react to something (the ball from the right) that we don’t have conscious awareness of.

But why is this the right description? One thing the body does (the catching or avoiding) indicates awareness of the ball, and another thing the body does (the saying “no, I can’t see anything on that side”) indicates a lack of awareness.

Why do we assume that the verbal act reflects the contents of subjective awareness, and the other behavior doesn’t?

If someone couldn’t speak, but could catch a ball, we would generally not hesitate to say that they had subjective awareness of the ball.

But if the person does speak, and says things about their subjective awareness, we take that saying as overriding the non-verbal behaviors.

Could we be wrong?

The two main other kinds of things that might be happening are: that the person has subjective awareness of the ball, but for some reason the speech parts of his body insist on denying the fact; or that there are two subjective consciousnesses here, and one (associated with the speech behaviors) is not aware of the ball, but the other (associated with the catching or avoiding) is.

The first of these seems weird because we aren’t used to thinking about verbal behaviors (at least from people) happening without consciousness. The second seems weird because we aren’t used to thinking (outside of split-brain cases) of two consciousnesses associated with the same person.

(Would it be terribly frustrating to be the non-verbal consciousness in the second case? Aware of the ball, catching the ball, experiencing those things, but unable to speak when asked about it, and unable to stop the bizarrely traitorous speech organs from denying it. Or maybe that consciousness is more deeply non-verbal, and doesn’t understand and/or doesn’t have any particular desire to respond to, the questions being asked.)

Hm, where did all of that get us? I think I’ve written down pretty much what I wanted to capture: the idea that even our own speech acts might be, not things uniquely caused by our subjective consciousnesses, but simply more things that happen in the world that might or might not have any particular causal connection to subjectivity.  And, perhaps as a consequence, that when we are developing theories about what other consciousnesses there might be out there in the world, we should watch ourselves for unwarranted bias toward speech acts over other behaviors.

Perhaps we can develop some good theory about why speech acts are in fact special in these ways, but I don’t have one at the moment, and I don’t know if anyone else has seen a need for one and written down any words in that direction.  (If you do, please let me know!)

And in the meantime, perhaps not assuming that speech-acts are special can help us reach some interesting places we would not otherwise have reached, or avoid some puzzles that would otherwise have puzzled us.



Demographic substitution does not preserve truth

When I was in kid-school, a Social Studies teacher pointed out to us that there was no entry in the index of our textbook for “Women’s history” or “Women” in general.

I flipped through it and raised my hand, and said that hey, there was nothing for “Men’s history” or “Men”, either!

This is because I was a smug little shit who didn’t have the first clue how the world actually works.

(I like to think that this is a bit less true now.)

The teacher more or less adored me just because I was smart and (usually) well-behaved, and rather than giving me the smack-down I really needed, she (I vaguely recall) just said something like “It’s not the same thing”.

Which is entirely correct.

It’s easy to see why we might expect statements about one group to have the same status (truth, objectionability, etc.) as the same statements applied to another group.  In many contexts, there is basic fairness involved.  “Women should be able to participate in government” and “Men should be able to participate in government” are both true.  “Men should not be jerks” and also “Women should not be jerks”.  Or simple fact: “Most white people have toes”, and “Most people of color have toes”.

On the other hand, a few moments of thought reveals lots of statements for which this doesn’t work.  “Most pregnant people are women” is true; but “Most pregnant people are men” is false.  “Until comparatively recently, the law considered women to be essentially property” is true; but “Until comparatively recently, the law considered men to be essentially property” is false.  “Western society grants extensive privilege to white men per se” is pretty clearly true, but “Western society grants extensive privilege to disabled women per se” is implausible at best.
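The examples above can be put in quasi-formal terms: demographic substitution is an operation on statements, and the claim is simply that this operation does not always preserve truth. A toy sketch in Python (my own illustration, with a made-up handful of statements; nothing here comes from the post itself):

```python
# Toy model: a statement is a (predicate, group) pair with a truth value.
# Demographic substitution swaps the group; the question is whether
# the truth value survives the swap. (Hypothetical mini-dataset.)

truths = {
    ("should participate in government", "women"): True,
    ("should participate in government", "men"): True,
    ("most pregnant people are", "women"): True,
    ("most pregnant people are", "men"): False,
}

def substitute(statement):
    """Swap the demographic group in a statement."""
    predicate, group = statement
    other = {"women": "men", "men": "women"}[group]
    return (predicate, other)

def preserves_truth(statement):
    """Does the statement keep its truth value under substitution?"""
    return truths[statement] == truths[substitute(statement)]

print(preserves_truth(("should participate in government", "women")))  # True
print(preserves_truth(("most pregnant people are", "women")))          # False
```

Nothing deep is happening in the code; it just makes vivid that substitution is a transformation like any other, with no special guarantee attached to it.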

So far these examples are “ought” statements that survive under demographic substitution, plus “is” statements that sometimes do and sometimes don’t.  But in any plausible morality, situated “ought” statements are implied by “is” statements about their situation; their context.

A very strong case could be made, for instance, that “Western society grants extensive privilege to white men per se”, and “Mainstream study of history has been from a heavily male-oriented perspective” are both true, and that as a result “It is unfortunate that there is no entry about women in the index of this history textbook” can be true, while “It is unfortunate that there is no entry about men in the index of this history textbook” is silly (because, as I vaguely recall my Social Studies teacher pointing out, the whole book is about that).

More significantly (and I imagine more controversially, although perhaps not among y’all weblog readers), there are sets of “is” statements that don’t survive demographic substitution, from which we can conclude that for instance “Women, people of color, and LGBTQ people have a legitimate need for safe spaces that exclude those not in the relevant group” is true, whereas “Men, white people, and straight people have a legitimate need for safe spaces that exclude those not in the relevant group” is not. Or in shorter words, Women’s Rights and Black Power are not necessarily in the same moral categories as Men’s Rights and White Power.

And I am happy to have written that down, because I’ve had the argument rattling around inchoate in my head for some years.

Now there are a significant number of people posting things on the Internet who would claim that the concluding sentence, that Women’s Rights and Black Power are not necessarily in the same moral categories as Men’s Rights and White Power, is just obviously false, and unfair, and sexist / racist, and so on. Some of them are, I imagine, smug little shits who don’t have the first clue how the world actually works; some others are just doing a good imitation.  To avoid the argument that we would use to get to the conclusion, they would either deny some of the initial “is” statements (denying that there is currently structural oppression of women or people of color, for instance), or deny in one way or the other that those statements imply the conclusion.

Or, perhaps more commonly, they would just repeat that the concluding sentence is sexist / racist, because what’s good for the goose is good for the gander, because fairness, and so on.  Because, that is, demographic substitution ought to preserve the truth of “ought” statements, and saying that it doesn’t is sexist / racist / etc.

What finally pushed me over the edge to write this down was some Twitter discussion of this rather baffling story on the often-odious “Breitbart” site, by the often-odious Milo somebody.  It’s still not clear to me what the intent of the story is, aside from a general suspicion that it’s supposed to be humorous in some way (I do like the part where someone asks what direction they’re driving, and someone else looks at the GPS and says “up”; that’s funny!).  But at least some of the Milo supporters in the Twitter thread that I foolishly walked into, thought that it was obviously a parody of feminist claims that various aspects of technology are gendered against women.

The argument would be, I guess, something like “I have written this piece claiming that an aspect of technology is anti-male, and the piece is silly; therefore other pieces, claiming that other aspects of technology are anti-female, are also silly.”  Or, perhaps more charitably, “See how silly this claim that a technology is anti-male is; claims that technologies are anti-female are similar to it, and are just as silly!”.

And this brought to mind some sort of claim like “It’s silly to analyze technology for signs of structural oppression of women, because it’s silly to analyze technology for signs of structural oppression of men, and demographic substitution preserves silliness!”.

But (whatever other additional things might or might not be going on in the case), demographic substitution doesn’t preserve silliness.  Or various other properties.

So there we are!


A footnote in Kaufmann’s translation of “I and Thou”

I was struck just now to find, tucked away at the end of a footnote discussing the technical details of one of the many tricky bits of translation in Buber’s “I and Thou”, this paragraph from Kaufmann:

The main problem with this kind of writing is that those who take it seriously are led to devote their attention to what might be meant, and the question is rarely asked whether what is meant is true, or what grounds there might be for either believing or disputing it.

It is easy to read this as a sort of jarring Philistinism, as though Kaufmann is wondering wistfully (or grumpily) why Buber has to use all of these coinages and poetic turns of phrase, all of these images and metaphors, rather than laying out his argument clearly, in simple and common words, perhaps as a set of bulleted lists (maybe a PowerPoint deck!), so that one could analyze it logically and decide whether or not it’s likely to be true.

Which seems like a hysterically inappropriate thing to think, given that what Buber is doing here is laying out a particular way of thinking about the nature of reality and each individual’s relationship to God (or equivalent). A deeply personal way of seeing the world, that he invites the reader to consider, and (implicitly) to adopt or not according to taste.

This isn’t really a thing that admits of being true or false, or of being expressed in plain and simple words (or at least of words where “what is meant” is immediately evident without special attention being paid to the question).

For me at least, Buber is saying, “think of the things we do as divided into two kinds: the I-It and the I-You; then think of…”. This is in the imperative, and doesn’t admit of being true or false (or likely or unlikely).

And surely Kaufmann, being the translator of the silly thing, realizes this.

I see only three plausible theories here so far: that Kaufmann is just pulling our leg in this paragraph (which would be wonderful); that there is an entirely different way of understanding Buber under which the paragraph makes more sense (I would be very curious what that way is); and that Kaufmann really does fail to understand the material as anything more than muzzily-expressed truth-claims that, if only more concretely written down, one could study objectively in the lab (this seems both the most obvious, and in some way the least plausible, explanation).

It’s a funny world. :)


Liebe ist ein welthaftes Wirken

Kaufmann translates this, from Buber’s “I and Thou”, as “love is a cosmic force”, but gives us the original in a footnote to see for ourselves.

One thing I like about German and how synthetic it is (in the technical sense that I just learned; I was going to say “agglutinative”, but that turns out to be wrong) is that you can look at the parts of many words, and see how the meaning compares to the sum of those parts.

The most simple-minded translation of that phrase might be “Love is a worldly work”, which has the same nice consonance of double-ues, but a very different sense, since the English “worldly” has strong connotations that are almost the opposite of Kaufmann’s “cosmic”.

It’s interesting that the translator chose “force” here, rather than the obvious “work” (which would have read a bit awkwardly), or perhaps “act”. Because Buber is talking about love in the context of “those who stand in it and behold in it”, “force” probably makes more sense than “act”, since you can stand in a force (a force field!), but not so much in an act.

But then I wonder why Buber wrote Wirken rather than say Kraft. And then I am at, or perhaps well beyond, the very end of my competence as a translator. :)

The other day the little daughter, watching me staring into my phone and clicking and swiping without end, commented more or less “you’re taking in so much content; I don’t know if that’s healthy”.

I found myself very much in agreement with that thought, and put the phone away (temporarily) and looked at various stacks of books sitting unread here and there, and picked up “I and Thou”, read the Acknowledgements and Translator’s Key, skipped Kaufmann’s very long Prologue (these things should generally be at the end of a book, in my ever so humble opinion, so that one can encounter the work itself with more or less fresh eyes, and then read the prologue-writer’s thoughts about it afterward, when one has already one’s own ideas to compare them to), and started very slowly into the work (Werk, Wirken, Kunstwerk?) itself.

It’s a very dense book, or feels like it deserves to be treated as such, which means that I have to be careful not to spend so much time on each sentence that I eventually drift off and do other things before I get past the first chapter.

As I tweeted not long after starting (and yeah, I know; somehow Twitter and the Face Book and now even plague have all taken up residence in my ways of relating to the world):

I can’t of course actually empty the cup, and I admit I’m not really trying all that hard to.

Currently, a few more pages in, I’m wondering if Buber will go from talking about the ineffable relating that is I-You (and that he identifies with, or as, love in some sense), to a realization that the duality present even in I-You (because after all there is still I, and You) is at some level an illusion. Because that would be so Buddhist.

There are no sentient beings,
And I vow to save them.

It will be interesting either way; if he does get to some kind of non-duality, I’m sure it will have a flavor all its own. If he doesn’t, it will be interesting to see if he simply stops short of it, actively considers and denies it, or goes off in some other direction entirely.

I’ve been meaning to read this book since college sometime :) and it’s nice to finally get to it.

Solstice was nice, thank you for asking, if a little atypical. All four of us were here together, but instead of the usual Christmas Dinner with ham an’ all, we went out to the local diner.

The story: M smelled gas in the basement, so on I forget maybe the 22nd we had the gas man come and test things, and he found there was a leak somewhere in the kitchen range, and while we were moving the range out from the wall it got caught on something and when we pushed on it a little to get it past the something, the entire glass front of the oven door very enthusiastically shattered into a zillion pieces and fell onto the floor.

That was exciting!

We called the appliance place who sent out a person who determined that the range was old enough to vote, and that no one makes parts for it anymore (either for replacing the door glass or fixing any possible leak).

A new range arrived yesterday and I have baked my first loaves of bread in it, but between the breaking of the old and the installing of the new we could cook only in the microwave and crockpot, and although we considered trying to design a satisfying Solstice dinner around those, in the end we decided the local Diner would be more fun.

And it was very nice.

How do Diners do it, by the way; anyone know? How can you have that enormous a menu of available things, and be able to produce absolutely any of them in a reasonably short span of time? Are they all designed to be producible from some smallish set of ingredients, and you keep those around and ready at all times? Do all of the chefs know how to make all of the things? Are there big recipe books? Or do they look at the menu when the order comes in, figure out what you are probably expecting, and wing it?


The buzzing of distant bees

Is there evil in Heaven? And is there free will?

I know it doesn’t really make sense to spend too much time wondering about the details of fictional universes (“if Peter and his friends could only fly when they were having happy thoughts, why did Tinkerbell, who was after all the source of the pixie-dust that let them fly, seem to have no trouble flying even when she was upset?”), but I am somehow fond of these questions at the moment.
Heaven, the flowchart
It’s a subject that I don’t remember coming up in the average Internet discussion of (Judeo-Christian) religion, and it seems to me like a real quandary.

Seems likely that there is free will in Heaven (otherwise why give it to us on Earth?), and seems unlikely that there is evil (it being Heaven and all); and yet if God can make a place where there is no evil even though there is free will, why didn’t he do that on Earth?

(I started to wonder about this after hearing a couple of different theist types talk about their ideas of Heaven on NPR or something: the Jewish one said that there must be a wonderfully just afterlife because he strongly believes that the universe is just, whereas the evidence he has suggests that life isn’t just, so there must be some really very just stuff after life to make up for it; and the Christian one says that Heaven is a place where we all get whatever we truly want, and we all have learned to live together in harmony. Ha ha funny people, I thought, and also thought the “well if God can make it happen in Heaven, what’s his excuse for not doing so on Earth?” thought that we consider here.)

(Ooh, here they are! The Rabbi and the pastor; so you can judge for yourself how badly I’m misreporting their statements above.)

The usual answer to the Problem of Evil, that it comes about as a direct and inevitable result of imperfect beings with free will, seems to sort of evaporate if (as seems hard to avoid) Heaven is a place where imperfect beings have free will, and yet there is no evil there. So evil can’t really be the inevitable result of free will. So the Problem of Evil, it would seem, remains.

I did have a rather detailed discussion of this with my Jehovah’s Witness friend back in the day. He (and therefore I assume the JWs in general) have a pretty complete and interesting (if maybe sort of creepy) picture of life after the umm Big Thing, where (in the case of the JWs) the 144,000 special people or whatever it is go to live with God in Heaven or something, and all the other good people live on Earth under their direct governance more or less.

He said that yes in that world people would still have free will, and that in fact they would be able to do evil. They wouldn’t do it very much, because they would be good people living in a great environment, but it would still happen, and in that case God (i.e. Jehovah) would look into their hearts, and between the time they made a really bad decision and became evil and the time they were able to actually do anything bad as a result, He would stop them, in a very final way.

Since the JWs don’t believe in Hell, and think that all the stuff about burning and fire and stuff in the Bible is just a way of talking about ceasing to exist altogether, what happens to you if you freely choose evil after the Big Thing happens is that you just cease to exist.

Pretty weird, I thought!

And this got me thinking of a story set in that world, which I’ve never gotten around to writing, but which I think I will try to set down a general idea of here.

And in the meantime, you can ask your local rabbi or pastor or Judeo-Christian friend whether there is free will in Heaven, and whether there is evil there. I wonder if that is a hard question…

“I cannot follow the Elders anymore,” he’d said, that night, as they walked back from the orchards where they had been picking the perfect fruits that Jehovah provided for them in this perfect Earth.

“Jeremiah,” she’d exclaimed, “what can you mean, you cannot follow them? How could anyone do anything but follow them? We know that they are the appointed ones of Jehovah, that they have only our welfare at heart, that they are good and wise men. You cannot doubt, when you have seen Jehovah and His Son moving about on the Earth with your own eyes.”

“I have.” They were walking close together, hands brushing each other now and then, innocently, like brother and sister. “And I do not doubt that the Elders are those chosen of Jehovah. But…”

“But what? What is it that you can doubt?”

He’d taken a deep breath. He looked, she remembered thinking, like someone who was not quite sure of what he was saying, and speaking as much to convince himself as to convince her.

“I do not doubt the facts. The Elders are the chosen of Jehovah, and they do truly intend the best for me. But I doubt, no, I reject, their authority over me.”

“What can you mean by that, Jeremiah? Jehovah is the source of all authority, of all rightness, and He has given them their authority! It cannot be doubted, or rejected.”

“But I do reject it,” he’d said, his voice louder but still with an undercurrent of uncertainty, “I reject it as I am free to do, using the free will that Jehovah has given me. It is my right!”

She’d stopped, and taken his hands, looking very seriously into his face. The others walking in the same direction continued along, and were soon out of any danger of hearing.

“This is blasphemy,” she’d said, “this is not the use we are supposed to make of the freedom that has been given us. Can you truly do this? Do you truly, of your own free will, reject the authority of Jehovah?”

She had meant it rhetorically, really, or so she told herself afterward, saying it only so that he would say no, of course not, not that. But his face said that he took the question very seriously, and was considering it, somewhere deep inside. When he spoke again, the uncertainty was gone from his voice.

“Yes, Sarah. Yes, I d–”

And before he’d finished that last word, her head was filled by a strange sound, like the buzzing of distant bees, and her hands were empty. And Jeremiah was gone, forever.

So now, in her bed at night, she lies curled tensely after her prayers, telling herself, telling Jehovah who can see into her very heart, that she does accept His goodness and His authority, that she is His true daughter, and that she would never reject Him.

And she cries until sleep comes.

Something like that, anyway…


Quantum Physics and (not really) Free Will

It turns out that there is a well-known thing in quantum physics called “The Free Will Theorem”, developed by smart persons John Conway and Simon Kochen.

(I haven’t heard of this before, which I suspect means that, like L. Ron Hubbard’s science fiction, it didn’t exist in my original birth universe; I wonder if that means I’ve switched again recently. Always hard to tell.)

Anyway, the Free Will Theorem, which is described in two papers that are both quite readable really, is not actually about Free Will to speak of, at least not if you are a sensible compatibilist like we are, and I want to write down my thoughts here as to why and how that is.

What the Theorem actually shows is that, if some generally but not universally accepted parts of quantum physics and relativity are true, then if there is some behavior of some humans that can’t be predicted, even in principle if you knew every fact about the universe up to that point, then there is also some behavior of some elementary particles (as evidenced by the behavior of some macroscopic detection apparatus) that can’t be predicted, even in principle if you knew every fact about the universe up to that point.

Which is not a big surprise really; it’s hard to imagine a universe in which all elementary particle behavior was predictable but human behavior was somehow not, humans being made of elementary particles and all. But this Theorem puts a solid example under that intuition (as well as bringing up some other issues in physics that I won’t talk about more here).
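Schematically, and purely as my own paraphrase of the papers’ structure (the axiom names SPIN, TWIN, and FIN are theirs; this arrangement of the claim is mine):

```latex
\[
\underbrace{\text{SPIN} \;\wedge\; \text{TWIN} \;\wedge\; \text{FIN}}_{\text{physics assumptions}}
\;\Longrightarrow\;
\Bigl( \text{experimenters' choices are not functions of the past}
\;\Longrightarrow\;
\text{particles' responses are not functions of the past} \Bigr)
\]
```

That is, the theorem is a conditional: granting the physics assumptions, unpredictability at the human end implies unpredictability at the particle end.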

Conway admits somewhere in the coverage of this that he chose the name, the Free Will Theorem, intentionally to get attention. But he’s also responded to criticism of the name by saying things worth noting.

The most obvious criticism is that being unpredictable even in principle isn’t the same as having free will (and if you’re a compatibilist it’s not even a necessary condition). Conway has said a couple of things about this.

First, he’s said informally that humans and particles are the same in this way, and since we say that humans have free will we should say that about particles too:

“That’s why I insisted on using this evocative language,” Conway says. “Many people thought I should say the particle’s behaviour is indeterminate. But it would be really rude if I told you that you were indeterminate! It’s the same property and I don’t see why we should be required to speak of it as if it were a different property. Our theorem says that if humans have it, then so do particles.”

But that’s sort of silly. The fact that humans and particles share some property X, and humans have free will, doesn’t imply that particles therefore have free will; it’s just a non-sequitur.

He’s also responded to the “randomness isn’t enough for free will” argument by claiming that the indeterminacy they’ve proved for particles isn’t just randomness. From that same link above:

“…and which action the particle does is free in this sense, it is not a predetermined function of the past. And that’s not the same as randomness, oh dear me no!”

Exclamatory cuteness aside, if “not determined by anything that’s come before, and not predictable even in principle” isn’t the same as randomness, I don’t know what is.

What Conway apparently has in mind here is that the randomness is weird and quantumly nonlocal: when the behavior goes from undetermined to determined, it does it at two places at once, and the places can be sufficiently far apart that no signal can get between them in time. That doesn’t mean it’s not random, it just means that as well as being random it’s also bizarre in the usual QM way; the Free Will Theorem doesn’t tell us anything particularly new about the weirdness, it’s just one of the three assumptions that it starts from.

Conway gets all sorts of Penrose-like and speculates that while all the little “free decisions” made by particles usually sort of cancel out, our brains somehow avoid this canceling out, and that through some as yet unknown process our human-level free will pops out as a result. This, he says, makes the whole Compatibilism thing a moot point; since the universe isn’t deterministic, it just doesn’t matter whether free will is compatible with determinism. Compatibilism, the second of the papers says, is just “a now unnecessary attempt to allow for human free will in a deterministic world”.

Which is not quite right. :) Compatibilism is the recognition that indeterminacy is neither necessary nor sufficient for free will. Free will has to do with the freedom to express one’s preferences and goals in the world, not necessarily the ability to escape prediction by a hypothetical omniscient seer. Free will is possible with or without random or undetermined bits of fundamental physics (and those people who told Conway he ought to just say the particle behaviors are undetermined were right).

In fact free will requires that various of our actions are in fact influenced by, reflective of, if not determined by, past facts about the universe, those facts being the preferences, plans, and goals of the person acting with free will.

Hope that clears that up. :)


So, I’m an atheist

I’m an atheist.

But wait, says a hypothetical reader, don’t you call yourself a pantheist? And sometimes a Buddhist? And an Ariadnite? Don’t you believe that there are deep mysteries and weird things going on in the universe, beyond what science knows? Isn’t consciousness itself a profound mystery to you? And haven’t you said that you aren’t 100% certain of anything? Shouldn’t you be at most an agnostic?

And yeah, except for that last question there, those are all very true of me.

But none of that prevents me from being an atheist.

Specifically, I am an atheist because I do not believe that there is a God, where a God is an omniscient omnipotent being, existing prior to and outside of the universe, who has opinions or preferences or plans about what should happen in the universe, and who serves as the basis for morality.

(If by “God” you mean instead “an entity that is significantly more advanced technologically or morally than humanity”, or “an entity that caused there to be life on Earth”, or “a ham sandwich”, then none of this applies. Also, we are speaking different dialects of English, and mine is by far the most common one.)

I will go a little beyond that, and say that not only do I not believe there is a God in that sense, I also believe that there is not a God in that sense.

So I’m an atheist even if “don’t believe” isn’t enough for you, and you insist on “believe that not”. :)

On the hypothetical reader’s questions:

  • I’m a pantheist in that I think the universe (as broadly construed as possible) is worthy of worship. But that involves no omniscient omnipotent thing outside of or other than the universe.
  • I’m a Buddhist to some extent or other, but relevantly for this discussion Buddhism’s attitude toward deities outside of the universe is basically “don’t waste your time worrying about it”, so again there’s no conflict between Buddhism and atheism.
  • I’m an Ariadnite in that my worship of the universe (as broadly construed as possible) involves images of this lady in a white gown, swords and balls of string, and so on; but that is all metaphor, not truth-claims, and in any case the Goddess is not something other than the universe.
  • On deep mysteries, sure. Being an atheist doesn’t mean thinking that our current scientific knowledge is correct and/or complete. Same thing on consciousness. This was driven home to me recently by this very good piece and even some words in this by Sam Harris (with whom I only occasionally agree).
  • I’m not 100% sure of anything (even this!). But being an atheist doesn’t require being 100% sure that there is no God; at most it requires believing that there is no God (and really I think just not believing that there is a God will do).

On Agnosticism, we get into edge cases.

When asked “Do you believe that there is a God?”, someone who says “Yes” is a theist.

When asked “Do you believe that there is no God?”, someone who says “Yes” is an atheist.

Someone who says, “well, I really don’t know” to both questions is an agnostic.

But what if someone says “No” to both questions? I would count that person as an atheist, since they don’t believe that there is a God. But if you’d rather call them an agnostic (since they also don’t believe that there isn’t a God), that’s okay with me.

I’m an atheist either way. :)

And of course I could be wrong. I could be wrong about any belief of mine; as I think I’ve said before, anyone who thinks that some particular belief of theirs couldn’t possibly turn out to be wrong just isn’t using their imagination hard enough. But that doesn’t stop me from being an atheist.

I’m bothering to say this pretty much because of the Bacon Moose post, and because of a certain frustration I have with intelligent people, who I am pretty sure believe the same way that I do, who don’t identify as atheists.

I’d like more people to identify as atheists, because every time someone says they aren’t an atheist, 99% of the people who hear it assume that they are Christian or (theistic) Jewish or something, and that just bolsters the “atheists are weird and rare” feeling, even if what the person really meant was that while they don’t believe there is a god, they have some (generally rather contorted) reason for not identifying as atheist.

When I posted a link to the Bacon Moose posting on Facebook, in fact, I had two friends comment that (although they don’t believe there is a god in the relevant sense), they aren’t atheists. One said she is not an atheist because (if I understand her right) she just doesn’t think the question is all that important, and doesn’t want therefore to bother having a label relating to it. The other said (if I understand him right) that he’s not an atheist because if you change the meaning of the word it wouldn’t apply to him: say if you define “atheist” as “someone who is 100% certain there is no god”, or if you define “god” as “whatever caused there to be life on Earth”.

Needless to say, I didn’t find either of these arguments very compelling. :)

I suspect that, buried deep in the back of most of our minds, there is this ancient inculcated meme that atheists are icky, or grim, or narrow, or closed-minded, and that really one should not identify as one in polite company.

That is a meme I’d love to see wither away.

So here I am! :)


Getting free will wrong

Have I really never weblogified about this? I see I have written about it briefly in the ancient Problems of Consciousness tree, and that pretty much lays out my (i.e. the correct :) ) view, but I will write about it again here Just Because.

I have just had delivered to my ‘Pad Sam Harris’s recent book “Free Will”, because I heard Harris talking on the Brian Lehrer Show (also on my ‘Pad; see last month’s post on how odd the world is these days).

I haven’t read the book yet, just skimmed around a bit, but it looks as though he is going to get free will wrong in the way that so many others have gotten it wrong: by assuming that free will is possible only under certain conditions, showing that those conditions don’t hold, and concluding that there is no free will.

When in fact, of course, they just got the definition wrong.

Free will exists. I got that Girl Scout Thin Mint cookie just now, on the way here from the other side of the house, of my own free will. If someone had been holding a gun to my head and required that I get a Samoa instead, that would not have been of my own free will.

Those things are facts. The job of the philosopher is to explicate just what “free will” means, given facts like that. The philosopher who concludes that there is in fact no free will bears a very heavy responsibility, which includes convincing me that in fact I didn’t get that Thin Mint of my own free will. No philosopher has ever done that, and I think none is likely to (although I’ve been wrong before!).

From listening to Sam Harris on NPR, and looking through some of this here book, it looks like he will be getting free will wrong in a pretty typical way. Here are some excerpts from the beginning of the book:

Free will is an illusion. Our wills are not simply of our own making. Thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control. We do not have the freedom we think we have.

Either our wills are determined by prior causes and we are not responsible for them, or they are the product of chance and we are not responsible for them.

These statements reveal a whole swarm of background assumptions that I think are pretty flatly wrong. These include:

  • To have free will, our wills must be simply of our own making.
  • To have free will, we must be aware of, and exert conscious control over, the causes of our thoughts and intentions.
  • We can be responsible for our decisions only if those decisions are neither determined by prior causes, nor random.

I don’t think any of these background assumptions are true. It will be interesting to see, as I actually read the book, whether Harris makes them explicit and analyzes and defends them, or if he just takes them for granted and repeatedly points out that they are not satisfied by the facts.

So if I don’t think those kinds of things are true about free will, what do I think is true about free will?

I think free will is a useful concept that we humans have come up with to determine when to ascribe moral responsibility to a moral actor, and when not to. It is a concept that we learn mostly by ostension (i.e. by reading and hearing and talking about examples and stories) rather than by definition, and so (like “knowledge” and “virtue” and really most other interesting words) it tends to have fuzzy edges, and people can have honest disagreements about when it applies. But that doesn’t mean it never applies at all.

Roughly, we say that someone does something of their own free will if their doing it reflects primarily their own nature, their own desires, beliefs, strengths and weaknesses. If, that is, it provides useful evidence about what kind of person they are in morally-relevant ways. Which is just what you want as a criterion when you’re deciding which acts to include when doing moral judgement and assigning moral responsibility.

If I unknowingly step on a weak board and the board breaks, all that tells anyone about me is that I weigh more than the holding capacity of the board; my having broken the board tells you nothing about my morally-relevant properties, and I didn’t intentionally break it, didn’t break it “of my own free will”. I broke it by accident.

If on the other hand I saw the board making up part of the wall of an abandoned building, and kicked it so it broke just ’cause I like hearing things break and didn’t really care that it wasn’t officially mine to break, that was a freely-willed action, and it tells you something morally relevant about me. You might not want to include me in certain activities; you might even want society to fine me for wanton destruction.

All that seems pretty easy. So why does Harris (and why do the many other philosophers who’ve made similar arguments) think that free will is an illusion?

Well, they argue, science can (at least in principle) explain why I broke the board in both cases. In both cases there is a story about my birth and upbringing, about purely physical processes occurring in my brain and body, that show how the board ended up being broken. In the second case this involves my enjoyment of things breaking and my disregard for the concept of ownership, but those things can themselves be explained.

And if, the argument goes, science can explain why I have certain tendencies and values, then surely I can’t be held responsible for them.

To which the reply is “whyever not”? The assumption is that people can’t be held responsible for their preferences and desires and values because we can explain how they ended up with those preferences and desires and values, and/or because the person didn’t choose to have those preferences and desires and values (or at least didn’t choose to have the initial genes and early-life experiences that led to having them).

But why should we accept that assumption?

The usual argument by analogy is that if it turned out that I have a small brain tumor that is pressing on a certain part of my cerebrum that is causing me to enjoy breaking things and to ignore property rights, then surely I wouldn’t be held morally responsible. And having genes and life-experiences that cause me to do those things is just like having a small brain tumor.

This strikes me as unconvincing. Having certain genes and life-experiences really isn’t all that much like having a small brain tumor. When we exercise moral judgement, we are trying to determine (basically) what sort of person this person’s genes and life-experiences have brought into being.

We may or may not be willing to factor out small and identifiable and perhaps reparable things like brain tumors (I think this is an edge-case of free will where different people may have different intuitions), but we are not willing to factor out every single fact that makes this person different from every other person.

This form of the denial of free will hinges, I would say, on a false dichotomy, related to the dichotomy in the second Harris quote above: either, the dichotomy says, our actions are determined by causes (in which case those causes are responsible, we are not, and we don’t have free will), or our actions are random (in which case, again, we are not responsible for them, and we don’t have free will).

The actual fact of the matter, I believe, is that our actions are determined by various causes, and to the extent that those causes are part of what kind of person we are, we are responsible for the action and we exercise free will. To the extent that those causes are outside of the morally-relevant parts of us (including random die-rolls, although perhaps not the weighting of the dice), the actions are not morally relevant, and don’t count as free(ly) will(ed).

The fact that I didn’t cause myself to have the nature that I do isn’t relevant to the fact that when my actions are expressions of that nature, they are expressions of my free will, and I bear the responsibility for them.

Which seems pretty simple, really. :)

When I actually read the book, I will see if any of it applies to the sort of objection that I put forth here, and report back with any interesting findings, on that issue or any other.


Naturalism not actually defeated

So I have been bad about writing down things about this next Plantinga paper, largely because I have been making sure my healer looks good, and related endeavors. And working and stuff.

But anyway! The paper in question here is Plantinga’s “Naturalism Defeated” (1994), which presents an argument that I gather was first presented in his “Warrant and Proper Function” (1993), and which is apparently still one of his big talking points (2012).

The basic argument is pretty simple. If, it says, you believe both in evolution and in naturalism (which is the lack of supernatural things), then you are in trouble, because in the absence of supernatural intervention, evolution doesn’t reliably create creatures that are good at coming to correct conclusions, so you’re forced to conclude that your own conclusions aren’t reliable. This doesn’t mean that evolution-plus-naturalism isn’t true, of course, but it does suggest that one oughtn’t to believe it, in the same way that one oughtn’t to believe solipsism, radical skepticism, or that one is a brain in a vat, even though in some sense none of those can be refuted.

(I actually find it a little amusing here that the argument basically says that you shouldn’t believe evolution-plus-naturalism, because if it’s true your own reasoning processes don’t give you good reasons for belief, and you shouldn’t believe things without good reason; whereas as we saw last month he apparently thinks it’s fine to believe in god without good reason. But there ya go.)

Anyone who knows anything about evolution will have a hard time with that “in the absence of supernatural intervention, evolution doesn’t reliably create creatures that are good at coming to correct conclusions” part. As Plantinga puts it (rephrasing Patricia Churchland):

[T]he objective probability that our cognitive faculties are reliable, given naturalism and given that we have been cobbled together by the processes to which contemporary evolutionary theory calls our attention, is low.

On the face of it this seems silly; it seems immediately obvious that in the usual sort of Darwinian contest, having reliable cognitive faculties is better than not having them, so they will tend to evolve, subject to constraints about how expensive they are to build and operate, how well they are passed along to offspring, and so on.

But Plantinga attempts to make a case for it. He points out, with Churchland, that evolution selects only for behavior and not (directly) for belief. And Plantinga points out that for any example of a set of desires and true beliefs that lead to adaptive behavior, you can construct a set of different desires, and false beliefs, that lead to that same behavior.

Sure, not wanting to be eaten by tigers and believing (correctly) that they want to eat you will make you tiptoe around that sleeping tiger there, but so will wanting very much to be eaten by tigers, and believing (falsely) that they especially like to devour people who tiptoe! In his words:

Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect, because he thinks it unlikely that the tiger he sees will eat him. This will get his body parts in the right place so far as survival is concerned, without involving much by way of true belief. . . . . Or perhaps he thinks the tiger is a large, friendly, cuddly pussycat and wants to pet it; but he also believes that the best way to pet it is to run away from it. . . . or perhaps he thinks the tiger is a regularly recurring illusion, and, hoping to keep his weight down, has formed the resolution to run a mile at top speed whenever presented with such an illusion; or perhaps he thinks he is about to take part in a 1600 meter race, wants to win, and believes the appearance of the tiger is the starting signal; or perhaps . . . . Clearly there are any number of belief-cum-desire systems that equally fit a given bit of behavior.

And that’s quite true. It’s pretty much irrelevant, though, since we’re not talking about the evolution of specific beliefs, we’re talking about the evolution of cognitive systems, reliable or otherwise. It’s very hard to imagine a cognitive system which would reliably produce false but adaptive beliefs of this kind. Roughly, if Paul is crazy enough to think that the tiger is a cuddly pussycat that you pet by running away from, he probably thinks that alligators are treasure chests that you open by sticking your head in their mouths, or whatever, so he’s not going to be passing those crazy genes along very far.

So that example is not very useful, because it talks about single beliefs, not about cognitive systems or methods of coming to have beliefs.

Plantinga comes close to realizing this, although he doesn’t quite get it.

A problem with the argument as thus presented is this. It is easy to see, for just one of Paul’s actions, that there are many different belief-desire combinations that yield it; it is less easy to see how it could be that most of all of his beliefs could be false but nonetheless adaptive or fitness enhancing. Could Paul’s beliefs really be mainly false, but still lead to adaptive action? Yes indeed; perhaps the simplest way to see how is by thinking of systematic ways in which his beliefs could be false but still adaptive. Perhaps Paul is a sort of early Leibnizian and thinks everything is conscious (and suppose that is false)… Perhaps he is an animist and thinks everything is alive. Perhaps he thinks all the plants and animals in his vicinity are witches, and his ways of referring to them all involve definite descriptions entailing witchhood. But this would be entirely compatible with his belief’s being adaptive; so it is clear, I think, that there would be many ways in which Paul’s beliefs could be for the most part false, but adaptive nonetheless.

This is somewhat plausible, and also somewhat beside the point again. Note that this time all of the false beliefs he gives as examples are (apparently) ones where the false part of the belief doesn’t impact action at all. So now he is pointing out that you can have lots and lots of false beliefs without being maladaptive, as long as those beliefs don’t influence your behavior.

How often does a false belief really not influence behavior? If I am an animist and think that everything is alive, isn’t it likely that I will tend to act differently toward things, and (assuming they aren’t really alive) that that will make me less efficient and effective? This seems likely at least if the predicate “is alive” actually has any content for me. If I think all of the plants and animals around me are witches, won’t I spend time placating them, or hiding my fear of witches from them, or whatever? Or at the very least worrying that they might convert into their true form while I’m not looking and cast spells or whatever it is I think witches do? Seems likely, and maladaptive at least to the extent of wasting time that I could have put to good pro-survival uses.

But again what we should be thinking about are cognitive systems rather than just beliefs. If we think about adaptive cognitive systems, it seems pretty inescapable that they will, to first order, tend to produce true beliefs about things that matter to behavior. And given that they do that to first order, it seems likely that at least some of the methods that they use to pare away false beliefs that impact behavior will also serve to pare away false beliefs that don’t. Not as efficiently, perhaps, since doing that isn’t of direct evolutionary value, but it seems quite plausible as a by-product.

That is, we first learn to find truths about things that will kill us if we get them wrong, but the methods that we use, the cognitive habits and structures and systems that we develop and evolve to do that, are likely to be good at finding truths in general; or at least there seems to be no reason to think they won’t be. So once I learn experimental science in a context where it does impact my survival, I am likely to do an experiment to confirm my belief that all these plants and animals are witches, and much to my surprise I will find they are not! And so I will be rid of that false belief, even if (somehow) the belief itself wasn’t hurting me any.

So it seems that the believer in naturalism and evolution really does have good reason to have a certain amount of faith in his own conclusions, since there is a good story that can be told about how evolution leads to reasonably reliable cognitive systems without any divine intervention.

(It’s an interesting question, actually, whether things are any better for the believer in divinely-infused knowledge, or whatever Plantinga would propose as an alternative to naturalism plus evolution. Is there any reason to think that such a person can tell true divinely-infused beliefs from false divinely-infused beliefs? The argument that the Divine would infuse only true beliefs, because the Divine is good, is very vulnerable to the argument that an evil Divine would likely have infused exactly that same “the Divine is good” belief into one, along with whatever false beliefs tickled the evil-Divine fancy; so one doesn’t really have any rational argument for one account over the other. On this argument it seems that this kind of believer is at least as badly off as the naturalist would be in Plantinga’s original argument.)

It’s perhaps worth noting that the bulk of Plantinga’s “Naturalism Defeated” assumes that the believer in naturalism and evolution actually accepts most of the argument of the paper, but after granting that naturalism plus evolution does yield the conclusion that human cognitive faculties are unreliable, then tries to squirm out of it by claiming in various ways that it’s okay to believe in them anyway. This part of the paper (which makes up at least 75% of it) is full of quaint symbology and meticulously organized discussions of what it means for something to defeat something else, and what kinds of things can defeat what other kinds of things, and whether and how defeaters can themselves be defeated, and so on.

And fascinating as that might be if you’re into such things (and I used to be, but now amn’t), it’s all entirely beside the point, because in fact naturalism plus evolution actually gives us a pretty good reason to believe in the reliability of our cognitive faculties. At least as good a reason, I suspect, as whatever Plantinga’s alternative gives him for believing in his…


More on Alvin Plantinga’s “Theism, Atheism, and Rationality”

I actually wrote “Is it rational to believe random stuff for no good reason?“, or 80% of it anyway, twice; WordPress whimsically threw away the first version.

In the second version there, I left out one argument that I discussed the first time. Between saying that he doesn’t believe in God by choice, and saying that the atheological evidentialist objector may regard the theist-without-evidence as sick or malfunctioning, Plantinga argues that there is not a general obligation to have evidence for everything you believe, thus:

[T]here seems no reason to think that I have such an obligation. Clearly I am not under an obligation to have evidence for everything I believe; that would not be possible. But why, then, suppose that I have an obligation to accept belief in God only if I accept other propositions which serve as evidence for it?

Well, he’s not so much arguing as he is baldly asserting, with a “clearly” in there for emphasis. But we can think about what he might mean by it.

I can think of three things he might be saying here.

First, he might be pointing out that I’m not obliged to have evidence prepared in advance for everything that I believe. That would be an awful lot of stuff to carry around with one just in case, so to speak, either physically or cognitively. And that’s fine; it seems reasonable to state the requirement of rationality as: one should be able to produce good reasons for one’s beliefs if asked, not that one should be aware of those reasons at all times.

Second, he might be making a sort of foundational or preconditional argument, saying that we can’t have evidence for stuff that is so basic to thinking itself that it’s really a precondition for anything even counting as evidence. Things like the reality of the past (as opposed to the world having sprung into being ready-made ten seconds ago), not being a brain in a vat, etc. You can’t really have evidence for them (or against them) since ex hypothesi they are entirely consistent with all of our experiences.

And this is reasonable also. It means that we can’t demand that all beliefs be based on sufficient evidence, and I think the right rationalist response to that is to say that all beliefs should be based on good reasons, where reasons are a superset of evidence, and also include things being preconditions for thought or evidence itself. Someone may someday come up with a system where we can do rational discourse without presupposing the reality of the past or our own nonvatness, but until that happens we have good reason to believe them, just because otherwise we’re dead in the water.

Third, he might just mean “there’s no reason to require that anyone have any reason for believing anything”, but since interpreted one way that’s just asserting as obvious the whole conclusion that the paper is aiming at (which would be silly), and interpreted another way it’s just weird (of course rationality requires something of our cognitive behavior, or it has no content at all), I will assume that he doesn’t mean that.

So we can take this argument to be pointing out that rationality doesn’t require us to, at all times, have in mind evidence for everything that we believe, but rather that it requires that we can, if asked, produce good reasons for believing each thing that we believe, where reasons are broader than just evidence.

This doesn’t actually work very well as an argument for what he wants to argue, though, since he seems to want to say not just that it’s okay to believe in God even if you don’t have sufficient evidence on the tip of your tongue, but also that it’s okay to believe in God for no good reason at all. And that’s far too strong a proposition to demonstrate by mere assertion.

While I’m here saying more stuff about this Plantinga paper, I’d also like to note not only how he slips from talking about believing in God for reasons into talking about believing in God simpliciter, but also how he conflates theism in general with his particular Christian theism.

Here’s a passage that especially raised my eyebrows:

[The theist] will see the atheist as somehow the victim of sin in the world — his own sin or the sin of others. According to the book of Romans, unbelief is a result of sin; it originates in an effort to “suppress the truth in unrighteousness.”

Of course not all theists have any particular theory about “sin”, or about the causes of unbelief, or about Romans (bookish or otherwise); last I looked, most theists weren’t Christian at all. Again, despite the putative topic of the paper, Plantinga doesn’t seem to be interested in a general philosophical point about the rationality of believing in God without evidence; really he’s just launching a salvo in the defense of Christianity. I’d have more respect for him if he’d just do that, and not pretend to be doing something else…


Is it rational to believe random stuff for no good reason?

So yesterday I got into the car to go get something from somewhere, and I heard the tail-end of an interview with some philosopher-guy. All they said in the twenty seconds I heard was that God and football are the main topics at Notre Dame, although there are a few others ha ha, and then they said his name is Alvin Plantinga, and he has a new book.

Thanks to the magic of the Innertubes, in this case the NPR app for the iPad, I was able to listen to the entire piece (and you can too if that link still works; it’s just six minutes). It’s Alvin Plantinga, who’s an Emeritus Professor of Philosophy at Notre Dame, talking in very general terms about how science and religion are compatible, and how in fact it’s “naturalism” (i.e. the idea that there are no “supernatural entities”, by which he means God, because no one believes in Santa anymore) that’s the weird belief, and that science is a great thing, but just limited in scope, and there are lots of things that you can’t scientifically prove, like the reality of the past (that is, the entire universe could have been created five minutes ago in exactly the state that we all remember, and you wouldn’t be able to tell), and (I’m guessing) the divinity of Jesus.

I’m always interested in religious people who claim to have a rational (or rough equivalent) argument for their religiousness, so I poked around the web a bit, and found a site that has a bunch of his papers, and I read a couple of them.

There’s some interesting stuff here, but I think he tends to (rather Searle-like) skip very quickly past the obvious problems with his theories, and dive into the complex and arguable ones instead. Admittedly those are more fun :) but…

On to the arguments! The first paper I read was “Theism, Atheism, and Rationality”; it’s intended as a response to the claim that “[a] person who believed without evidence that there are an even number of ducks would be believing foolishly or irrationally; the same goes for the person who believes in God without evidence”, and that therefore “one who accepts belief in God but has no evidence for that belief is not, intellectually speaking, up to snuff.”

He examines this first as a claim that the “theist without evidence” is violating an ethical or cognitive duty or responsibility that applies to any member of some cognitive or rational community, and that by violating this duty e opens emself to criticism and disapprobation by (other) members of the community, that being the way that communities work.

Plantinga’s first response to this is to note that he doesn’t exactly choose to believe in God. Although there may be “some sort of regimen” that he could use to eventually change or extinguish that belief, it’s not like if offered a million dollars he could just change his belief in a moment.

And that’s fair; no one needs to claim that he’s being irrational on purpose, I don’t think.

So what’s the alternative? The next possibility he considers is that, rather than choosing to do something wrong, perhaps the theist without evidence is defective in some way, broken, or ill, or otherwise malfunctioning. That’s all very well, he says, but the theist might also say that the atheist is broken or ill or malfunctioning or full of sin or whatever, and doing the wrong thing for that reason. How do we decide which one is actually malfunctioning?

(It’s interesting to note at this point that while he started out talking about someone who thinks it’s irrational to believe in God without sufficient evidence versus someone who thinks that’s fine, he’s now talking in plainer terms of atheist versus theist; that will become key a bit later.)

It’s easy for the theist, he says, in that correct functioning means functioning as God intended, and God wants us to believe in him, so believing in him is correct functioning. (Which makes a lot of unwarranted assumptions about God and belief, but we’ll let that pass for now.) What, he asks, can the atheist offer instead?

Here Plantinga considers, and instantly dismisses, the right answer. The “atheological evidentialist objector” (love the phrase) “may be thinking of proper functioning as functioning in a way that helps us attain our ends”. And that’s basically right: for pretty much any plausible set of plans and goals and desires, believing stuff only for good reasons is much more conducive to attaining them than is believing stuff without good reasons (because it feels nice, or because you saw it in a dream, or whatever). If I believe in a traditional Christian God for no good reason, for instance, I will probably defer various pleasant things on the theory that I will be infinitely rewarded after death as a result; but if I don’t have good reasons to believe that, it’s quite likely false, and I will have deferred those pleasant things unnecessarily.

Plantinga, though, doesn’t look this deeply into the claim. He just notes that although the atheist may not want to believe in God, the theist probably does, so believing in God helps him attain his ends, and the atheist is just wishing that he wouldn’t. But that’s confusing “doing things that help me attain my ends in the long run” with “doing things that I want to do right now”. One of the benefits of rationality, in fact, is just that it can help us see what will work out best in the long term, even when it’s not the thing we most want to do right now.

Next Plantinga does consider a version of the argument I give above:

A second possibility: proper functioning and allied notions are to be explained in terms of aptness for promoting survival, either at an individual or species level.

And that works, too, and in fact it’s a special case of the end-attaining argument above to the extent that in general our ends include individual and species survival.

Plantinga waves this one away, also, saying “the atheological objector would then owe us an argument for the conclusion that belief in God is indeed less likely to contribute to our individual survival, or the survival of our species than is atheism or agnosticism”, and concludes that that would be a hard argument to make.

I find this baffling! Suddenly, rather than replying to the suggestion that we shouldn’t believe in things (including God) for no good reason, he’s defending theism per se. Surely all that the atheological evidentialist objector (and note that Plantinga has dropped that middle word this time) has to argue is, not that believing in God is less survival promoting than not believing in God, but that believing things for no good reason is less survival promoting than believing things only for good reasons.

And doesn’t that seem awfully plausible?

Plantinga seems to have just dodged here, and that’s disappointing. He seems to have entirely forgotten that he started out to respond to the claim that it’s irrational to believe in God without evidence, and reverted to just “well, you can’t prove that believing in God causes bad results!”, which is not the same thing at all.

The last paragraph of the paper is equally disappointing, raising a question that should be so obvious to any philosopher as to not even need asking. I’ll just quote the last three sentences:

The theist has an easy time explaining the notion of our cognitive equipment’s functioning properly: our cognitive equipment functions properly when it functions in the way God designed it to function. The atheist evidential objector, however, owes us an account of this notion. What does he mean when he complains that the theist without evidence displays a cognitive defect of some sort? How does he understand the notion of cognitive malfunction?

If only there were a vast existing literature, much of it not making reference to God at all, about what rationality and cognitive obligation and function and malfunction might mean! If only this vast literature were available in any decent university library, easily accessible by anyone in the Philosophy profession!

Oh, wait…

Next time: the next paper I read, in which Plantinga approaches some of these same things, and some different things, from a different angle, and raises some interesting questions, but still dodges the correct answer. His new book is apparently based on essentially the same argument.

Update: some stuff I forgot to mention. :)


New Year Update

It’s the New Year! 2012! Time to go out and buy a new Mayan calendar!

(Actually one has until December, when the current B’ak’tun ends, it seems. I wonder how Mayan Calendar vendors remember to stock up before the rush every 394 years or whatever it is.)

This year we made a mere 159 New Year dumplings (餃子, WordPress permitting), which is about the same number as in 2005, considerably more than in 2007, but significantly fewer than in recent years. We had somewhat more meat than dough (the kids are speaking of dumpling-meat patties), which traditionally means we will have enough food but not enough clothes in 2012, which is better than the main alternative.

Search o’ the Day: arrow in the meme. (You’re welcome!)

So I asked on “Facebook”: “How do you decide what to want?”.

Didn’t get much in the way of (substantive) answers (although I admit it’s fun that the two answers I did get were from a co-worker and a childhood friend who live on like different continents). It seems like a very important question. As questions go.

On some piece of paper somewhere, maybe not in digital form anywhere, I wrote something about some part of Colin Wilson’s “The Outsider” I think it was, about how soldiers returning from war could find the ordinary world meaningless or arbitrary; I think I wrote that this is likely because they had been in a context where they had to spend a lot of time just thinking about survival, and when that need then went away they were left with only less compelling reasons for action.

So (I’m writing very stream-of-consciousness here) we can think about ascending ol’ Maslow’s Hierarchy of Needs, where it’s more or less obvious what to do when we’re down at the Physiological level (find air, find food), and for that matter the Safety level (get further from the tigers, put up walls), and as we get higher up it becomes sort of less obvious, more arbitrary, less compelling. And if we make the mistake of thinking about what to want, rather than just wanting what’s expected, we may find nothing to speak of under our feet.

How do you decide what to want? Your ancestors all wanted to have children who would in turn have children, or at least they all did that, or they wouldn’t be your ancestors. The intellectual ancestors of your beliefs and attitudes all wanted to pass their beliefs and attitudes down to later generations, or at least they all did that, or they wouldn’t be the intellectual ancestors of your beliefs and attitudes.

So there’s a strong (what?) evolutionary tendency to want to have and raise children, and/or to pass one’s beliefs and attitudes down to later generations. But we don’t necessarily want to follow that evolutionary tendency. Or, we don’t have to want to follow that tendency; it’s not mandatory or required, it’s merely easy and obvious. (Easy and obvious to make that choice, that is; the actual doing of it may be hard and subtle.)

Somewhere when I was even younger :) I wrote down “the is-ought connection is choice”. And I think that’s true; choice, or the lack of choice, the slipping into the default choice. But how do you choose? How do I choose? How, especially, if one of the things that we’re choosing is the deepest basis for our own choice-making?

It seems like the choice must either be arbitrary, or (which may be the same thing) must be based on things that are so fundamental that we don’t get to choose about them however hard we might try (ingrained preferences that we can’t get beyond, or can’t want to get beyond, intrinsic tendencies that are too deep down even to represent as preferences).

So, hm. Am I an Existentialist now? :)

I think I have probably written all of this down before, and it’s not clear what there is to say about it next, or what to do beyond writing it down and mentally putting it in your pocket, for the next time it comes up. So now I’ve done that again.

Tamara de Lempicka. Just sayin’.


Vampire Willow

That’s a fun title! All sorts of possible meanings. But only one in the Buffy context, and this time we are in the Buffy context because I have been watching ancient Buffy the Vampire Slayer episodes again (or, as I keep saying by accident much to M’s hysterical amusement, “Bumpy”).

That’s Vampire Willow over to the right there (or somewhere nearby, or else you just have to imagine a picture, depending on how you’re Experiencing this Content). She appears in two Episodes: “The Wish” and “Doppelgängland”. She is, obviously, the vampire form of Willow, the shy quiet bookish young hacker girl that everyone with an inner geeky highschool gynophile has an enormous crush on.

Vampire Willow is the sultry sexy id of the Good Girl Willow. She has Willow’s cute mannerisms, without the insecurity and repression. She also looks on humans as primarily a food-source, and enjoys causing fear and suffering, but our inner geeky highschool gynophiles are willing to overlook that because she looks so good in leather.

Besides fanboying all over the character, I bring up the topic of Vampire Willow because the episodes, especially “Doppelgängland”, touch on the question we considered the other week: just what is the relationship, in the Buffyverse, between a person and the vampire that that person becomes after they are, um, made into a vampire? And what does this tell us about personal identity, moral responsibility, justice, and so on?

One delicious and relevant moment in “Doppelgängland”:

Willow: It’s horrible. That’s me as a vampire? I’m so evil, and skanky… and I think I’m kinda gay.
Buffy: Willow, just remember, a vampire’s personality has nothing to do with the person it was.
Angel: Well, actually…
[pauses as Willow and Buffy look at him]
Angel: That’s a good point.

Now Angel was about to say something along the lines of “actually, a vampire’s personality is shaped to a surprising extent by the personality of the person” (and of course the “kinda gay” thing is lovely foreshadowing since it will turn out that Willow is in fact kinda gay, for various values of “kinda” and “gay”).

This suggests some sort of subtle grey area between our previous wondering whether a vampire is (a) the same person, just with the soul / conscience / goodness removed, or (b) a completely different person (well, demon) who is just using the body of the (now dead or whatever) person.

Perhaps the vampire is a demon who is using the body, and also using the personality of the original person, only with the non-demonic bits left out, maybe because this flavor of demon doesn’t have a personality of its own. When looking at the vampire, then, we might draw conclusions about the person, not so much that they are culpable for the vampire’s acts or anything, but something along the lines of “this is what Willow / Angel / whoever would be like if they cast off the shackles of conscience“.

This still doesn’t seem to justify (for instance) Xander or Giles hating Angel-with-soul for what demon-Angel actually did, so we still have a puzzle there. But one can imagine that knowing things of the form “if he were to cast off the shackles of conscience, Angel-with-soul would be capable of X and Y and Z” might make one sort of uncomfortable to be around him, at a visceral level. (As might, I admit, just knowing that his body had in the past done these various very unpleasant things.)

Nor does it really make any sense of the gypsies’ (gad, is that the right spelling?) wanting to give Angel back his soul so he could suffer for what he (“he”) had done. The closest it really gets is “we will give him back his soul so that he can see the awful kinds of things that he might do if he didn’t have a soul!”. But that’s kinda stupid. If not any stupider than any other explanation we’ve been able to come up with for the gypsy thing.

Presumably (or at least this is worth thinking about) we don’t hold anyone morally responsible for things that they would do if they had no conscience, because when we judge someone morally we are judging (among other things) exactly their conscience. If we found out that someone would be a murderer if only they were a better shot, we might judge them harshly; but “he’d be a murderer if only he had no conscience” is not nearly the same sort of accusation.

Of course I’m awful at explaining human behavior in general. :) Don’t get me started on sexual jealousy, for instance (another common Bumpy theme); why is Willow hiding in the girls’ room crying (in “Consequences” I vaguely think it was) because she’s found out that Xander (whom she loves but is carefully not physically involved with because she is going all steady with Oz who she probably also loves) has had sex with ummm Faith, and why does she dislike Faith intensely as a result?

It’s not like she and Xander had pledged mutual fidelity and he’s broken the agreement; quite the opposite in fact! (That is, they’ve promised not to get physically involved with each other.) Is no one she loves allowed to have sex with anyone else? (That would probably condemn Xander to a life of celibacy, given the Oz thing.) Is she envious of Faith? Is she wishing that society wasn’t so annoyingly monogamous so she could snuggle with them both? (That one would almost make sense to me, come to think of it.) But it doesn’t seem to be that kind of crying.

(Maybe it’s more the “In this situation I’m supposed to cry for no rational reason, like in the hundreds of similar love stories you’ve seen throughout your life, and don’t question it or you’re some kind of sick pervert!” kind of crying, heh heh.)

So okay, that (and for that matter my relative incomprehension of the immediate “never speak to me again!” reaction of Cordelia and Oz finding Xander and Willow kissing that time in “Lover’s Walk”, when you’d think that if there was any actual, y’know, love involved it’d be more like “I understand, pumpkin, you both thought you were going to die, it’s a perfectly normal reaction”, or in Cordelia’s case “whatever, as long as you continue to turn me on with your wild chemistry”) — ehem anyway, that is my cluelessness about human nature for the night.

I will go back to interacting with nice rational computers now. :)


Friday, October 28, 2011

So Apple is becoming less evil, in the sense that back in June they somewhat eased the restrictions on what apps that let you buy subscriptions to things are allowed to do.

However, they are still evil, in that they still forbid apps from having links to places outside of the app (and therefore outside of Apple’s gigantic cut of the proceeds) where you can buy the stuff they play.

I understand why they want to do this in order to make money, and it is probably legal and even within their rights, but it is still evil. (There are various evil things that one has the right to do; consider the writing of vile racist tracts as an obvious example.) It is evil because it is restricting the programs that I can get on this iPad I own, not in order to make my experience with the device better (which is the reason we iPad owners put up with Apple being the app gatekeeper in the first place), but just in order to advantage Apple itself.

I don’t really mind using a device that reeks slightly of evil, and I hope and imagine and even expect that it will continue to get less evil over time. On the other hand, it does lead me to keep one eye on possible more-open alternatives.


I was noticing this morning on the drive to work that “Yodels” is “Sledoy” backwards, and that made me think about the lexeme† “doy”, and how amusing it is, and I then noticed I couldn’t think of any English words containing it. Sometime during my first coffee I came up with “doyen”, but that was it. Some random /usr/dict/words produced only “Doyle” (a proper name, doesn’t count), and the Ispell English Word Lists (found in various places on the Web) had “Doyle” and “doyen”, and rather unconvincingly added “doyley” (which is an obscure variant spelling of the already rather obscure “doily”). One or more online dictionaries offers “doyly” as another alternate spelling, but now we’re really off in the weeds.
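(For the curious, the word-list rummaging above is easy to reproduce; here’s a minimal Python sketch. The sample list is just my own tiny stand-in, since the location and contents of a real system word list vary from machine to machine.)

```python
# Find words containing the substring "doy", case-insensitively.
# A real run would read a system word list, e.g.:
#   words = open("/usr/share/dict/words").read().split()
# but here a tiny sample list keeps the sketch self-contained.
words = ["doyen", "Doyle", "doily", "doyley", "yodel", "Yodels", "window"]

doy_words = [w for w in words if "doy" in w.lower()]
print(doy_words)  # → ['doyen', 'Doyle', 'doyley']
```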

So, readers! Do you have any good “doy” words to hand? Or an explanation for why there aren’t more? Lots of untapped potential there!

“This is going to be a doyantic day!”

“Could you pass the pandoy?”

“Whoa, look at that saradoya!”

Maybe it’s just part of the “reserved for future expansion” part of the space…

† “lexeme” is almost certainly the wrong word. Readers are invited to suggest the right word.

More evil

Government could hide existence of records under FOIA rule proposal. Or, as I saw it linked originally, Justice Department Wants To Be Able To Lie In Response To Freedom Of Information Requests.

Which seems like a bad idea.

Watching every bit of The Daily Show you can find is of course a good idea. But a recent notable snippet: Climate Change is Real (but the media isn’t nearly as interested in the debunking of “ClimateGate” as they were in the original pseudo-scandal, somehow).

And of course Jon Stewart on Pat Robertson worrying that Republican rhetoric has become too extreme. Which rather boggles.

iTunes-U and Kant and all

I have discovered iTunes U, and it’s pretty hoopy (another reason I am willing to put up with a certain level of evil from Apple). All sortsa free stuff to learn!

You may recall that the other day I was listening to a (decidedly non-free) course on Consciousness and its implications, which was kinda cool, and although I’d gotten a little tired of Prof. Daniel Robinson for some of his odd little quirks of speech and for being wrong about stuff and like that, I was up for some more random audio philosophy, so I downloaded the first couple of lectures from a free iTunes U course on Kant’s Critique of Pure Reason and started listening.

And it was Prof. Daniel Robinson again!

Which is either quite a coincidence, or there’s not really all that much material out there, or Prof. Dan has done a lot of these things.

So far it is not bad to listen to, although as well as the same little verbal tics (random “you see?”s and “capito?”s and “of.. what? of experience!” and so forth) there is also the occasional burst of cellphone-static on the recording. And there is also Robinson (who outs himself as an Aristotelian, which does not bode well for my agreeing with him about very much) saying rather offhandedly that mathematics and the physical sciences are “riddled” with synthetic a priori truths, and giving as examples “there is no number so large that one cannot be added to it”, “every effect has an antecedent cause”, and “there’s no line so long that you can’t increase its length”.

And of course I disagree.

The synthetic a priori

Synthetic a priori statements are supposedly those that can be known without any reference to experience (so a priori, rather than a posteriori or “empirical”), but which are not true just because of the meanings of the words (so synthetic).

Myself, I rather doubt that there are any of these (for any reasonable construal of “because of the meanings of the words”), and I certainly don’t think that any of Robinson’s examples count. Most of the time when someone claims that something is synthetic a priori, it actually means that they just aren’t imaginative enough to come up with a possible world in which it isn’t true (but there are such possible worlds, and therefore it’s not a priori at all; you have to check the actual world to see if it’s true here or not). Or, alternately, the statement is true but follows so directly from the meanings of the words that it’s hard to justify calling it synthetic if “synthetic” is to have any actual meaning.

“There is no number so large that one cannot be added to it” is clearly not true if we’re working in the domain of, say, positive integers less than 1000. Oh, but that isn’t what we mean by “number”! Well, what do you mean? The answer to that will be a set that has no upper bound, which makes “no number so large that one cannot be added to it” true essentially by the definition of “number”. So that one’s analytic a priori.
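(The bounded-domain point can even be made mechanically concrete; here’s a toy Python illustration, where the particular domain and check are of course just my own illustrative choices:)

```python
# In the bounded domain of positive integers less than 1000,
# "there is no number so large that one cannot be added to it" fails:
domain = set(range(1, 1000))  # the positive integers 1, 2, ..., 999

# Numbers in the domain whose successor falls outside the domain:
too_large = [n for n in domain if n + 1 not in domain]
print(too_large)  # → [999]
```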

“There’s no line so long that you can’t increase its length” is only true in some spaces. It’s not true, for instance, on the surface of a sphere. So this is either synthetic but empirical (i.e. to know it’s true we have to check to make sure we aren’t in a space that’s like the surface of a sphere), or if we add “on a flat plane” to the end it’s again analytic a priori (analytic because it follows directly from the definition of “flat plane”).

The one in the middle, “every effect has an antecedent cause”, is awfully vague, but again can be read in at least two ways, neither of which turns out to be synthetic a priori. Either it’s saying that, in the actual world, events happen in temporally-ordered causal chains (which is something one would definitely have to check the actual world for, since there are scads of possible worlds where things just sort of happen at random and uncaused), or it’s saying that there’s a subset of events, called “effects”, which are those that have “antecedent causes”, and that all of those have antecedent causes. And that is obviously analytic.

Readers are invited to submit more convincing examples of the synthetic a priori. With or without accompanying “doy” words… :)

Update: I meant to close with this picture!


Tuesday, October 25, 2011

Personal Identity and the Continuity of Consciousness in Buffy the Vampire Slayer and that one Babylon 5 Episode

We use these two fragments of popular culture (decade-old popular culture, at that) to explore and illuminate our intuitions, or lack thereof, about personal identity, consciousness, and moral responsibility (that last bit didn’t fit into the title, or really it would have fit, but then the title would have been really really long).

The theory of vampirism in Buffy the Vampire Slayer, as laid out by the title character in the Season Two episode “Lie to Me”, is that when a vampire does whatever it is to a human to make the human into a vampire, the human dies, and a demon comes to inhabit and animate the human’s (former) body.

This simple theory is complicated by the vampire Angel, or Angelus. After he does terrible things to a stereotypical tribe of gypsies, the gypsies cast a curse on him in revenge. The curse, intended to make the vampire suffer, “restores his soul”.

There are (at least) two ways we could imagine this soul-restoration occurring: either the original human soul (including subjectivity and memory) is brought back from some afterlife to inhabit its former body (although, as textual evidence from other episodes strongly suggests, with the demon also still in residence in a usually-subservient position), or some generic soul (consisting really of just a “conscience” or a sense of right and wrong (whose?)) is injected into the body along with the demon.

The first of these possibilities is supported by most of the evidence; on first having his soul restored, Angel is confused, not knowing where he is or why. This is consistent with the consciousness having just returned from some afterlife, and not having been caught up with what the demon in its (former) body has been up to. But then the memories do arrive, and Angel is horrified to realize all of the things that he (the demon? his body?) has done, and begins to suffer as the gypsies intended him to.

Note the bizarre moral theory here, however. In this reading, in order to punish the demon for his evil acts, the gypsies have summoned up someone else entirely who will feel bad about these acts, and arranged for that someone else to suffer. (The demon, presumably, is not really suffering, except to the extent of being annoyed by this human soul having taken over the body; an annoyance that is attested to in later episodes when the human soul goes away again, and we hear from the demon; although the body of utterances there is not completely unambiguous.)

But why would the gypsies consider that to be justice, and why would anyone sane go along with that consideration? The demon has done terrible things, so we will arrange for someone else, who in the past inhabited this same body, to suffer. How could that be just?

The other possible reading is that the original Angel is still dead and gone, and some more generic feelings of guilt have been imposed upon the demon inhabiting the body. This seems a little more plausible as a kind of justice, but the theory of moral feelings that it requires is odd; in this view moral feelings must be something independent of one’s nature, of who one actually is, so that they can be sort of grafted on after the fact to any personality and consciousness at all, including that of the most depraved demon. And while Angel-with-no-soul is the vilest creature imaginable, Angel-with-soul is such a great guy that Buffy falls in love with him, the viewer is clearly supposed to identify strongly with him as a Good Guy, and so on. (Also, later on when the soul is removed again, the now-evil Angel says of the human one “your boyfriend is dead”, which is more evidence for the first theory, although it could possibly be a figurative way of saying “because I no longer have a conscience, I am evil and nasty again, so that goodness you saw in me before is dead”; but that seems a bit of a stretch perhaps.)

So neither of these theories is really satisfying, and this suggests that our ideas about continuity of consciousness, personal identity, and moral responsibility aren’t sufficiently well-formed to handle these counterfactual edge-cases in any consistent way. If a demon takes over my body, surely I shouldn’t be responsible for its depraved acts, and made to suffer in the name of justice. On the other hand, surely just adding a generic conscience to a vile monster would not convert that personality into something virtuous and admirable.

Then there’s that one Babylon 5 episode. Which one was it, let’s see… Ah, yes: Divided Loyalties (also in the second season, albeit of a different series).

The setup at one point here is that it’s known that someone on the station innocently and unknowingly has a bomb hidden in their brain, and in order to find who it is they have people line up to sit in a booth where, if they are the one with the bomb, it will go off and kill them, while not hurting anyone else. So everyone lines up more or less calmly, except for a bit of grousing about interfering with personal privacy, to sit in the booth and be examined and possibly die.

Ha ha, no, of course that isn’t actually it! That would be ridiculous. It’s actually that someone has an evil Psi Corps artificial personality implant, which will activate when a telepath thinks a certain code-word into their brain, so everyone lines up, with only a bit of grousing about not liking telepaths, to have the code-word thought at them. Which, if they are the one, will awaken the artificial personality implant. Effectively killing their real personality.

And that is actually it! And the people line up anyway! Is that bizarre, or what? It’s again hard to imagine the theory of personal identity and continuity of consciousness here, that would either cause the people to be willing to be tested with only a little grumbling, or that would cause the supposedly virtuous station staff to attack the problem that way (rather than, as they presumably would have done in the brain-bomb case, looking for a way to find and defuse the bomb without killing the person that happened to be carrying it).

It seems as though the writers, and the commentators who have written about this episode without noting the bizarreness of the whole test thing, are working from some theory where people only care that their bodies continue to exist, with some consciousness in them, even if it’s not the one that’s in charge right now. Echoes here of the gypsies, who only care that some consciousness in Angel’s body suffers, even if it’s not the one that was in charge when the actual atrocities were committed.

We might speculate, for instance, that we are so used to seeing a single body always associated with a single personality and a single consciousness, that we don’t really think very hard, at least in these examples, about what happens to identity, moral culpability, or personal survival when that is no longer the case, and we don’t always get a sensible answer when although there’s still someone in there, it’s now someone else.

Readers are warmly invited to submit other ways to read either or both of these fragments of popular culture, in ways that simplify or otherwise cast different lights on the issues.

Other things I might weblogify about in future issues: how Apple is becoming less evil, and iTunes-U and this thing I’m listening to. Also bread, and other stuff!


Friday, October 7, 2011

I’m taking the day off at random; it is very nice! We don’t have Monday off, so it’s a mere three-day weekend, but still.

Hos, Boobies, an’ Orgasms

So we signed up for HBO for a month, mostly so we could watch this George Harrison Special that they had. The program on HBOHD right before it was “Making it in America”, which seems to be a satirical comedy about naked people having uncomfortable-looking sex. Then the program right after it was “Cathouse” something, a hard-hitting documentary about what it’s like to be an escort, including numerous scenes of nudity and/or sex. And then after that was “Katie Someone on Sex Toys”, which featured sex, as one might expect from the name, and also a blonde ditzy stark-naked narrator with large artificial breasts (as one might also expect from the name I suppose, since Katie Someone is apparently a relatively well-known porn star).

Surprisingly the George Harrison Special didn’t include any gratuitous nudity or sex, at least not that I noticed. But the other programs suggest a certain theme, or one might even say obsession.


The Aren’t Like Us, You Know

So (maybe all subsections should start with “So”) I finished listening to that Learning Company course on consciousness, and it was indeed pretty basic. It was also disappointingly simplistic on the whole “what kinds of things have consciousness, and how could we possibly know?” question.

The professor says things like “to be conscious is to be the subject of sensation” as if it actually told us anything very interesting. He also talks about how impossible it would be for fish to become aware of, or know anything about, water, as though that was anything more than the flimsiest of metaphors (flimsy because it falls apart as soon as one asks whether humans could ever know anything about air).

And in general he has very definite, but apparently completely unsupported, opinions about what is or might be conscious. He is always talking about what he is “inclined” or “strongly inclined” to “say”; but I find myself rather definitely uninterested in what he is inclined to say: I want to know what is true.

He is for instance inclined to say that there is something that it is like to be an amoeba, but that there isn’t anything that it is like to be a “machine” (where “machine” is not further defined). My suspicion is that that inclination comes from a sort of unthinking “carbon compounds good, silicon compounds bad” meme with nothing very interesting behind it, or at least he gives us no reason to think otherwise.

The question of what things might be conscious is rather a different question from the question of how we might come to know that a thing is conscious. We come to know that other humans are conscious because we observe a strong correlation between our own actions and our own consciousness, and probably-justifiably conclude that people that take similar actions have similar consciousnesses. (One thing the professor gets right is pointing out that the claim that we each have only one datapoint, ourselves, on the subject is wrong; actually we each have a huge number of datapoints: all of our conscious actions.)

But it’s important to distinguish between things that we can come to know are conscious (other humans, probably other relatively high-level animals, possibly unicellular microorganisms although I would take some convincing), things that we can come to know aren’t conscious (not sure what if anything is in that set, although the professor seems to think that “machines” are in it), and things that we can’t come to know are conscious (or at least can’t come to know it in the same way), but still might be conscious for all we know.

Even if I bought the argument that an amoeba’s actions are more like mine than the actions of any machine could ever be (which I think is in fact utterly false), that would still not be any reason to think that no machine could ever be conscious. It would just be reason to think that I could not come to know that any machine was conscious by way of the behaves-like-me argument.

Conflating the truth of a thing with one’s ability to find out that truth is the height of arrogance, not to mention silly.

“Are there any apples in the box?”

“No.”

“How do you know?”

“The box is closed, and I can’t see inside. Must be empty.”

I don’t know if there’s something that it’s like to be an amoeba, or a tree, or a jackhammer, or Deep Blue. I think the whole question is deep and mysterious and fascinating. Pretending to answer it by just examining one’s pretheoretic inclinations to say things is completely unsatisfactory, and I’m disappointed in this professor for doing not much more than that.

One possible reaction to all this is to say oh, well, phht, it may be philosophically fun to speculate that maybe there’s something it’s like to be a tree, but really there isn’t, and it has no practical interest. I think it was Nagel or someone who pointed out that some extremely alien Martians might examine us, and their more practical citizens might say the same thing about us; and they would be factually wrong, since in fact there is something that it is like to be one of us.

And I’d like to not be factually wrong, when feasible.

Blue What?

So I am really liking this wireless Bluetooth headset thing! It’s ummm this one. I bought it to work with the iPad, which it does very nicely, and it turns out that the Windows 7 laptop here also has Bluetooth, and also works nicely with it.

My only complaint is that when anything of interest happens (the signal momentarily dropping or reconnecting, one accidentally trying to turn the headset volume higher or lower than it goes, etc) it makes a LOUD BEEPING NOISE in one’s ear, which seems uncalled-for. Also switching it from the iPad to the laptop and back requires a bit more messing-around than I’d like, but maybe I just haven’t found the right buttons to push yet.

So anyway I can now listen to sounds being produced by either the iPad or the laptop without having to untangle wires, keep my head carefully within N inches of the device, or worry about having the things rudely ripped from my ears by passing cats or the corners of things.

It is very modern and shiny!

There was some other witty section title I was going to use

But I have forgotten it. :) I have also been playing Glitch, which is fun and silly, and some WoW, and always Second Life. And watching Buffy the Vampire Slayer episodes on Netflix on the iPad! Which is also fun and silly. :)


Non-algorithmic pixie dust

So I’m still listening to that Learning Company Course on Consciousness. It’s interesting, keeps my mind awake, although so far there’s nothing that’ll be new to anyone who has an undergraduate degree in the area (raises hand), or who’s just done a lot of reading.

In the lecture on Consciousness and Physics, I was somewhat disappointed that the professor repeated Roger Penrose’s incompleteness-theorem-based argument that thinking things can’t be (just) carrying out algorithms (and therefore that computers, say, can’t think), with a straight face, and without pointing out the difficulties. I think that argument is pretty much entirely wrong, and not even in very subtle ways, so I will rail against it here (I originally typed that I would “inveigle” against it here, and was proud of myself for spelling it right, even though it’s entirely the wrong word).

Gödel’s Incompleteness Theorem says (quite surprisingly and counterintuitively, which is why it’s famous) that any formal system (that is, any set of symbols and rules about how you can manipulate them to derive some from others) that is at least powerful enough to represent arithmetic (like, say, the digits and plus and equal signs and stuff), and that is consistent (that is, there’s no statement S where you can both prove that S is true, and prove that it’s false, in the system), is incomplete, where incomplete means that there is at least one statement that is true but that cannot be proven in the system. If you want to prove that true statement, you have to add it as an axiom. (And the resulting system is now unable to prove some other true statement, so it’s still incomplete.)

(This immediately seems counterintuitive, because after all what could “true in a formal system” mean, apart from “provable in that system”? It turns out those really are different, and realizing that was one of the more interesting things about learning about the Incompleteness Theorem in the first place.)
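(For the mathematically inclined, and hedging that this is my gloss rather than anything from the lecture, the First Incompleteness Theorem is usually put something like this:)

```latex
% For any consistent, effectively axiomatized theory T that interprets
% enough arithmetic, there is a sentence G_T (the "Goedel sentence") with
\[
T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T,
\]
% and yet G_T is true in the standard model of arithmetic:
\[
\mathbb{N} \models G_T .
\]
```

(The gap between $\models$, truth in the standard model, and $\vdash$, provability in $T$, is exactly the true-versus-provable distinction above.)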

Now Roger Penrose, who is really good at mathematics and physics and Minkowski spaces and geometric tilings and all, but as far as I can tell not all that good at philosophy, famously argues that the Incompleteness Theorem shows that we humans cannot be merely executing algorithms when we think, because things that execute algorithms can be modeled as formal systems, and (the argument goes) we humans can do arithmetic, and we are consistent, and yet there is no statement that is true but that we humans can’t prove.

Or, as Professor Robinson puts it in the Course Guidebook here, “[w]e are able to reflect on our own problem-solving maneuvers without importing into the system some set of axioms, otherwise unknown to us, in order for us to make sense of what we are thinking.”

Now this is awfully sloppy. The Incompleteness Theorem isn’t some broad fuzzy statement that a formal system can’t “reflect upon” its own “problem-solving maneuvers” without adding extra axioms; it’s that it can’t prove every single true statement without doing so.

Kind of an important difference.

To pummel this completely into the ground, for this argument to hold any water, three things would have to be true:

Human thought is at least powerful enough to do arithmetic.

Human thought is consistent.

Human thought is able to prove every true statement implied by its beliefs.

Not only are these not all true, I think it’s pretty clear that none of them are true. In detail:

Human thought is not powerful enough to do arithmetic. Now how can I say that? We do arithmetic all the time! But for the purposes of the Incompleteness Theorem, we have to not just be able to do some arithmetic, we have to be able to do it all. I can add pretty big numbers together, I can (with some outside assistance) prove some moderately complex theorems. But that’s a (literally) vanishingly small fraction of what the Incompleteness Theorem requires. I can’t add the vast majority of trillion-digit numbers, and I would bet money that Penrose can’t either. The usual proof of the Incompleteness Theorem relies on utterly enormous proofs, proofs which if written down would probably require a book larger than the Earth. I can’t generate, or verify, or even begin to understand, a proof that big (and neither can Penrose, or any other human).
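Just to make the scale point concrete (this little example is mine, not Penrose’s): addition at a scale no human could carry out reliably is trivial for a machine running the ordinary schoolbook algorithm. Python’s built-in arbitrary-precision integers, for instance, will happily handle thousand-digit numbers:

```python
# Two 1008-digit numbers, built by repeating a 9-digit block 112 times.
a = int("123456789" * 112)
b = int("987654321" * 112)

# The machine applies the ordinary carrying algorithm without complaint.
total = a + b

# Each 9-digit block sums to 1111111110, so the carries ripple all the
# way up and the result is one digit longer than its inputs.
print(len(str(a)))      # → 1008
print(len(str(total)))  # → 1009
```

A person asked to perform this sum by hand would make errors or give up; the point is just that “can do arithmetic”, in the Theorem’s sense, means all of it, unboundedly, which neither I nor Penrose can.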

One response to this is that while humans can’t do all arithmetic, we can do some bounded subset of it; some arithmetic that doesn’t extend to numbers bigger than N for some finite N, for instance. And that’s possible, but the Incompleteness Theorem as far as I know has never been proven for bounded arithmetics; that is, it tells us nothing whatever about systems that are powerful enough to do bounded arithmetic. So if that’s what we are, the theorem tells us nothing about ourselves.

Another response would be to say that while we can’t actually do all of arithmetic (due to not living long enough to add random trillion-digit numbers, to losing interest long before understanding the Earth-sized theorem, etc), we can do it all in principle. This has the nice feature of sounding good; the problem with it is that it doesn’t actually mean anything without further elaboration, and I don’t know of any further elaboration that saves the argument. (For instance, you might end up proving that something that really isn’t anything much like a human at all, and can’t in fact exist, isn’t just executing an algorithm, but how interesting is that?)

Secondly, human thought is not consistent. This will be blindingly obvious to some people :) but apparently not to all professional philosophers. I am quite certain that I have at least one false belief; and just to avoid amusing paradoxes I will add that I am quite certain that I have at least one false belief other than this one. I am quite certain that I believe at least one pair of things that contradict each other. And I am quite certain that you, and Roger Penrose, do also.

Anyone who thinks that all of their beliefs are consistent, and that none contradicts any other, is someone that I wouldn’t want in a position of power or responsibility; such persons are dangerous. Human thought, and belief, are very very fallible, and it’s important not to forget that.

And last, humans cannot prove all of the true statements implied by their beliefs. Another shocker, eh? In fact except possibly for some really heads-down mathematical theorists with no lives, I’d say that humans cannot prove the vast majority of the true statements implied by their beliefs, or for that matter the vast majority of the beliefs themselves. Really, how many of the things that you believe could you prove, in the mathematical sense of “derive from first principles via the basic rules of mathematics and logic”? That’s not really the way that human belief works at all.

So there we are. The Incompleteness Theorem fails to prove that human thought is “non-algorithmic” because humans are not the kind of thing that the Theorem talks about at all (consistent and arithmetic-capable systems), and for that matter, even if we were, we can’t do what the Theorem says that that kind of thing can’t do.

Now I could give Penrose the benefit of the doubt here, and suspect that I’m just missing some important part of his argument, and my criticisms miss the point. But that’s what I used to think about John Searle, until he came right out and said that the Problem of Other Minds isn’t really a problem because his dog has adorable floppy ears and deep brown eyes.

So until I get a good argument to the contrary (and candidate good arguments are eagerly solicited from my loyal readers), I’m going to assume that Penrose is just wrong here, perhaps for interesting psychological reasons, and continue both to wonder why people bother talking about his argument as though it might be true, and to be generally mystified by consciousness, and not even know anything to speak of about what sorts of things might, or might not, have it.


Out and About

So I’m reminded by this philosophy course that I’m listening to, that (some) philosophers have this funny notion of “aboutness”; they say things like “mental froobahs are invariably about things, which is to say they have content, whereas physical froobahs are not, and do not.”

I’ve never really understood why philosophers say this kind of thing, because it’s so obviously wrong.

The argument is mostly by induction from a handful of examples. There is no belief without something believed, no fear without something feared, no love without something loved, no boredom without — oops. Well, they don’t usually include that example.

(Oh, you see, boredom isn’t really a mental state, or it is a mental state, and what it’s about is the person who is bored, you see? At least that’s the strongest answer to that example I can think of, and it’s not strong at all.)

And on the other hand, all sorts of non-mental things are about stuff, and have content. This book, for instance, is about pirates and ninjas and zombies, and even if the book had been written long ago and the author and everyone who had ever read it had expired and/or forgotten about it (so it would be really hard to say that there were any appropriately mental things still floating about) it would still be about pirates and ninjas and zombies, as anyone fluent in the language could determine by reading it.

(The suggestion that it isn’t actually about pirates and ninjas and zombies anymore until someone reads it and acquires mental froobahs that have aboutness, strikes me as utterly unconvincing.)

The same only even moreso for content; all sorts of non-mental things have contents. Mailboxes, for instance. And grocery lists. Also cats.

An obvious response to this is that it’s not those kinds of “aboutness” and “content” that is meant when a philosopher says that all and only mental froobahs have aboutness and content; it’s a special mental kind of aboutness and content, that we just happen to use the same words for because we like to be confusing. (Speaking of confusing, note that I have not yet referred to either “intentional objects” or “intensional objects” in this weblog entry, although if I wanted to be even more confusing I certainly could have!)

But if “aboutness” and “content” in “all and only mental froobahs have aboutness and content” don’t have their normal meanings, but only some special Philosophy of Mind meanings, then it’s not really legitimate to utter the sentence as though it meant something, as though it were adding to our stock of knowledge by telling us about a newly-discovered relationship between familiar things that have familiar names. It would be more honest to say something like “all and only mental froobahs have glorpiness and fnoo”, and then when our audience asks what glorpiness and fnoo are we can say that they are a couple of words that we just made up for something or other that only mental froobahs have (after which we’re probably unlikely to be invited back).

Why do philosophers want to say that all and only mental froobahs have aboutness and content, anyway? I think it’s because (as came up during the latest lecture of the course linked above), they have physics envy, and want mental froobahs to have some interesting properties. Physical things have mass, volume, electrical charge, angular momentum, position, and all sorts o’ stuff that you can put into an Excel spreadsheet, whereas mental things (i.e. mental froobahs) don’t really have a lot of those. The experience of a red apple, for instance, is generally regarded (depending) as a mental froobah, but it isn’t a red one. But it’s about that red apple, boy howdy! (If there actually is a red apple, that is; if there isn’t, well, we’ll touch on that below.)

This doesn’t really explain why they want aboutness and content to be properties of all mental froobahs, or of only mental froobahs. Which is where they get into trouble.

Aside from intransitive mental froobahs like boredom, there is also a problem of what a mental froobah might be about when the thing that it would normally be about is missing. For instance, if I’m afraid of the tiger in the next room, that fear froobah would normally be about the tiger in the next room, but if in fact there is no tiger in the next room, we have a bit of an oops.

In some cases, it could be that I’m actually afraid of the panther in the next room, and am simply misinformed about the type of animal that it is. Or I’m afraid of the tiger in this room, and am just a bit behind the times in my theory of its location.

But in other cases there is no such obvious out. If I have been convinced that there is a tiger in the next room by an elaborate conspiracy of misleading statements and recorded sounds, and in fact if I were aware of the actual circumstances I would not be afraid at all, then there doesn’t seem to be anything to hand that I’m afraid of. This does not strike me as a big problem pre-theoretically (i.e. the average person, once the deception and my fear are described, would not think there was some remaining mystery about what my fear was about), but if we’re committed to the view that all mental froobahs are always about something, we are likely to end up adding weird things to our ontology (i.e. our list of things that there are), like the (non-existent) tiger that I was afraid of, and intentional objects (and perhaps, if we use the right sort of logic to think about them, even intensional ones) in general, and/or imaginary objects, or who knows what-all.

Which leads to all sorts of weird situations, it strikes me, like having in one’s ontology a bunch of round cubes which are neither round nor cubical, and which don’t exist.

Now this isn’t necessarily a bad thing, and I can see doing it sort of for fun, but I wouldn’t claim to have been forced into doing it because the only other alternative would be to admit that not all mental froobahs have aboutness.

In general, and this is a subject that I’ve written upon extensively elsewhere (on some old bits of paper, in particular; not as far as I can tell ever on the Interwebs), I think the whole Aboutness and Intentional Objects thing is yet another result of overambitious reification (for which there should shortly be one hit, when enclosed in quotes, in the obvious Google search).

Nature doesn’t know from objects (surely I have made this point somewhere on the web before). Leaving consciousness aside for the moment (since it is Different), all that there really is is (something like) a big quantum-mechanical wave function, and there’s no reason to think that it (or some larger system containing it and some other stuff) can be neatly divided up into distinct or disjoint or otherwise conveniently separable or enumerable or nameable objects.

There are other possible universes in which objects really are baked into the physics, but this doesn’t appear to be one of them, at any scale we’ve so far come anywhere near close to penetrating.

It’s often convenient to talk about objects as if they were given by nature and baked into the universe, but that doesn’t mean that they are, and we shouldn’t be surprised if sometimes that talk breaks down. Similarly it’s often convenient to talk about objects (in the subject and object sense) in conceptualizations of the universe (as in, say, intentional objects and mental froobahs), but again we shouldn’t fall into the easy mistake of thinking that they are really out there, and that the universe is obliged to respect our cuttings-up of it, and therefore that it must always be possible to make that language work.

I really must find that old handwritten essay on “Overambitious Reification and the Applicability of Concepts” that I scrawled in an old Princeton exam book the other week / decade / millennium, and type it in, so that it comes to actually exist. But right now, I have to go off to Back To School night. :)