Posts tagged ‘philosophy’

2023/01/20

County Jury Duty

Well, that’s over! For another six years (for state / county / town) or four years (for Federal). This is probably going to be chatty and relatively uninteresting.

Top tip: park in the parking lot under the library; it’s very convenient to the courthouse (although you still have to walk outside for a bit, and it was windy and rainy yesterday).

I had to report originally on Friday (the 13th!) because Monday was MLK day. On Friday 60-70 of us sat around in a big auditoriumish jury room for a while, with WiFi and allowed to use our cellphones and everything. Then they called attendance and talked about random things like the $40/day stipend if our employer doesn’t pay us or we’re self-employed (where did that tiny amount of money come from, one wonders) and where to park and so on. Then we were basically allowed to decide whether to come back on Tuesday or Wednesday (although I imagine if you were far down the perhaps-random list and most people had said one, you had to take the other).

A cute isometric pixel-art image of a bunch of people waiting around in a large room. Note this does not accurately reflect the County Courthouse except in spirit. Image by me using Midjourney OF COURSE.

I elected to come back on Wednesday for no particular reason. We were originally supposed to arrive on Wednesday at 9:30am, but over the weekend they called and said to arrive at 11am instead. Due to an inconvenient highway ramp closure and a detour through an area of many traffic lights, I got there at 11:10 or so and felt guilty, but hahaha it didn’t matter.

In the big Jury Room again, the 30+ of us waited around for a long time, then were led upstairs to wait around in the hallway outside the courtroom, and then after waiting some more were ushered into the courtroom to sit in the Audience section, and introduced to the judge and some officers, and then dismissed until 2pm for lunch (seriously!).

Some time after 2pm they let us back into the courtroom and talked to us for awhile about how it was a case involving this and that crime, and might take up to a month to try, and the judge is busy doing other things on Mondays and Thursday mornings so it would be only 3.5 days / week. Then they called 18 names, and those people moved from the Audience section to the Jury Box section. They started asking them the Judge Questions (where do you live, how long have you lived there, what do you do, what do your spouse and possible children do, do you have any family members who are criminal lawyers, etc, etc), and we got through a relatively small number of people and it was 4:30pm and time to go home.

I had a bit of a hard time sleeping, thinking about what the right answers to The Questions would be (how many times have I been on a jury in all those years? did we deliberate? do I know anyone in Law Enforcement? does the NSA count? should I slip in a reference to Jury Nullification to avoid being on the jury, or the opposite?) and like that.

Since the judge is busy on Thursday mornings, we appeared back at the courtroom at 2pm on Thursday, and waited around for quite awhile in the hallway, then went in and they got through questioning the rest of the 18 people in the Jury Box (after the judge asked the Judge Questions, the People and the Defense asked some questions also, although it was mostly discussions of how police officers sometimes but not always lie under oath, and how DNA evidence is sometimes right but not always, and how it’s important to be impartial and unbiased and so on, disguised as question asking).

Then they swore in like 6 of those 18 people, told the rest of the 18 that they were done with Jury Duty, and told the rest of us in the Audience section to come back at 9:30am on Friday (today!).

At 9:30 nothing happened for quite awhile in the hallway outside the auditorium, then for no obvious reason they started to call us into the courtroom one person at a time by name. There got to be fewer and fewer people, and then finally it was just me, which was unusual, and then they called my name and I went in. The Jury Box was now entirely full of people, so I sat in the Audience Section (the only person in the Audience Section!).

Then I sat there while the judge asked the same ol’ Judge Questions to every one of the dozen+ people (I know, I don’t have the numbers quite consistent) ahead of me, and then finally, as the last person to get them, I got them. And the Judge went through them pretty quickly, perhaps because he’d said earlier that he wanted to finish with this stage by lunchtime, and I had no chance to be indecisive about the issue of following his legal instructions exactly and only being a Trier of Fact, or anything else along those lines.

Then we had another couple of lectures disguised as questions, plus some questions, from The People and the The Defense. I’d mentioned the cat as someone who lived with me (got a laugh from that, but the Whole Truth, right?), and The People asked me the cat’s name and nature, and when I said it was Mia and she was hostile to everyone, The People thanked me for not bringing her with me (haha, lighten the mood, what?). And asked about my impartiality.

Now we’d had a bunch of people from extensive cop families say boldly that they couldn’t promise not to be biased against the defendant (and when The Defense I think it was asked if anyone would assume from The Defendant’s name on the indictment that He Must Have Done Something a couple people even raised their hands (whaaaaat)), and I perhaps as a result and perhaps foolishly said that while my sympathies would generally be with a defendant, I would be able to put that aside and be unbiased and fair and all.

So The People asked me if I could promise “100%” that I would not be affected by that sympathy, and I said quite reasonably that hardly any sentences with “100%” in them are true, and the judge cut in to say that he would be instructing the jurors to put stuff like that aside (implying that then I would surely be able to), and I said that I would (but just didn’t say “100%”) and then The People came back in saying that they need people who are “certain” they can be unbiased (so, no way), but then actually asked me if I was “confident” that I could be (a vastly lower bar) so I said yes I would.

And when all of that was over, they had us all go out to the hallway again, and wait for awhile, and then go back in to sit in the same seats. And then they had I think four of us stand up and be sworn in as jurors, and the rest of us could go out with the officer and sit in the big jury room again until they had our little papers ready to say that we’d served four days of Jury Duty.

And that was it!

My impression is that they were looking for (inter alia, I’m sure) people who either believe, or are willing to claim to believe, that they can with certainty be 100% unbiased in their findings as jurors. That is, people who are in this respect either mistaken, or willing to lie. And that’s weird; I guess otherwise there’s too much danger of appeals or lawsuits or something? (Only for Guilty verdicts, presumably, since Not Guilty verdicts are unexaminable?) The Judge did say several times that something (the State, maybe?) demands a Yes or No answer to his “could you be an unbiased Juror and do as you’re told?” question, and when people said “I’ll try” or “I think so” or “I’d do my best” or whatever, he insisted on a “Yes” or a “No”. (So good on the honesty for those cop-family people saying “No”, I suppose.)

So if my calculations are roughly correct, after ummm two or 2.5 days of Jury Selection, they’ve selected only about 10 jurors, and exhausted the Jan 13th jury draw; so since they need at least 12 jurors and 2 (and perhaps more like 6) alternates, they’re going to be at this for some time yet! (Hm, unless it’s not a felony case? In which case 10 might be enough? But it sounded like a felony case.)

2022/12/04

Omelas, Pascal, Roko, and Long-termism

In which we think about some thought experiments. It might get long.

Omelas

Ursula K. Le Guin’s “The Ones Who Walk Away From Omelas” is a deservedly famous very short story. You should read it before you continue here, if you haven’t lately; it’s all over the Internet.

The story first describes a beautiful Utopian city, during its Festival of Summer. After two and a half pages describing what a wise and kind and happy place Omelas is, the nameless narrator reveals one particular additional thing about it: in some miserable basement somewhere in the city, one miserable child is kept in a tiny windowless room, fed just enough to stay starvingly alive, and kicked now and then to make sure they stay miserable.

All of the city’s joy and happiness and prosperity depends, in a way not particularly described, on the misery of this one child. And everyone over twelve years old in the city knows all about it.

On the fifth and last page, we are told that, now and then, a citizen of Omelas will become quiet, and walk away, leaving the city behind forever.

This is a metaphor (ya think?) applicable whenever we notice that the society (or anything else) that we enjoy, is possible only because of the undeserved suffering and oppression of others. It suggests both that we notice this, and that there are alternatives to just accepting it. We can, at least, walk away.

But are those the only choices?

I came across this rather excellent “meme” image on the Fedithing the other day. I can’t find it again now, but it was framed as a political-position chart based on reactions to Omelas, with (something like) leftists at the top, and (something like) fascists at the bottom. “Walk away” was near the top, and things like “The child must have done something to deserve it” nearer the bottom. (Pretty fair, I thought, which is why I’m a Leftist.)

It’s important, though, that “Walk away” wasn’t at the very top. As I recall, the things above it included “start a political movement to free the child”, “organize an armed strike force to free the child”, and “burn the fucking place to the ground” (presumably freeing the child in the process), that latter being at the very top.

But, we might say, continuing the story, Omelas (which is an anagram of “Me also”, although I know of no evidence that Le Guin did that on purpose) has excellent security and fire-fighting facilities, and all of the top three things will require hanging around in Omelas for a greater or lesser period, gathering resources and allies and information and suchlike.

And then one gets to, “Of course, I’m helping the child! We need Councilman Springer’s support for our political / strike force / arson efforts, and the best way to get it is to attend the lovely gala he’s sponsoring tonight! Which cravat do you think suits me more?” and here we are in this quotidian mess.

Pascal

In the case of Omelas, we pretty much know everything involved. We don’t know the mechanism by which the child’s suffering is necessary for prosperity (and that’s another thing to work on fixing, which also requires hanging around), but we do know that we can walk away, we can attack now and lose, or we can gather our forces and hope to make a successful attack in the future. And so on. The criticism, if it can even be called that, of the argument, is that there are alternatives beyond just accepting or walking away.

Pascal’s Wager is a vaguely similar thought experiment in which uncertainty is important; we have to decide in a situation where we don’t know important facts. You can read about this one all over the web, too, but the version we care about here is pretty simple.

The argument is that (A) if the sort of bog-standard view of Christianity is true, then if you believe in God (Jesus, etc.) you will enjoy eternal bliss in Heaven, and if you don’t you will suffer for eternity in Hell, and (B) if this view isn’t true, then whether or not you believe in God (Jesus, etc.) doesn’t really make any difference. Therefore (C) if there is the tiniest non-zero chance that the view is true, you should believe it on purely selfish utilitarian grounds, since you lose nothing if it’s false, and gain an infinite amount if it’s true. More strongly, if the cost of believing it falsely is any finite amount, you should still believe it, since a non-zero probability of an infinite gain has (by simple multiplication) an infinite expected value, which is larger than any finite cost.

The main problem with this argument is that, like the Omelas story but more fatally, it offers a false dichotomy. There are infinitely more possibilities than “bog-standard Christianity is true” and “nothing in particular depends on believing in Christianity”. Most relevantly, there are an infinite number of variations on the possibility of a Nasty Rationalist God, who sends people to infinite torment if they believed in something fundamental about the universe that they didn’t have good evidence for, and otherwise rewards them with infinite bliss.

This may seem unlikely, but so does bog-standard Christianity (I mean, come on), and the argument of Pascal’s Wager applies as long as the probability is at all greater than zero.

Taking into account Nasty Rationalist God possibilities (and a vast array of equally useful ones), we now have a situation where both believing and not believing have infinite expected advantages and infinite expected disadvantages, and arguably they cancel out and one is back wanting to believe either what’s true, or what’s finitely useful, and we might as well not have bothered with the whole thing.
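(A small aside for the expected-value-inclined: the arithmetic behind all this is easy to sketch. Here is a minimal illustration in Python, with made-up probabilities and the usual trick of letting float('inf') stand in for infinite payoffs; the specific numbers are invented and only the structure matters.)

# Pascal's dichotomy: any non-zero chance of an infinite payoff swamps any finite cost.
INF = float('inf')

def expected_value(outcomes):
    # outcomes is a list of (probability, payoff) pairs
    return sum(p * v for p, v in outcomes)

p_god = 1e-9                    # invented: arbitrarily small but non-zero
cost_of_false_belief = -1000.0  # invented: any finite cost will do

ev_believe = expected_value([(p_god, INF), (1 - p_god, cost_of_false_belief)])
ev_disbelieve = expected_value([(p_god, -INF), (1 - p_god, 0.0)])
print(ev_believe, ev_disbelieve)    # inf -inf: the Wager says "believe"

# Now add a Nasty Rationalist God (punishes unevidenced faith, rewards disbelief)
# with any non-zero probability: each option picks up both a +inf and a -inf term,
# the sums come out as nan, and the Wager no longer tells us anything.
p_nasty = 1e-9
ev_believe_2 = expected_value([(p_god, INF), (p_nasty, -INF), (1 - p_god - p_nasty, cost_of_false_belief)])
ev_disbelieve_2 = expected_value([(p_god, -INF), (p_nasty, INF), (1 - p_god - p_nasty, 0.0)])
print(ev_believe_2, ev_disbelieve_2)  # nan nan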

Roko

Roko’s Basilisk is another thought experiment that you can read about all over the web. Basically it says that (A) it’s extremely important that a Friendly AI is developed before a Nasty AI is, because otherwise the Nasty AI will destroy humanity and that has like an infinite negative value given that otherwise humanity might survive and produce utility and cookies forever, and (B) since the Friendly AI is Friendly, it will want to do everything possible to make sure it is brought into being before it’s too late because that is good for humanity, and (C) one of the things that it can do to encourage that, is to create exact copies of everyone that didn’t work tirelessly to bring it into being, and torture them horribly, therefore (D) it’s going to do that, so you’d better work tirelessly to bring it into being!

Now the average intelligent person will have started objecting somewhere around (B), noting that once the Friendly AI exists, it can’t exactly do anything to make it more likely that it will be created, since that’s already happened, and causality only works, y’know, forward in time.

There is a vast (really vast) body of work by a few people who got really into this stuff, arguing in various ways that the argument does, too, go through. I think it’s all both deeply flawed and sufficiently well-constructed that taking it apart would require more trouble than it’s worth (for me, anyway; you can find various people doing variously good jobs of it, again, all over the InterWebs).

There is a simpler variant of it that the hard-core Basiliskians (definitely not what they call themselves) would probably sneer at, but which kind of almost makes sense, and which is simple enough to express in a way that a normal human can understand without extensive reading. It goes something like (A) it is extremely important that a Friendly AI be constructed, as above, (B) if people believe that that Friendly AI will do something that they would really strongly prefer that it not do (including perhaps torturing virtual copies of them, or whatever else), unless they personally work hard to build that AI, then they will work harder to build it, (C) if the Friendly AI gets created and then doesn’t do anything that those who didn’t work hard to build it would strongly prefer it didn’t do, then next time there’s some situation like this, people won’t work hard to do the important thing, and therefore whatever it is might not happen, and that would be infinitely bad, and therefore (D) the Friendly AI is justified in doing, even morally required to do, a thing that those who didn’t work really hard to build it, would strongly rather it didn’t do (like perhaps the torture etc.). Pour encourager les autres, if you will.

Why doesn’t this argument work? Because, like the two prior examples that presented false dichotomies by leaving out alternatives, it oversimplifies the world. Sure, by retroactively punishing people who didn’t work tirelessly to bring it into being, the Friendly AI might make it more likely that people will do the right thing next time (or, for Basiliskians, that they would have done the right thing in the past, or whatever convoluted form of words applies), but it also might not. It might, for instance, convince people that Friendly AIs and anything like them were a really bad idea after all, and touch off the Butlerian Jihad or… whatever exactly that mess with the Spacers was in Asimov’s books that led to there being no robots anymore (except for that one hiding on the moon). And if the Friendly AI is destroyed by people who hate it because of it torturing lots of simulated people or whatever, the Nasty AI might then arise and destroy humanity, and that would be infinitely bad!

So again we have a Bad Infinity balancing a Good Infinity, and we’re back to doing what seems finitely sensible, and that is surely the Friendly AI deciding not to torture all those simulated people because duh, it’s friendly and doesn’t like torturing people. (There are lots of other ways the Basilisk argument goes wrong, but this seems like the simplest and most obvious and most related to the guiding thought, if any, behind this article here.)

Long-termism

This one is the ripped-from-the-headlines “taking it to the wrong extreme” version of all of this, culminating in something like “it is a moral imperative to bring about a particular future by becoming extremely wealthy, having conferences in cushy venues in Hawai’i, and yes, well, if you insist on asking, also killing anyone who gets in our way, because quadrillions of future human lives depend on it, and they are so important.”

You can read about this also all over the InterThings, but its various forms and thinkings are perhaps somewhat more in flux than the preceding ones, so perhaps I’ll point directly to this one for specificity about exactly which aspect(s) I’m talking about.

The thinking here (to give a summary that may not exactly reflect any particular person’s thinking or writing, but which I hope gives the idea) is that (A) there is a possible future in which there are a really enormous (whatever you’re thinking, bigger than that) number of (trillions of) people living lives of positive value, (B) compared to the value of that future, anything that happens to the comparatively tiny number of current people is unimportant, therefore (C) it’s morally permissible, even morally required, to do whatever will increase the likelihood of that future, regardless of the effects on people today. And in addition, (D) because [person making the argument] is extremely smart and devoted to increasing the likelihood of that future, anything that benefits [person making the argument] is good, regardless of its effects on anyone else who exists right now.

It is, that is, a justification for the egoism of billionaires (like just about anything else your typical billionaire says).

Those who have been following along will probably realize the problem immediately: it’s not the case that the only two possible timelines are (I) the one where the billionaires get enough money and power to bring about the glorious future of 10-to-the-power-54 people all having a good time, and (II) the one where billionaires aren’t given enough money, and humanity becomes extinct. Other possibilities include (III) the one where the billionaires get all the money and power, but in doing so directly or indirectly break the spirit of humanity, which as a result becomes extinct, (IV) the one where the billionaires see the light and help do away with capitalism and private property, leading to a golden age which then leads to an amount of joy and general utility barely imaginable to current humans, (V) the one where the billionaires get all the money and power and start creating trillions of simulated people having constant orgasms in giant computers or whatever, and the Galactic Federation swings by and sees what’s going on and says “Oh, yucch!” and exterminates what’s left of humanity, including all the simulated ones, and (VI) so on.

In retrospect, this counterargument seems utterly obvious. The Long-termists aren’t any better than anyone else at figuring out the long-term probabilities of various possibilities, and there’s actually a good reason that we discount future returns: if we start to predict forward more than a few generations, our predictions are, as all past experience shows, really unreliable. Making any decision based solely on things that won’t happen for a hundred thousand years or more, or that assume a complete transformation in humanity or human society, is just silly. And when that decision just happens to be to enrich myself and be ruthless with those who oppose me, everyone else is highly justified in assuming that I’m not actually working for the long-term good of humanity, I’m just an asshole.

(There are other problems with various variants of long-termism, a notable one being that they’re doing utilitarianism wrong and/or taking it much too seriously. Utilitarianism can be useful for deciding what to do with a given set of people, but it falls apart a bit when applied to deciding which people to have exist. If you use a summation, you find yourself morally obliged to prefer a trillion barely-bearable lives to a billion very happy ones, just because there are more of them. Whereas if you go for the average, you end up being required to kill off unhappy people to get the average up. And a perhaps even more basic message of the Omelas story is that utilitarianism requires us to kick the child, which is imho a reductio. Utilitarian calculus just can’t capture our moral intuitions here.)
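(Since I just waved my hands at the arithmetic there: here is a tiny back-of-the-envelope sketch in Python of the summation-versus-average problem, with utility numbers invented purely for illustration; only the shape of the comparison matters.)

# Total utilitarianism: a trillion barely-bearable lives out-sum a billion very happy ones.
def total_utility(n_people, utility_each):
    return n_people * utility_each

print(total_utility(10**12, 0.1))   # 1e11: a trillion lives, each barely worth living
print(total_utility(10**9, 50.0))   # 5e10: a billion genuinely happy lives -- the sum prefers the former

# Average utilitarianism: removing the least happy people raises the average,
# so taken literally it rewards getting rid of unhappy people.
def average_utility(utilities):
    return sum(utilities) / len(utilities)

population = [1.0, 2.0, 3.0, -0.5, 0.1]
print(average_utility(population))                           # 1.12
print(average_utility([u for u in population if u > 1.0]))   # 2.5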

Coda

And that’s pretty much that essay. :) Comments very welcome in the comments, as always. I decided not to add any egregious pictures. :)

It was a lovely day, I went for a walk in the bright chilliness, and this new Framework laptop is being gratifyingly functional. Attempts to rescue the child from the Omelas basement continue, if slowly. Keep up the work!

2022/11/21

NaNoWriMo 2022, Fling Thirty-Seven

There are books everywhere; this makes me happy. There is diffuse moonlight coming in through the windows; this also makes me happy.

Essentially none of the books here are in any of the very few languages that I can read. This also makes me happy, in a way. There is so much to know already, this only emphasizes the point. And some of them have really interesting illustrations.

The books fill the shelves, and lie in piles on the floor. I walk from place to place, and sometimes sit on something that may be a chair. It’s just like home.

As the being known as Tibbs negotiates with the locals to get us access to the zone containing the confluence, and Steve and Kristen wander the city like happy tourists (well, that is not a simile, really; they wander the city as happy tourists), I have drifted here, which feels entirely natural.

And now, having communicated the above to my hypothetical reader without becoming distracted (except for that one parenthetical about simile versus plain description), I can calmly note that these things may not be “books” in the obvious sense, that moonlight coming in through the windows is a phenomenon that quantum physics can just barely explain, and for that matter that “makes me happy” is enough to keep a phalanx (a committee, a department, a specialty) of psychology and anthropology scholars occupied for a very long time.

And that time is an illusion.

I’ve always been able to separate language from thoughts about language, to separate thoughts about reality from meta-thoughts about thoughts. At least in public. At least when talking to other people.

But I think I’m better at it now, even when I’m in private, just talking to myself, or writing for you, dear cherished hypothetical reader (cherished, inter alia, for your hypothetically inexhaustible interest and patience).

Ironically (so much to learn about irony!), I credit much of this improvement to long discussions (how long? how does the flow of time go in the cabin of an impossibly fast sharp vehicle, speeding twinnedly from one end to another of an infinite rainbow band?) with an arguably non-existent being called Tibbs, and an enigmatic pilot called Alpha, after her ship.

(Why do we use the female pronoun toward the pilot Alpha? Why does she speak English? Or how do we communicate with her if she does not? Is the intricate shiny blackish plastic or metal construct at the front of her head a helmet, or her face? Is the rest of her a uniform, flight suit, or her own body, or some of each, or entirely both? Would it have been rude to ask?)

Tibbs and Alpha, I feel, are kindred spirits, my kindred, beings blessed or cursed with a tendency to look through everything and try to see the other side, even knowing there is finally no other side, to overthink, to overthink the notion of overthinking. But they have, perhaps, had longer to get used to it.

The being Tibbs claims to be millions of years old, while also claiming to have slept for most of that time. The Pilot Alpha suggests, by implying (how? what words or gestures did she actually use?) that questions of her origin are meaningless, that she has always existed, or has at least existed for so long that information about her coming-to-be is no longer available in the universe, having been lost to entropy long since.

(At what level is entropy real? Time is an illusion; so is entropy a statement about memory? A statement about what we remember, compared to what we experience right now and anticipate, right now, about the future? Or should we only talk about entropy after we have thoroughly acknowledged that time is an illusion, but gone on to speak about it as though it were real anyway, with only a vague notion, an incomplete explanation, of why that is okay?)

Here is a thought about the illusory nature of the past and future: given that this present moment is all that exists, then all that exists outside of memory and anticipation, is this one breath, this one side of this one room containing these shelves and piles (never enough shelves!) of books, or the appearance of books.

Everything else, the long / short / timeless journey aboard the fast sharpness Alpha, the anticipation felt while listening to the sound of something like wind just before meeting Tibbs for the first time, Kristen’s palm against my cheek in that other library, the glittering brass something that she received from the Mixing, the fine hairs at the back of her neck, all of that is only (“only”?) memory. Does that mean that it is no more important, no more valid, no more real, than anything else purely mental? No more significant than a wondrous pile of multi-colored giant ants made of cheese, singing hypnotic songs, that I have just this moment imagined, and now remember as a thing I first thought of a few moments ago?

This seems… this seems wrong. (See how the ellipsis there, if you are experiencing these words in a context in which there was one, adds a certain feel of spoken language, and perhaps therefore conveys some nuance of emotion that otherwise would be missing? That is communication, in a complicated and non-obvious form.)

Here is a hypothesis put forward I think by the being Tibbs, as it (he? she? they? they never expressed a preference that I can recall) moved slowly and in apparent indifference toward gravity around the front of the cabin of the Alpha: Some of the things, people, situations, events, in memory are especially significant (valid, important, “real”) because they changed me. Others, while equally (what?) wonderful themselves, like the pile of cheese-ants, did not have as much impact, or the same kind of impact.

If we could work out a good theory of what it means for an object or event (or, most likely, an experience) to change a person, given that time is an illusion, then this seems promising.

The Pilot Alpha seemed in some way amused by my desire, or my project, to develop a systematic justification for (something like) dividing the world (dividing memory) into “real” things and “imagined” things, with the former being more important or more valid (or, as the being Tibbs suggested, more cool) than the latter. Amused in a slightly condescending way, perhaps, which is fine for a possibly-eternal being, but which (equally validly) I am personally rather sensitive to, given my own developmental history.

The being Tibbs, however, was not accurate in referring to my just-subsequent behavior as “a snit”.

The moonlight coming through the windows (however that might be occurring) is diffuse because it comes through the visually-thick atmosphere of this world, or this area of this world. It seems implausible that we can breathe the atmosphere without danger; is this evidence that we are in a virtuality? Is it reasonable that I nearly wrote “a mere virtuality”? Was that because “mere” would have had a meaning there that I would have been expressing? (What is it to “express” a “meaning”?) Or only because “mere virtuality” is a phrase that occurs in my memory in (remembered) situations with (for instance) similar emotional tone? What is emotional tone?

I anticipate that the being Tibbs will return to this long library room within a certain amount of time, most likely with some information to convey (what is it to “convey” some “information”?) about our continuing travels (why are we travelling? what is “travel”?). I anticipate that (the humans) Kristen and Steve will return to this long library room within a certain amount of time, most likely exchanging cute little looks and possibly holding hands, possibly having acquired some odd or ordinary souvenir in the bazaars of the city (but is this a city? does it have bazaars? what counts as a bazaar?).

And I look forward to whatever follows.

Fling Thirty-Eight

2022/11/12

NaNoWriMo 2022, Fling Twenty-One

“Here again, eh? How’s the metal bar coming? My-head-wise, that is?”

He had opened his eyes, again, to see Colin and Kris sitting on the ground with him. It was like no time had passed at all, but also like that last time had been a long time ago.

“No worse, but no progress, I’m afraid,” Colin said.

“How long has it been?”

“Four days since the accident.”

“Not too bad. Is it, um, is it more stuck than they thought?”

The virtuality seemed thinner and greyer, and the clouds were more like wisps and rivers of mist, moving faster than the wind between the mountains.

Kristen moved closer to him and rubbed his back. He felt it in a vague and indirect way; it still felt good.

“Yeah,” she said, “they tried once, but they … didn’t like how it was going.”

“Am I gonna die, then?”

“Probably eventually,” Colin said. Kris rolled her eyes.

A strange wind seemed to blow through the virtuality, through him. He felt himself thinning out somehow, and his viewpoint rising into the air.

“Whoa,” he said, and his voice came to him strange and thready.

Colin and Kris stood up, in the virtuality, and looked up toward his viewpoint.

“What’s happening?” he said, his voice still fluttering.

“I’m … not sure,” Kris said.

“Probably just the fMRI connection again?” Colin said, uncharacteristically uncertain.

“Booooo!” Steve said, his viewpoint now moving up and down and bobbing side to side. As far as he could see his own body, it was stretched out and transparent, “I’m a ghooooost!”

Kris put her hand to her forehead and looked down.

“Stop that,” she said, “at least if you can.”

Steve tried to concentrate, to focus on the patterns and concentrations of being in the virtuality, and his viewpoint moved downward slowly.

“Here you come,” Colin smiled.

Steve watched himself re-form with curiosity. “Was that supposed to happen?” he asked.

“Not… especially.”

“Are they working on me again, trying to get the thing out?”

“No, they were just doing some more scans and tests.”

“Including how I interact with you guys?”

“Like last time,” Colin nodded.

“Then why –”

Then there was another, much stronger, gust of that wind, and Steve felt himself torn away, stretched out into mist, and blown somewhere far away.

There was an instant, or a day, of darkness.

“Hello?”

“Steve?” It was Kristen’s voice, somewhere in this dark place.

“Kris? Are we, I mean, is this the real world again?”

“I don’t know.”

“What’s real, after all?”

“Colin, you nerd, where are you?”

“Apparently in absolute darkness, where all I can do is speak, and hear you two.”

“Is this part of your virtuality, Kris?”

“No, I mean, it’s not supposed to be. Not Hints of Home or the fork that I made for you, or any other one I’ve made.”

“It’s really simple, anyway.”

“Could be a bug.”

Steve tried to move, tried to see. It was like trying to move your arm, rather than actually moving it. It was like trying to open your eyes in one of those dreams where you can’t open your eyes.

“You two can just duck back out for a second and see what’s going on, right?”

“That, well, that seems like it’s a problem, too,” Colin said, “at least I can’t duck out; you, Kris?”

“Nope, me, either; it’s heh weird. What a trope.”

“This is always happening to main characters,” Colin said, “so I guess we’re main characters.”

“You could be figments of my imagination now,” Steve said, “like, that wind could have been the virtuality’s interpretation of some brain thing happening, and now I’m totally comatose and hallucinating, and you two are still back there, and I’m just hallucinating that your voices are here.”

“Babe–” Kris started.

“It’s true,” Colin said, “if we were imaginary, it would probably be easier to imagine just our voices, and not have to bother with faces and movement and so on.”

“Oh, that’s helpful!”

“You guys trying to convince me you’re real, through pointless bickering?”

“No, but would it work?”

“It might. I’m hating this darkness, could everyone try to summon up some light?”

There was a short silence, then Steve felt a sort of vibration through everything, and a dim directionless light slowly filled the nearby space.

“That worked.”

“As far as it goes.”

“We still look like us.”

“This isn’t what I was wearing, though.”

“Colin’s the same.”

“Well, what else does he ever wear?”

“Hey!”

Colin did indeed look as he had earlier in the virtuality, as perfectly and nattily suited as ever. Kristen, on the other hand, was wearing a loose flowered dress, and Steve was in a well-tailored suit himself, less formal (and, he imagined, more comfortable) than Colin’s, but still he thought rather elegant.

“This is very gender normative,” Kristen said, standing up and slowly turning around, “but I like it.”

“Where are we?” Steve said, “and why can’t you guys duck out? I know I can’t because I’m the patient and my body’s sedated, but…”

“Wow, I hope we’re okay,” Kristen said.

“If something had happened to our bodies, we should have gotten a warning, and probably pulled out automatically,” Colin said logically.

“I don’t know,” said Steve, “the hallucination theory still seems pretty good.”

“That way lies solipsism,” pointed out Kris. She spun over to Steve and touched his shoulder.

“I felt that,” he said.

She frowned. “Me, too. Really well.”

“See? Hallucinations.”

“I’d know if I was a hallucination,” Kris said.

Colin was walking around at the edge of the lighted circle.

“I wonder if this is all there is,” he said.

“It’s a small hallucination, sorry,” said Steve.

“This could be the whole universe,” he said, “although I seem to remember lots more stuff.”

“Colin–”

“This present moment is all that exists,” Colin said, “and all the other stuff is just a memory, that also exists right now.”

“Here he goes.”

“It might pass the time.”

“Shouldn’t we try to be, like, getting back to the real world, making sure our bodies are all okay…”

“If you can think of a way to do that…”

“Good point.”

Colin walked back into the center of the lighted circle, and the three sat down on the plain flat ground again, close to each other, surrounded by darkness all around.

Fling Twenty-Two

2022/11/12

NaNoWriMo 2022, Fling Twenty

Everything comes together deep down.

The gentle tendrils of the mushrooms and the fungi, the mycelia, form into knots beneath the damp ground, and those knots reach out and connect to each other, knots of knots of knots connected in a single vast sheet below the world.

The fungi do not think.

But they know.

There are more connections in the mycelia of the rich dark earth of a single farm, than in the brain of the greatest human genius.

But they do not think.

The stars are connected, by channels where gravity waves sluice in and out of the twelve extra dimensions of the universe, the ones whose nature we haven’t figured out yet.

The stars… the stars think.

But they do not know.

The fungal and stellar networks found each other and connected a long time ago.

Every tree and every stone, every mammal footstep, every shovel of earth. Every spaceship and satellite launch.

They are always watching.

Or no.

Every tree, stone, footstep, and every launch, are part of the network already.

Every tree, stone, footstep, and every launch, is just the galactic star-fungus network, thinking, and knowing.

“Really?”

“I mean, absolutely. There’s no way it could be false.”

“They’re connected? We’re … part of their giant brain?”

“Of course. Everything is part of everything.”

“I — but if it isn’t falsifiable…?”

“That’s right, it’s not really a scientific theory. It’s more a way of thinking.”

“A religion?”

“A philosophy, more.”

“But if it isn’t true…”

“Oh, it’s true.”

“Stars and fungus… sounds sort of paranoid.”

“Nah, it’s just how the universe is; everything is connected, and the fungi and the stars more than anything else.”

“How did they find each other?”

“How could they not have? It was inevitable. Necessary.”

The stars and the mycelium resonate as the ages roll on. Life comes into being, and the network reacts, rings, with pure tones in every octave of the spectrum. War is a rhythmic drumming; peace is a coda, or an overture. And death is percussion.

Deep in the space between the stars, there are nodes where major arteries of coiled dimensions cross and knot, just as the mycelia cross and knot deeper and deeper into the intricate ground. In the space around a star-node, in the stone circles above the spore-nodes, beings dance, constituting and manifesting the thoughts of the stars, and the knowledge of the mushrooms.

“Like, faerie circles? There are … star circles of some kind, out in space?”

“There are. Things gather at them, tiny things and big things, people from planets coming in their starships, and beings that evolved there in space, floating in years-long circles on the propulsion of vast fins pushing on interstellar hydrogen.”

“That seems like something that might not be true. What if we go out in a star ship sometime, and there’s nothing like that out there?”

“There is. An endless array of them.”

“How do you know that?”

Those who dance at the nodes of the stars and the fungi, over the centuries, absorb the thinking and knowledge of the infinite universe. Whence our stories of wise ones, of wizards, of the Illuminati. Whence the yearning songs of the star-whales, of forgotten ancient wisdom, and secret covens in the darkness.

Those who evolve on planets have an affinity to the fungal nodes. Those who evolve between the stars have an affinity for the stellar nodes. They complement and complete each other.

No planetary culture is mature until it has allied with a stellar culture.

No stellar culture is mature until it has allied with a planetary culture.

“So are the, y’know, the Greys, have they visited Earth to see if we’re worthy of allying with? Are they, like, an immature stellar culture looking for a fungus-centered culture to hook up with?”

“I don’t know.”

“You don’t know? Haven’t you heard about it in the fungusvine?”

“Fungusvine, funny.”

“Myceliavine?”

Everything comes together deep down.

The semantic tendrils of the realities extend, purely by chance, into the interstices between universes. Over endless time, over expansions and collapses, rollings in and rollings out, the tendrils interact, purely by chance, and meaning begins to flow.

Knots, and knots of knots, and knots of knots of knots, forming a vast extradimensional network that binds the realities together.

Every reality is underlain by its own networks, of mycelia and gravitational strings, or aether winds and dragon spines, the thoughts of Gods and the songs of spirits, or thrint hamuges and the fletts of tintans. And the network of each reality connects to the extradimensional network, and thereby to everything else.

Every tree, stone, footstep, and every starship launch, is part of the unthinkably vast mind of the universe, heart of the universe, the sacred body of everything, in the largest sense.

“Ooh! Are there, like, reality-witches, who find nodes in the network between the realities, and have dances and stuff there, and slowly gather extradimensional wisdom?”

“Of course, there are!”

“I want to be one of those.”

“Oh, you will.”

The mind, heart, interconnected web of the universe, the multiverse, thinks (and feels and knows) slowly, deliberately. For a single impulse to travel from one end to the other, if the web had ends, would take almost an eternity. But for the resonating tone, the mood, the energy fluxes, of the network to change, all over, from end to end, takes only an instant.

“Wouldn’t that violate the speed of light and all?”

“Different realities, different speed limits.”

“I don’t know, it seems like you could use that to cheat.”

“You absolutely can.”

It is a category mistake to think that because All Is One, I can make a free transcontinental phone call.

But it is universally true that the extradimensional web of interconnections holds ultimate wisdom.

You are a neuron of the multiversal Mind, you are a beat of the multiversal Heart. You resonate always in harmony with its thoughts, its knowings, its feelings. You can accept the harmony or try to reject it, and either way you are sending your signal from one reality to another, and your breathing is a message to another universe.

Fling Twenty-One

2022/11/07

NaNoWriMo 2022, Fling Twelve

Light passes through windows. This is a puzzle. This is a complicated story, a story that no one understands, in four words.

Beams of sunlight pass through the library windows, making patterns on the wall.

Sitting where I sit, among the shelves and the piles of books, I see beams of sunlight passing through the library windows, making patterns on the wall.

My evidence for the existence of beams of sunlight is (at least) in two parts: I see dust motes dancing (dancing? what is it to dance? what kinds of things can dance?) in the sunbeams, visible (to me) in the light, where in other parts of the (dusty) library air, there are no (to me) visible dust motes dancing (or otherwise) in the air, and one explanation (is it the best one? what is best?) is that there is a sunbeam, or at least a beam of light, passing through the air there. (How does a beam of light pass through the air, let alone through the glass of the windows?)

The second part of my evidence is the patterns on the wall. I know, or I remember (in memories that I have now, at this moment, the only moment, the only entity, that exists) that the wall is really (what could that possibly mean?) more or less uniform in color; vaguely white, and the same vague whiteness more or less all over. But what I see, or what I see at some level of awareness (what are levels of awareness? what is awareness? who is aware?) is a complex pattern of light and dark, triangles and rectangles and more complex figures laid out and overlapping, and I theorize (automatically, involuntarily, whether or not I intend to) that these brighter shapes and triangles are where the sunbeam (passing through empty space, and then the air, the window, the air again) strikes the wall, and the darker areas, the ground to the sunbeam’s figure, are simply the rest, the shadows, where the sunbeam does not strike, or does not strike directly.

(Or the dark places are where the frames of the window and the edges of shelves and chairs, things outside and things inside, cast their shadows, and the light places, the ground to the shadows’ figure, is the rest, where the shadows do not fall; figure is ground and ground is figure.)

Can we avoid all of this complexity, if we hold Mind as primary? I am. Or, no, as generations of philosophers have pointed out, feeling clever to have caught Descartes in an error, not “I am” but only something along the lines of “Thought is”. If there is a thought that “thought is”, that can hardly be in error (well, to first order). But if there is a thought “I think”, that could be an error, because there might be no “I”, or it might be something other than “I” that is thinking.

Second attempt, then! Can we avoid all of this complexity, if we start with “Thought is”? Or “Experience is”?

Experience is. There is this instant of experience. In this instant of experience, there is a two-dimensional (or roughly two-dimensional) surface. Whether or not there is an understanding of dimensions and how to count them, there is either way still a two-dimensional surface, here in this experience. In some places, the two-dimensional surface is bright. In some places, it is dark; or it is less bright.

Whether or not there is any understanding of brightness and darkness, what might lead to brightness and darkness, anything about suns or windows or how they work, there is still this brightness, in this single moment of experience, and there is still this darkness.

(Whether the darkness is actually bright, just not as bright as the brightness, whether the surface is really two-dimensional in any strong sense, or somewhere between two and three dimensions, or whether dimensions are ultimately not a useful or coherent concept here, there is still, in this singular moment of experience that is all that there is, this experience, this experience, which is what it is, whether or not the words that might be recorded here as a result of it (and whatever “result” might mean) are the best words to describe it.)

And whether it is described at all, whether description is even possible, does not matter; the main thing, dare I write “the certain thing”, is that this (emphasized) is (similarly emphasized).

So.

From this point of view, we may say, Mind is primal. Mind, or whatever we are attempting successfully or unsuccessfully to refer to when we use the word “Mind”, does exist, or at any rate has whatever property we are attempting to refer to when we say “does exist”. Except that “refer to” and “property” are both deeply problematic themselves here.

This is why the ancient Zen teachers (who exist, in this singular present moment of experience, only as memories of stories, memories that exist now) are said, when asked deep and complex questions through the medium of language, and impossibly concerning language, to have responded with primal experience, with blows or shouts or (more mildly) with references to washing one’s bowl, chopping wood, carrying water.

We can remember that there is this. So, what next?

Language (this language, many other languages, those languages that I am familiar with, that I know of) assumes time.

Without the concept of time, it is difficult to speak, to write, to hypothesize.

Let alone to communicate.

To communicate!

The impossible gulf: that you and I both exist (what does it mean to exist?) and that by communicating (whatever that might be) one or both of us is changed (otherwise why bother?). But change, like time (because time), is an illusion.

So!

Is it necessary (categorically, or conditionally) to participate in the illusion? Or pretend? To act (or to speak, as speech is after all action) as though time and change were real?

The sun comes through the windows and casts shadows on the wall. Is there someone at the door?

Fling Thirteen

2022/10/08

Simulation Cosmology at the Commune

Last night I visited the Commune in my dreams. I’ve been there before, to the extent that that statement means anything; it’s a great dim ramshackle place full of omnigendered people and spontaneous events and kink and music and general good feeling. I’d love to get there more regularly (and in fact this time one of the regulars teased me for always saying how at home I felt, and yet only visiting once in a while; I said that it wasn’t that simple [crying_face_emoji.bmp]).

As well as having ice cream, I took part in a spirited discussion about whether this reality is just a simulation running in some underlying reality, and whether that underlying reality is also a simulation, and then eventually the question of whether that series has to eventually end in some reality that isn’t a simulation, or if it can just go on forever. (The question of whether we might be in, not a simulation, but a dream, and the extent to which that’s a different thing anyway, didn’t arise that I recall.)

I remember trying to take a picture with my phone, of some book that was interesting and relevant to the discussion, and of course it was a dream so the phone camera would only take a picture a few seconds after I pushed the button, and it was frustrating. (Also there was some plot about someone releasing very small robots to find and blow up the dictator of a foreign country, recognizing them by their DNA, but that probably counts as “a different dream”, to the extent that that means anything.)

Then after waking up, I took a lovely walk outside in the sun and 50-something air, and it was lovely, and I thought thoughts and things. So now I am writing words!

Anything that is consistent with all of your (all of my, all of one’s) experience so far, is true, in the sense that it is true for someone out there in the possible worlds who is exactly identical to you in experience, memory, and so on. (Which is in a sense all that there is.) And so it would seem mean or something to say it wasn’t true.

So there are many incompatible things that are true; it is true that you live in a simulation running on a planet-size diamond computer created by intelligent plants, and it is also true that you live in a simulation running on the equivalent of a cellphone owned by a golden-thewed Olympian deity who does chartered accountancy in his spare time.

It is probably also true that you don’t live in a simulation at all, but in a free-floating reality not in any useful sense running on any underlying other reality, created by some extremely powerful being from the outside, if that can be made logically coherent. And also that you live in a non-simulation reality that just growed, so to speak, without external impetus.

(I notice that I keep writing “love” for “live”, which is probably an improvement in most ways.)

The “if that can be made logically coherent” thing is interesting; in my undergraduate thesis I had to distinguish between “possible worlds” and “conceivable worlds”, because I needed a word for a category of worlds that weren’t actually possible (because they contained Alternate Mathematical Facts, for instance) but were conceivable in that one could think vaguely of them; otherwise people would turn out to know all necessary truths, and clearly we don’t.

So now given this liberal notion of truth that we’re using here, are the negations of some necessary truths, also true? Is there a version of you (of me) that lives in a world in which the Hodge Conjecture is true, and another in which it is false? Not sure. Probably! :)

That is, given that in the relevant sense my current experience (this present moment, which is all that exists) doesn’t differentiate between Hodge and not-Hodge, there’s no reason to say either Hodge or not-Hodge, and therefore neither Hodge nor not-Hodge is any truer than the other.

Eh, what?

It’s an interesting question what would happen if I got and thoroughly understood a correct proof that, say, not-Hodge. Would there at that point no longer be any of me living in a universe in which Hodge? Given that I can (think that I) have thoroughly understood something that is false, there would still be me-instances in worlds in which Hodge, I’m pretty sure. Which kind of suggests that everything is true?

Given all that, it seems straightforwardly true for instance that you (we) live in a simulation running in a reality that is itself a simulation running in a … and so on forever, unlikely as that sounds. Is there some plausible principle that, if true, would require that the sequence end somewhere? It sort of feels like such a principle ought to exist, but I’m not sure exactly what it would be.

It seems that if there is anything in moral / ethical space (for instance) that follows from being in a simulation, it would then be difficult and/or impossible to act correctly, given that we are both in and not in a simulation. Does that suggest that there isn’t anything moral or ethical that follows from being in a simulation? (I suspect that in fact there isn’t anything, so this would just support that.)

It’s true that you can communicate with the underlying reality that this reality is running on, by speaking to a certain old man in Toronto. It’s also true that you can communicate with that underlying reality by closing your eyes and thinking certain words that have long been forgotten almost everywhere. You can even be embodied in a physical body in that underlying reality! You could make your way down the realities, closer and closer to the source (which is also an infinite way off!)!

If you were to arrange to be downloaded into an individual body in the underlying reality, would you want the copy of you in this reality to be removed at the same time? That would sort of seem like suicide; on the other hand leaving a you behind who will discover that they didn’t make it to the underlying reality, might be very cruel. Perhaps the left-behind you could be (has been) flashy-thing’d, per Men in Black.

Another thing that appears to be true: it’s very hard to use any of these AI things to create an image that accurately reflects or elicits or even is compatible with my dreams of visiting the Commune and having ice cream! Not sure what I’m doing wrong. Here is one that I like in some way, even though it doesn’t really reflect the dream. (The center divider is part of the image; it’s not two images.) Enjoy!

2022/09/03

Those Born in Paradise

My old Jehovah’s Witness friend, from all those Saturdays ago, stopped by this morning! He says he’s just started doing door-to-door work again since the Pandemic started. He had with him a young Asian man, as they do, always traveling in pairs to avoid temptation and all.

As well as catching up on life and all, and him showing me the latest jay doubleyou dot org interactive Bible teachings and stuff, we talked a little about religion and philosophy.

He talked about how Jehovah has a name (“Jehovah”) as well as various titles (“God”, “Father”, etc), just like people do. (I didn’t ask where the name came from, although I am curious.) He said that, as with humans, Jehovah has a name because Jehovah is a person. I asked what that meant, and it came down to the idea that Jehovah has “a personality”. I tried to ask whence this personality came, and whether Jehovah could have had a different personality, but that was apparently a bit too advanced.

They claimed that one of Jehovah’s personality traits is humility, and this … surprised me. Their evidence for this was two pieces of Bible verse, one of which has nothing whatever to do with humility, and the other being Psalms 18:35, which the KJV renders as:

Thou hast also given me the shield of thy salvation: and thy right hand hath holden me up, and thy gentleness hath made me great.

but the JW’s favorite translation, the New World Translation has as:

You give me your shield of salvation,
Your right hand supports me,
And your humility makes me great.

Given all of the contrary evidence, about being jealous and wrathful and “where were you when the foundations of the Earth were laid?”, I was not convinced of the humility thing, and we sort of dropped it.

(The Hebrew is apparently “עַנְוָה” (wheee, bidirectional text!), which is variously translated as either “gentleness” or “humility” or “meekness”, with suggestions of “mercy”; imho “gentleness” makes more sense here, as I don’t know by what mechanism God’s humility would lead to David’s greatness, whereas God being gentle and merciful (about David’s flaws) is a better candidate.)

Anyway :) what I really wanted to talk about was the thing I’ve alluded to before, the puzzle where, in the JW theory, once we (well, the good people!) are in the Paradise Earth, there is still free will, and there is still sin, at a presumably small but still non-zero rate, and as soon as the sinner sins in their heart (before they can hurt anyone else) they just cease to be.

(I wrote a microfiction on this theme here, and it’s also a plot element in the 2020 NaNoWriMo novel. Just by the way. :) )

“Those Born in Paradise”, made with MidJourney of course

My concern with this JW theory was that, given eternity and free will, everyone will sin eventually, and so the Paradise Earth (and even Heaven, assuming the 144,000 are also like this, I’m not sure) will slowly slowly ever so slowly empty out! Uh oh, right?

But in talking to my JW friend (who opined that at least people wouldn’t sin very often, even though as I pointed out Adam and Eve were in roughly the same circumstances and they sinned like two hours in), it turns out that there is still birth on Paradise Earth!

That had not occurred to me. He was quick to point out that there wouldn’t be enough birth to make the place overcrowded (perhaps that’s something that lesser doubters bring up?). I said that sure, I guess there’s just enough to make up for the rate of insta-zapped sinners! (I did not actually use the term “insta-zapped”.)

So that solves that puzzle. It does seem inevitable that eventually the only people will be people who were born in the Paradise Earth (or heaven?), and who therefore didn’t have to go through the whole “world dominated by Satan” phase, but only learn about it in History class or something.

Which seems kind of unfair to the rest of us! But there we are. As I say, some interesting stories to be written in that setting.

Neither my JW friend nor the younger person he was going door-to-door with seemed entirely comfortable with my theory, even though it’s the obvious consequence of their beliefs. I hope I didn’t disturb their faith at all, hee hee. (I like to think that there is some sort of warning next to my address in their list of people to visit, not to send anyone unsteady in their faith; it’s not very likely, but I like to think it anyway.)

2022/04/30

“The Ruliad”: Wolfram in Borges’ Library

I think people have mostly stopped taking Stephen Wolfram very seriously. He did some great work early in his career, at CalTech and the Institute for Advanced Study, and (with a certain amount of intellectual property mess) went on to create Mathematica, which was and is very cool.

Then in 1992 he disappeared into a garret or something for a decade, and came out with the massive A New Kind of Science, which got a lot of attention because it was Wolfram after all, but which turned out to be basically puffery. And a certain amount of taking credit for other people’s earlier work.

Being wealthy and famous, however, and one imagines rather surrounded by yes-folks, Wolfram continues in the New Kind of Science vein, writing down various things that sound cool, but don’t appear to mean much (as friend Steve said when bringing the current subject to my attention, “Just one, single, testable assertion. That’s all I ask”).

The latest one (or a latest one) appears to be “The Ruliad”. Wolfram writes:

I call it the ruliad. Think of it as the entangled limit of everything that is computationally possible: the result of following all possible computational rules in all possible ways.

It’s not clear to me what “entangled” could mean there, except that it’s really complicated if you try to draw it on a sheet of paper. But “the result of following all possible computational rules in all possible ways” is pretty clearly isomorphic to (i.e. the same thing as) the set of all possible strings. Which is to say, the set of all possible books, even the infinitely-long ones.

(We can include all the illustrated books by just interpreting the strings in some XML-ish language that includes SVG. And it’s probably also isomorphic to the complete graph on all possible strings; that is, take all of the strings, and draw a line from each one to all of the others. Or the complete graph on the integers. Very entangled! But still the same thing for most purposes.)

Now the set of all possible strings is a really amazing thing! It’s incomprehensibly huge, even if we limit it to finite strings, or even finite strings that would fit in a reasonably-sized bound volume.
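Just to make “the set of all possible strings” concrete, here is a tiny sketch (my own toy illustration, nothing from Wolfram) that enumerates the finite strings over a two-letter alphabet, shortest first; every finite “book” over that alphabet shows up eventually, though “eventually” is doing an enormous amount of work.

    from itertools import count, product

    def all_strings(alphabet="ab"):
        """Yield every finite string over `alphabet`, shortest first."""
        yield ""  # the empty string
        for length in count(1):
            for chars in product(alphabet, repeat=length):
                yield "".join(chars)

    # Print the first dozen strings: '', 'a', 'b', 'aa', 'ab', ...
    for _, s in zip(range(12), all_strings()):
        print(repr(s))

The same enumeration works for any finite alphabet, which is one way of seeing that the finite strings form a single countable (if incomprehensibly vast in practice) collection.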

And if we do that latter thing, what we have is the contents of the Universal Library, from Borges’ story “The Library of Babel”. As that story notes, the Library contains

All — the detailed history of the future, the autobiographies of the archangels, the faithful catalog of the Library, thousands and thousands of false catalogs, the proof of the falsity of those false catalogs, a proof of the falsity of the true catalog, the gnostic gospel of Basilides, the commentary upon that gospel, the commentary on the commentary on that gospel, the true story of your death, the translation of every book into every language, the interpolations of every book into all books, the treatise Bede could have written (but did not) on the mythology of the Saxon people, the lost books of Tacitus.

Borges — The Library of Babel

It also contains this essay, and A New Kind of Science, and every essay Wolfram will ever write on “the Ruliad”, as well as every possible computer program in every language, every possible finite-automaton rule, and to quote Wolfram “the result of following all possible computational rules in all possible ways.” (We’ll have to allow infinite books for that one, but that’s a relatively simple extension, heh heh.)

So, it’s very cool to think about, but does it tell us anything about the world? (Spoiler: no.) Wolfram writes, more or less correctly:

it encapsulates not only all formal possibilities but also everything about our physical universe—and everything we experience can be thought of as sampling that part of the ruliad that corresponds to our particular way of perceiving and interpreting the universe.

and sure; for any fact about this particular physical universe (or, arguably, any other) and anything that we experience, the Library of Babel, the set of all strings, the complete graph on all strings, “the Ruliad”, contains a description of that fact or experience.

Good luck finding it, though. :)

This is the bit that Wolfram seems to have overlooked, depending on how you read various things that he writes. The set of all strings definitely contains accurate statements of the physical laws of our universe; but it also contains vastly more inaccurate ones. Physicists generally want to know which are which, and “the Ruliad” isn’t much help with that.

Even philosophers who don’t care that much about which universe we happen to be in, still want correct or at least plausible and coherent arguments about the properties of formal systems, or the structure of logic, or the relationship between truth and knowledge, and so on; the Universal Library / “Ruliad” does contain lots of those (all of them, in fact), but it provides no help in finding them, or in differentiating them from the obviously or subtly incorrect, implausible, and incoherent ones.

There is certainly math that one can do about the complete graph over the set of all strings, and various subgraphs of that graph. But that math will tell you very little about the propositions that those strings express. It’s not clear that Wolfram realizes the difference, or realizes just how much the utter generality of “the Ruliad” paradoxically simplifies the things one can say about it.

For instance, one of the few examples that Wolfram gives in the essay linked above, of something concrete that one might study concerning “the Ruliad” itself, is:

But what about cases when many paths converge to a point at which no further rules apply, or effectively “time stops”? This is the analog of a spacelike singularity—or a black hole—in the ruliad. And in terms of computation theory, it corresponds to something decidable: every computation one does will get to a result in finite time.

One can start asking questions like: What is the density of black holes in rulial space?

It somewhat baffles me that he can write this. Since “the Ruliad” represents the outputs of all possible programs, the paths of all possible transition rules, and so on, there can be no fixed points or “black holes” in it. For any point, there are an infinite number of programs / rules that map that point into some other, different point. The “density of black holes in rulial space” is, obviously and trivially, exactly zero.

He also writes, for instance:

A very important claim about the ruliad is that it’s unique. Yes, it can be coordinatized and sampled in different ways. But ultimately there’s only one ruliad.

Well, sure, there is exactly one Universal Library, one set of all strings, one complete graph on the integers. This is, again, trivial. The next sentence is just baffling:

And we can trace the argument for this to the Principle of Computational Equivalence. In essence there’s only one ruliad because the Principle of Computational Equivalence says that almost all rules lead to computations that are equivalent. In other words, the Principle of Computational Equivalence tells us that there’s only one ultimate equivalence class for computations.

I think he probably means something by this, well maybe, but I don’t know what it would be. Obviously there’s just one “result of following all possible computational rules in all possible ways”, but it doesn’t take any Principle of Computational Equivalence to prove that. I guess maybe if you get to the set of all strings along a path that starts at one-dimensional cellular automata, that Principle makes it easier to see? But it’s certainly not necessary.

He also tries to apply terminology from “the Ruliad” to various other things, with results that generally turn out to be trivial truths when translated into ordinary language. We have, for instance:

Why can’t one human consciousness “get inside” another? It’s not just a matter of separation in physical space. It’s also that the different consciousnesses—in particular by virtue of their different histories—are inevitably at different locations in rulial space. In principle they could be brought together; but this would require not just motion in physical space, but also motion in rulial space.

What is a “location in rulial space”, and what does it mean for two things to be at different ones? In ordinary language, two things are at different points in “rulial space” if their relationships to other things are not the same; which is to say, they have different properties. (Which means that separation in physical space is in fact one kind of separation in “rulial space”, we note in passing.) So this paragraph says that one human consciousness can’t get inside another one, because they’re different in some way. And although you might somehow cause them to be completely identical, well, I guess that might be hard.

This does not seem like a major advance in either psychology or philosophy.

Then he gets into speculation about how we might be able to communicate between “different points in rulial space” by sending “rulial particles”, which he identifies with “concepts”. The amount of hand-waving going on here is impressive; Steve’s plea for a falsifiable claim is extremely relevant. In what way could this possibly turn out to be wrong?

(It can, on the other hand, easily turn out to be not very useful, and I think so far it’s doing a good job at that.)

He also proceeds, hands still waving at supersonic speed, to outline a Kantian theory that says that, although “the Ruliad” contains all possible laws of physics, we seem to live in a universe that obeys only one particular set of laws. This, he says, is because “for observers generally like us it’s a matter of abstract necessity that we must observe general laws of physics that are the ones we know”.

What “observers like us” means there is just as undefined as it was when Kant wrote the same thing only with longer German words. He goes on like this for some time, and eventually writes:

People have often imagined that, try as we might, we’d never be able to “get to the bottom of physics” and find a specific rule for our universe. And in a sense our inability to localize ourselves in rulial space supports this intuition. But what our Physics Project seems to rather dramatically suggest is that we can “get close enough” in rulial space to have vast predictive power about how our universe must work, or at least how observers like us must perceive it to work.

which is basically just gibberish, on the order of “all we have to do is find the true physics text in the Universal Library!”.

It’s hard to find anyone but Wolfram writing on “the Ruliad” (or at least I haven’t been able to), but the Wolfram essay points to an arxiv paper “Pregeometric Spaces from Wolfram Model Rewriting Systems as Homotopy Types” by two authors associated with Wolfram Research USA (one also associated with Pompeu Fabra University in Barcelona, and the other with the University of Cambridge in Cambridge, and one does wonder what those institutions think about this). That paper notably does not contain the string “Ruliad”. :)

I may attempt to read it, though.

2022/04/29

God is not a source of objective moral truth

I mean, right?

I’ve been listening to various youtubers, as mentioned forgetfully in at least two posts, and some of them spend considerable time responding to various Theist, and mostly Christian, Apologists and so on.

This is getting pretty old, to be honest, but one of the arguments that goes by now and then from the apologists is that atheists have no objective basis for moral statements; without God, the argument goes, atheists can’t say that torturing puppies or whatever is objectively bad. Implicit, and generally unexamined, is a corresponding claim that theists have a source of objective moral statements, that source being God.

But this latter claim is wrong.

What is an objective truth? That is a question that tomes can be, and have been, written about, but for now: in general an objective statement is one that, once we’re clear on the meanings of the words, is simply true or false; a statement on which there is a fact of the matter. An objective truth is such a statement that is in fact true. If Ben and I can agree on what an apple is, which bowl we’re talking about, what it means to be in the bowl, and so on, sufficient to the situation, then “there are three apples in the bowl” is objectively true, if it is. If Ben insists that there are six apples in the bowl, and we can discover that for some odd reason Ben uses “an apple” to refer to what we would think of as half an apple, we have no objective disagreement.

What is a moral truth? Again, tomes, but for now: a moral truth is (inter alia) one that provides a moral reason for action. A typical moral truth is “You should do X” for some value of X. In fact we can say that that (along with, say, “You should not do X”) is the only form of moral truth. No other fact or statement has moral bearing, unless it leads to a conclusion about what one should do.

(We will take as read the distinction between conditional and categorical imperatives, at least for now; we’re talking about the categorical imperative, or probably equally well about the “If you want to be a good person, you should X” conditional one.)

What would an objective moral truth look like, and where would it come from? We would have to be able to get to a fact of the matter about “You should do X” from things about which there are facts of the matter, modulo word meanings. The theist is almost certainly thinking that the argument is simple and looks like:

  • You should do what God wants,
  • God wants you to do X,
  • You should do X.

Since we’re talking about whether the theist’s argument works, we stipulate that God exists and wants you (me, us, etc.) to do X for some X. And if we should do what God wants, we should therefore do X.

But is it objectively true that we should do what God wants?

If I disagree, and say that I don’t think we should do what God wants, the theist can claim that we differ on the meanings of words, and that what they mean by “should do” is just “God wants you to do”. But that’s not very interesting; under those definitions it’s just a tautology, and “you should do X” turns out not to be a moral truth, since “should do X” may no longer be motivating.

To get further, the theist will have to claim that “God wants you to do X” implies “You should do X” in the moral sense of “should”; that it’s objectively motivating. And it’s not clear how that would work, how that claim is any stronger than any other. A utilitarian can equally say “X leads to the greatest good for the greatest number” is objectively motivating, a rule-utilitarian can say that “X follows the utility-maximizing feasible rules” is objectively motivating, and so on.

(“You should do X because God will punish you if you don’t” can be seen as objectively motivating, but not for moral reasons; that’s just wanting to avoid punishment, so not relevant here.)

Why would someone think that “You should do what God wants you to do” is any more objectively true than “You should do what maximizes utility” or “You should do what protects your family’s honor”? I don’t find myself with anything useful to say about that; because they grew up hearing it, or they’ve heard it in Church every Sunday or whatever, I suppose?

So that’s that. See title. :) Really we probably could have stopped at the first sentence.

2022/04/10

How about that Kalam argument?

While we’re talking about philosophical arguments for the existence of God, we should apparently consider the so-called Kalam argument.

In its simplest form it’s nice and short:

  1. Whatever begins to exist has a cause of its beginning.
  2. The universe began to exist.
  3. Therefore, the universe has a cause to its beginning.

This is, obviously, an argument for the existence of God only if God is defined as “a cause of the beginning of the universe” and nothing further, which doesn’t seem all that significant, but still. There are further associated arguments attempting to extend the proof more in the direction of a traditional (i.e. Christian) God, being “personal” and all, but let’s look at the simple version for now.

I think it’s relatively straightforward that the conclusion (3) follows from the premises (1) and (2), so that narrows it down. Now, are (1) and (2) true?
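Spelled out semi-formally (my own notation, not anything from Craig), with B(x) for “x begins to exist” and C(y, x) for “y is a cause of x’s beginning”, the argument is just universal instantiation plus modus ponens:

\[
\forall x\,\bigl(B(x) \rightarrow \exists y\, C(y, x)\bigr),\qquad B(u)\ \ \vdash\ \ \exists y\, C(y, u)
\]

where u is the universe; so the interesting questions really are all about the premises.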

First we should figure out what we mean by “the universe”, because that matters a lot here. Three possible definitions occur immediately, in increasing size order:

(U1) All of the matter and energy that’s around, or has been around as far back in time as we can currently theorize with any plausibility. All of the output of the Big Bang, more or less.

(U2) Anything transitively causally connected in any way to anything in U1. Everything in the transitive closure of past and future light-cones of me sitting here typing this (which is, at least arguably, the same as everything in the transitive closure of past and future light-cones of you standing there reading this).

(U3) Anything in any of the disjoint transitively-causally-connected sets of things that are picked out in the same way that U2 is, starting from different seed points that aren’t transitively-causally-connected. The “multiverse”, if you will, consisting of all those things that aren’t logically (or otherwise) impossible.
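One toy way to picture U2 and U3 (entirely my own illustration, with made-up event names): treat “directly causally touches” as edges in a graph; U2 is then the connected component containing us, and U3 is the union of all the components, however many causally disjoint islands there happen to be.

    from collections import deque

    def component(links, start):
        """Everything transitively causally connected to `start`.
        `links` maps each event to the events it directly touches."""
        seen, queue = {start}, deque([start])
        while queue:
            event = queue.popleft()
            for other in links.get(event, ()):
                if other not in seen:
                    seen.add(other)
                    queue.append(other)
        return seen

    # A toy "multiverse": two causally disjoint islands of events.
    links = {
        "big_bang": {"me_typing"},
        "me_typing": {"big_bang", "you_reading"},
        "you_reading": {"me_typing"},
        "other_seed": {"other_event"},
        "other_event": {"other_seed"},
    }

    U2 = component(links, "me_typing")  # the island that contains us
    U3 = set().union(*(component(links, e) for e in links))  # all the islands
    print(sorted(U2))  # ['big_bang', 'me_typing', 'you_reading']
    print(sorted(U3))  # U2 plus the other, causally disjoint island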

It’s interesting to note here that “the universe” as used in the Kalam can be at most U1. This is because nothing outside of U2 can be causally connected to, can create or cause or otherwise have any effect at all on, anything in U2. Anything that claims to be a cause of U2 or U3 is either not actually a cause, or is part of U2 or U3 by virtue of being causally connected to it.

This works, I think, via (2) in the argument above; U1 might plausibly be said to have begun to exist, but it’s hard to see how U2 or U3 could.

Or, I dunno, is that true? We can certainly imagine that U2, that is, this universe right here, somewhat broadly construed but still undeniably this one, did just start up at some time T0. It could, I suppose, turn out to be a fact that at all times T >= T0 there are some facts to write down about this universe, but at times T < T0 there simply aren’t.

The reaction of Kalam proponents to that suggestion seems to be just incredulity, but in general I don’t see anything wrong with the idea; a universe simply coming into being doesn’t seem logically contradictory in any way. We can certainly write down equations and state transitions that have a notion of time, and that have well-defined states only at and after a particular time; it’s not hard.
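For instance (a deliberately minimal made-up example, nothing more): let the state be a single number, set

\[
x(T_0) = \tfrac{1}{2}, \qquad x(t+1) = 4\,x(t)\bigl(1 - x(t)\bigr) \quad \text{for all } t \ge T_0,
\]

and simply decline to define x(t) for t < T_0. The system has a perfectly definite history from T_0 onward, and there is no fact of the matter about any earlier state, because there is no earlier state; nothing about that is logically contradictory.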

So I guess, even if the Kalam must mean U1 by “universe” even in its first premise, (1), there’s no strong reason to think that (1) is true even then. This universe right here, this collection of matter and energy, could have just sprung into existence eight billion years ago or whatever, without any particular cause. Why not?

Premise (2) is less ambitious, and therefore more plausible. Did this particular batch of matter and energy, U1, begin to exist at some time? Could be. I mean, I can’t prove it or anything, and neither can anyone else, but I might be willing to stipulate it for the sake of argument.

(Even U2 might have, although the Kalam proponent probably has to disagree with that: since they want to have a backwards-eternal God creating U1, that means that that God counts as part of U2, which means that U2 is backwards-eternal, and never came into being. So the Kalam folks are still stuck with U1.)

U3 has the interesting property that it doesn’t have a common clock, even to the limited relativistic extent that U1 and U2 have common clocks. Since U3 contains disjoint sections that have no causal connections to each other, it’s not really meaningful to speak of the state of U3 at “a” time, so referring to it beginning to exist (i.e. at “a” time) turns out not to really mean anything. I think that’s neat. :)

If we’re willing to stipulate (1) and (2) as long as “the universe” means only U1, the conclusion isn’t very powerful; we find out only that this particular batch of matter / energy that existed shortly after the Big Bang (or equivalent) must have been caused by something. And fine, maybe it was, but if it was that something was just some earlier and likely quite ordinary piece of U2. Calling that “God” just because it happens to be so long ago that we can’t theorize about it very well seems very far removed from what “God” is usually supposed to mean.

I’ve read various things on the Kalam argument, including the Craig piece linked above, and the counterarguments offered in its defense both don’t seem to reflect much understanding of physics and cosmology, and are mostly of the “proof by incredulity” variety; Craig writes, for instance,

To claim that something can come into being from nothing is worse than magic. When a magician pulls a rabbit out of a hat, at least you’ve got the magician, not to mention the hat! But if you deny premise (1′), you’ve got to think that the whole universe just appeared at some point in the past for no reason whatsoever. But nobody sincerely believes that things, say, a horse or an Eskimo village, can just pop into being without a cause.

— William Lane Craig

“Worse than magic” is hardly a logical argument, it’s just ridicule. And to state as a raw fact that no one seriously believes the argument one is attacking is, again, content-free. (The bit about Eskimo villages is a silly evasion; what may have come from nothing is for instance an unimaginably hot and dense ball of energy, not a horse. But even for a horse, expressing incredulity that one might appear spontaneously is not a logical argument; more work is required!)

This reminds me of the rather popular fundamentalist Christian statement that everyone knows deep down that God exists, and atheists are simply in denial. This is of course false and silly.

This also reminds me, now that I think of it, of an excellent lecture that I saw the other week, “God is not a Good Theory“. Among other things, the speaker here makes a similar move to my positing a universe that simply springs into being and seeing no contradiction in it; he describes various simple universes and shows that they can be explained perfectly well with no reference to any external God. “All I need to do is invent a universe that God does not play a role in” (a bit before the 10 minute mark). He also talks about the issue of causes with respect to the universe, and briefly mentions the Kalam. Definitely worth a listen.

On the Kalam in general, then, I find it extremely non-compelling. It doesn’t even have a sort of verbal paradox in it to have fun with, the way the Ontological argument does; it’s just weak. So I do wonder why it’s so popular. Thoughts in the comments are most welcome.

2022/04/02

Why the Ontological Argument doesn’t work

Back in the Rocket Car posting, we (following ol’ Gaunilo) showed, via a kind of reductio ad absurdum, that the Ontological Argument for the Existence of God doesn’t work (unless I have a really cool rocket car in my basement, which does not appear to be true).

Reductio arguments of this kind can be a little unsatisfying, because they just show that a thing is false, by showing that it being true would imply other things being true that we aren’t prepared to say are true. But they don’t tell us how the thing is false; in this case, the lack of a Z2500 Rocket Car in my basement doesn’t tell us how the argument fails, only that it fails.

But the other day, somewhere, I saw hints of an old refutation of the Ontological Argument that showed where it went wrong. I only glimpsed a few words of it, while looking for something else, and then forgot where or what it was, but a while later my brain said, “Hey look, I bet this is what that argument was saying!”, so here is that subconscious reconstruction. If anyone knows who made this argument, or an argument like it, anciently, do let me know!

Conversationally, the Ontological Argument goes something like:

A: Let’s define ‘God’ as that entity which has all perfections.

B: Okay.

A: Now, existence is a perfection, therefore since God has all perfections, God has existence, ergo God exists.

B: Wow!

The present argument against the argument changes the conversation, by having B point out problems in the underlying frame:

A: Let’s define ‘God’ as that entity which has all perfections.

B: We should be careful here, since there might not be any such entity. Let’s say instead that ‘God’ is defined as that entity which, if it exists, has all perfections.

A: Why do we have to do that? I can define ‘Humpty’ as a square circle, and that definition holds even though there are no square circles.

B: Not really. If we define Humpty simply as a square circle, then if someone says “there are no square circles,” we can reply “sure there is; there is Humpty!”, and that’s wrong. It’s better to say that, strictly speaking, Humpty is a thing that, if it exists, is both a square and a circle. If it doesn’t exist, then of course it’s neither a square nor a circle, so we can hardly define it that way.

A: Hm, Oh. Well, if we define ‘God’ as something which … I guess … has all perfections if it exists, and then note that existence is a perfection —

B: We can conclude that God exists, if it exists! Much like everything else, really. :)

A: Wait, no…

The underlying observation here is that, strictly speaking, when we define or imagine something, we are defining or imagining the properties that that thing would have if it existed. If it doesn’t exist, of course, it has no properties at all. So when we imagine a seven-storey duck, we are imagining what one would be like if it existed. We aren’t imagining what it’s really like, because it doesn’t really exist at all, so it isn’t like anything; it isn’t a duck, doesn’t have seven storeys, and so on.

Therefore when we define God as having all perfections, we are actually saying that for any property which is a perfection, God would have that property if God exists.

And then the conclusion of the Ontological Argument will be just that God exists, if God exists; and that isn’t very interesting.
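Put semi-formally (my own shorthand, not anything from Anselm or his critics): writing E for the property of existing, P for the set of perfections, and g for God, the careful version of the definition is

\[
E(g) \;\rightarrow\; \forall \pi \in \mathcal{P} :\ \pi(g),
\]

and adding the premise that existence is a perfection, \(E \in \mathcal{P}\), licenses only the instance

\[
E(g) \;\rightarrow\; E(g),
\]

a tautology; the existence claim never escapes from the antecedent.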

This isn’t an utterly formal (dis)proof, but I find it attractive.

2022/03/02

Wandering Dazed Through Everything

First of all, I’m sick. Three COVID tests over three days are all negative, so probably not COVID, but still. I’d rather not be sick. It started over the weekendish, and is gradually getting better.

Other than being sick, and therefore sleeping a lot, I’ve been doing not much more than generating more and more and more and more images on ol’ NightCafe. They have cleverly rate-limited the three-credit bonus for twittering (or, as it turns out, Instagramming) creations to one per hour, so I no longer have an infinite number of credits, but they are well worth a dime or two each.

Oh, I also reviewed another book. It was… well, you can read the review. :)

Otherwise I have been generating lots and lots and lots and lots more images, and wondering at them. I feel like I want a huge coffee-table book of them to page through, or a vast gallery of them arranged by a thoughtful curator. And on the other hand I also feel that I’m plateauing slightly in my fascination, in a way, and that they haven’t been … surprising me as much lately. We’ll see how that goes!

There are lots of gorgeous complex maximalist images from it in the Twitter (and my own Yeni Cavan is quite maximalist for all of that), but what I’ve been most struck by lately are the small and simpler things, in the spirit of Pencil on White the other day. So here are some of them, pretty much randomly. Some of them result from prompts that are only about style, not content, so the AI is free to use whatever content it thinks is likely. Some are from vaguely suggestive prompts, some abstract, some in French. :) I’ll see if I can get WordPress to lay them out more interestingly than one per row…

I observe that (1) this WordPress “gallery” control is kind of awkward and non-optimal, and (2) this particular “Illustration, pencil on paper” prompt tends to produce odd African faces sometimes; I wonder what that tells us about the AI and the training set.

Part of the reason, I think, that I want to wander among these gazing for many hours, is the feeling that there must be a whole story, probably an interesting one, or several interesting ones, behind each of the images. If that turns out not to be true, or one comes not to believe that it’s true, it might significantly reduce the fascination. Or can one just gaze and make up one’s own back stories for each and every image?

Those four are “Monochrome print of…” but you’ll have to click through to see the individual titles; the captions on the WordPress gallery control were overlaying too much of the images.

See that rather creepy result from “Colored pencil on paper” up there? Well, that’s the least creepy result I’ve gotten from that prompt all by itself. I don’t know what that means at all. Is there a whole bunch of creepy colored pencil on paper body-horror stuff in the training set? Or is it some strange local maximum that happened to form in the neural net? Mysteries!

The captions interfere with the images there a bit, at least in this view, but YOLO, eh? I feel really torn by these pictures, between being fascinated by the thought of the artist looking out over the valley from their shack on a cloudy afternoon, and then feeling betrayed because there was no artist, and then feeling that they come from an amalgamation of all the artists who created the AI’s training set in all their separate times and places, and finally that they are as fascinating as the accidental (or not!) patterns in the water threading between the rocks and barnacles as the tide comes and goes.

To finish up for tonight, we just show off that it knows some French, as generative AIs trained on as much as possible of what was lying around tend to be casually multi-lingual by accident.

(I don’t know why it’s made that last image so gigantic; apologies if it does that for you also and is disturbing.)

For some other time, I’ve also generated some sets like images from Leonard Cohen lyrics (there’s a crack in everything, that’s how the light gets in), from the World Tree (all sepia), book covers (did I already post some of those?), the wonders of Xyrlmn, cute Xenobots, and some other things. I feel like I should post all of them! And also that they can as easily be allowed to slip away relatively unrecorded.

In the meantime, we wander between the pictures, turn the pages, stroll the galleries, and let the patterns touch our minds.

2021/10/23

Can even an omniscient, omnipotent God have certain knowledge?

Or even an omniscient, omnipotent, omnibenevolent one, if that isn’t a contradiction (it seems to me to be, but I’m not certain).

First, we should establish what we mean by “certainty” or “certain knowledge”. An initial (but wrong) attempt would be that a subject S knows a proposition P “with certainty”, or “certainly”, or “has certain knowledge of P”, if S knows that P, and P cannot be false, in the sense that it is necessarily true, or true in all possible worlds, or in general that necessarily-P, in some relatively ordinary modal logic.

This is wrong, because it implies that all knowledge of necessary truths is certain knowledge. But that’s not the case; for instance I know that Khinchin’s theorem (the one with Khinchin’s constant in it) K, “for almost all real numbers x, the coefficients of the continued fraction expansion of x have a finite geometric mean that is independent of the value of x”, is true. But my knowledge is not at all certain; my only evidence for it being true is basically the Wikipedia page and a tweet, and that is far from sufficient for certainty.
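(For the curious, here is a quick numerical poke at the theorem, entirely my own sanity check: with ordinary double-precision floats only the first dozen or so coefficients are trustworthy, so treat the output as suggestive rather than as evidence, let alone certainty.)

    import math
    import random

    def cf_coefficients(x, n):
        """First n coefficients of the simple continued fraction of x."""
        coeffs = []
        for _ in range(n):
            a = math.floor(x)
            coeffs.append(a)
            x -= a
            if x == 0:
                break
            x = 1.0 / x
        return coeffs

    random.seed(1)
    x = random.random()                  # "almost all" reals qualify; a random one probably does
    coeffs = cf_coefficients(x, 15)[1:]  # drop a_0, which is just floor(x)
    geo_mean = math.exp(sum(math.log(a) for a in coeffs) / len(coeffs))
    print(geo_mean)  # should drift vaguely toward Khinchin's constant, about 2.685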

One could describe all sorts of situations in which the theorem might be false and still be in Wikipedia and on Twitter: the known proofs of the theorem might have a subtle flaw, the whole thing could be an internet prank, etc. Now if the theorem is actually true, it’s necessarily true, so the worlds described by those situations aren’t possible worlds (in the rather strong sense of mathematical or logical possibility), but they are “for all that I know possible”, and that’s the relevant property here.

In my undergraduate thesis I called this superset of possible worlds “conceivable worlds”, and that seems like a good-enough term for this little essay. The basic idea is that S knows P (for reasons R) iff S believes P, and P is true in all reasonably nearby conceivable worlds in which S believes P for R. (“Reasonably nearby” isn’t directly relevant here yet, but you get the idea.)

Note that here I’m saying only that I can’t be certain that K is true. I can definitely know that it’s true, and I claim that I do know that; it’s an assumption in this discussion that knowledge doesn’t require certainty.

So, if it’s not enough for certainty that S knows P, and that P is true in all possible worlds, what does give certainty? Is it that S knows P, and P is true in all conceivable worlds? That seems plausible. Perhaps equivalently, the criteria could be that S knows P, and that S is not required, as a responsible epistemic agent, to entertain any suggestion that P might be false. S would be reasonable, to put it another way, to risk absolutely anything at all on the bet that P is true.
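In symbols (my own shorthand, nothing standard): with C the set of conceivable worlds, N a subset of C containing the reasonably nearby ones, and B_w(S, P, R) meaning that S believes P for reasons R in world w, the two notions being contrasted are roughly

\[
\mathrm{Knows}(S, P, R) \;\equiv\; B(S, P, R) \;\wedge\; \forall w \in N :\ \bigl(B_w(S, P, R) \rightarrow P_w\bigr)
\]
\[
\mathrm{Certain}(S, P) \;\equiv\; \bigl(\exists R:\ \mathrm{Knows}(S, P, R)\bigr) \;\wedge\; \forall w \in \mathcal{C} :\ P_w
\]

so certainty quantifies over every conceivable world, not merely the reasonably nearby ones.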

To say that we are never certain of anything (and I don’t think we are), is to say that it’s never responsible to completely ignore any possibility that any given belief is false. We can certainly ignore contrary evidence in some circumstances; if I get spam about exciting new evidence that the Earth is flat, I can go phht and ignore it, because there are so many false claims like that around, and comparatively little rides on the issue. But there are other circumstances where this would be irresponsible; if some being asks me to bet a dollar against the lives of every person in Maine, that the Earth is not flat, it would be irresponsible of me to accept. That is, perhaps equivalently, my Bayesian prior on the Earth being flat is not quite zero; if it were exactly zero, it would be irrational not to take that bet and win the dollar.
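To put rough numbers on that intuition (the figures are mine, purely for illustration): write ε for my prior that the Earth is flat and L for the disvalue of a lost life, and the expected value of accepting the bet is roughly

\[
(1 - \varepsilon)\cdot(\$1) \;-\; \varepsilon \cdot \bigl(1.3 \times 10^{6}\bigr)\, L .
\]

For any remotely sane value of L, the second term swamps the first unless ε is exactly zero; and if ε really were exactly zero, declining would just be throwing away a free dollar.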

For simpler necessary truths, like “two plus two is four”, it seems that one might claim certainty. But would it be responsible to bet a dollar against 1.3 million human lives, on the assumption that one is not somehow mistaken, or has been hypnotized into using the wrong names for numbers, or something so peculiar that one hasn’t thought of it? I think it’s pretty clear that the answer is No; weird stuff happens all the time, and risking lives on the bet that there is no sufficiently weird stuff going on in this case isn’t warranted. So, basically, we can never be certain.

(Another possible reading of “S knows P with certainty” or “S is certain that P” would be something like “S knows that P for reasons R, and in every possible / conceivable world in which S knows P for R, R is true, and furthermore S knows those preceding things”. I think it’s relatively clear that the “something weird might still be going on” argument applies in this case as well, and since something really weird might be going on for all we know, and mere knowledge is not certainty, we also never have certainty under this modified definition.)

Having more or less established what certainty is, and that we ordinary mortals don’t have it, we can now ask whether an omniscient, omnipotent (and optionally omnibenevolent) being G can be certain of anything. At first blush it seems that the answer must be Yes, because being omnipotent G can do anything, and “being certain” is a thing. But this is also like “creating a boulder so heavy that G cannot lift it”, so we ought to think a bit more about how that would work.

G is omniscient, which we can safely take to mean that for all P, G believes P if and only if P is true. I think we can also safely grant that for all P, G knows P if and only if P is true. That is, for all and only true propositions P, G believes P, and in every reasonably nearby conceivable world in which G believes P, P is true. (Note that I’m mostly thinking these thoughts as I go along, and have just noticed that I have nothing in particular to say about the reasons R for which G believes P. We’ll see if we need to think about that as we proceed.)

Since G knows P, and G is omniscient, G also knows that G knows P, and so on. But can G be certain of one or more propositions P? Is God’s Bayesian prior 100% for all, or any, true propositions?

Just what is G’s evidence for any given P? How does God’s knowledge work? (Ah, apparently R is coming up almost immediately; that’s good!) G knows “I am omniscient,” so that’s a start. It seems that, given that, G could go from “I believe P” to “P” more or less directly. That feels rather like cheating, but let’s let G have it for the moment.

If G’s evidence for P is typically “I believe P” and “I am omniscient”, can G get certainty from that? We are often somewhat willing to grant “incorrigibility” to beliefs about one’s own beliefs, and while normally I’d say that only gets one to the level of knowledge, let’s stipulate for the moment that G can be certain “I believe P” for any P for which that’s true.

But what about “I am omniscient”? How does G know that? What is G’s evidence? Can they truly be certain of it?

There is a set of possible beings Q, each of which believes “I am omniscient”, but is mistaken about that. Some of them are quasi-omniscient, and know everything except for some one tiny unimportant detail D of their universe, of which they are unaware, and for the fact that they are not omniscient due to not knowing D. They believe two false propositions: not-D, and “I am omniscient”. They also believe an infinite number of other false propositions, including those of the form “not-D and 3 > 2” and so on, and perhaps some that are more interesting (depending among other things on just what D is).

Other members of Q are simply deluded, and believe “I am omniscient” even though their knowledge of their universe is in fact extremely spotty, even less than the average human’s, and they have just acquired the belief “I am omniscient” through trauma or inattention or that sort of thing.

Now, how does G know, how is G certain, that G is not in the set Q? This seems like a hard question! For any reasoning that might lead G to believe that G is not in Q, it seems that we can imagine a member of Q who could reason in the same way, at least in broad outline. G can do various experiments to establish that a vast number of G’s beliefs are in fact true, but so can a member of Q. The quasi-omniscient in Q can do experiments not involving D, and the deluded in Q can do experiments but then be mistaken about or misinterpret the results, or unconsciously choose experiments which involve only members of their comparatively small set of true beliefs. G can follow various chains of logic that imply that G is not in Q, but so can many members of Q. The latter chains of logic are invalid, but how can G be certain that the former aren’t?

One conventional sort of being G is one that not only knows everything there is to know about the universe, but also created it in the first place, and in some cases resides outside of it, sub specie aeternitatis, observing it infallibly from the outside. Such a G has a comparatively easy time being omniscient, but can even this G be certain that they are? Again, we can look at various members of Q who also believe that they created the universe, exist outside of it able to see and comprehend it all at once, etc. They have, at least in broad terms, the same evidence for “I am omniscient” that G does, but they are mistaken. The quasi-omniscient created D by accident, perhaps, or created it and then forgot about it, and when looking in from the outside fail to notice it because it is behind a bush or equivalent. The deluded in Q are simply mistaken in more ordinary ways, stuck in a particular sort of psychedelic state, and so on. Those in Q who are in-between, are mistaken for in-between sorts of reasons.

There may be many ways that G can determine “I am not in Q” with relatively high reliability. One can imagine various checks that G can do to confirm “I am not like that” for various subsets of Q, and we can easily grant I think that G can come to know “I am not in Q” and “I am omniscient.” But the question remains of how G can become certain of either of those. It seems that any method will fall victim to some sort of circularity; even if G attempts to become certain of a P by exercising omnipotence, we can imagine a member of Q (or an enhanced Q-prime) who believes “I am omnipotent and have just exercised my omnipotence to become certain of P”, but who is mistaken about that. And then we can ask how G will determine “I am not like that” in a way that confers not merely belief or knowledge, but certainty.

We can imagine a skeptic’s conversation with G going from a discussion of G’s omniscience, to questions about the certainty of G’s knowledge of G’s omniscience, to questions about how G knows that G is not in some particular subset of Q, and ultimately to how G knows that G is not in some subset of Q that the discussants simply haven’t thought of, and to how G can be certain of that. It would seem that every “since I am omniscient, I know that there is no part of Q that we haven’t covered” can be called out as circular, and that “I know that I have certain knowledge of P” can always be countered by observing that knowledge does not imply certainty.

Another consideration (pointed out by a colleague who asked if I wasn’t anthropomorphizing God) is that I am assuming here that G forms beliefs and qualifies for certainty in roughly the same way(s) that we humans do. But shouldn’t we expect G’s ways of knowing and being certain to be very different from our mortal ways? Why should these human-based considerations apply to G at all? The response, again, is that members of Q, and in particular deluded human members of Q, could make exactly that same claim: that they come to know things with certainty through special means that aren’t accessible or understandable to mere humans, and to which none of this logic applies. And, again, if a deluded human member of Q could make this claim without in fact having certainty of anything, then G’s making this claim has no useful traction either, unless G can separately demonstrate not being in Q, which is exactly the point at issue.

To put it roughly and briefly, perhaps, the question comes down to how even G can be certain that G is not just a deeply deluded mortal human, who (in addition to lots of ignorance and false beliefs) is very good at self-deception. Even if G truly is omniscient, can G be certain of that? If so, how? Is it perhaps fundamental to the nature of knowledge and evidence, that no knowledge is ever certain knowledge? Is certainty, more or less by definition, a property that no knowledge can ever in fact have?

And is there anything more to this question than the perhaps rather casual and vaguely-defined (and/or perhaps very important) idea that however confident some being might be in their evidence for a belief, there might always be something weird going on?

That was fun!

2021/08/21

Interlude with Devil’s Lettuce

I haven’t gotten to the point of posting any more in my translation of that tiny piece of Bodhidharma that we’ve been working on, because I’ve been like working and playing Satisfactory and posting too much to r/zen and stuff. (Rumors at work suggest that we might be able to start going into Manhattan for work at least a few days a week starting as early as the second week of September, woot!).

But this other thing that occurred is kind of interesting, so I thought I’d write it down interlude-fashion here meanwhile. Before I like forget.

Marijuana (pot, weed, grass, THC, the Devil’s Lettuce, reefer, Mary Jane) is now legal, in some senses, in the State of New York, and a certain young relative and I went off into the local little park and up into the old quarry, and relaxing on a big rock overlooking the now-treelined main basin, we indulged.

Here is a photograph of my very nice vape “pen”, which is really mostly a battery. The pen was acquired probably without breaking many laws, by someone who travelled to nearby Massachusetts where it is legal to sell such things (it may be technically legal to sell such things in New York as well, but only under like a dispensary license that is not currently obtainable). The cartridge could potentially have been acquired that way as well.

This particular cartridge contains Sativa Blue, or perhaps Blue Sativa, but not I suspect Sativa Blue Dream. Or if I’ve gotten the pair of containers mixed up, it might contain Indica Blue, or perhaps Blue Indica, but given that I didn’t fall asleep when using it, I suspect that this didn’t happen. (Wouldn’t want to put the strain name on the actual cartridge or anything!!)

Today was the first day I’ve indulged out in nature since college, and the highest I’ve gotten since then, also.

It was extremely interesting!

My main memory from college is that, sort of oppositely from alcohol, marijuana made me feel like everything was light and hollow, insubstantial, like you could bat it up into the sky or burst it with a pin.

More recently, I’ve thought of it as focusing my mental attention down into like a small spotlight, so my mind isn’t always jumping around between things, and also can’t keep track of multiple things at once even if one might want it to, but focuses in singly on that single (potentially random) thing.

Today, as I was lying there talking about things with the relative, I was amused to find that I would be in the middle of a relatively long and complicated sentence, with no memory of how the sentence had started or how it was intended to end. But, I found, if I didn’t let that bother me, I could just be still and watch, and my voice would continue on with the sentence just fine, and I would find out what it was about.

That was interesting! And I had some thoughts about it that I want to write down here.

Brief lemma: I used to think that either (A) our inner experience and decisions cause our (our bodies’) actions, or (B) they don’t. (A) has against it that it’s unclear how that would work anyway, and that there are some interesting experiments that show the body starting to do a thing before (in time) the experiencing part of the brain has decided to do the thing. (B) has against it that what our bodies do has a strong correlation with what we experience and report; if our experience is just passive fizz on reality, how could reality come to contain things (like philosophical essays) that talk about experience?

I read somewhere, and I wish I could remember where, a beautiful and obvious-in-retrospect hypothesis that solves most of this: our inner experience and decisions don’t cause our bodies’ actions, nor do the actions cause the experiences, but they are still correlated because they both have a common cause. That is, some currently-mysterious process happens, and that process causes both the body motion, and the subjective experience. The process also (and the remaining mystery is here, around “how does it do that?”) gets feedback from both the bodily and conscious processes, so a later bodily action can for instance consist of the body writing down a rough description of the recent subjective experience.

This struck me as lovely. And now as I lie there and my body is fluently saying a long complex sentence that I personally have completely lost track of, I can see an approach to explaining this: the Devil’s Lettuce is interfering with the connection between the mysterious process and the subjective experience, but not interfering as much with the connection between the MP and the body doing things. So the body goes along doing things relatively well, and the subjective awareness is like “whoa, I’m lost.”

Similarly when I stood up, my subjective mind was like “yow I’m dizzy,” but my body was not unsteady on its (my) feet particularly at all. So again more interference with the subjective branch of the causal chain than the physical one. I’ve felt something similar when I forget to take my Effexor; not that it feels at all like being high, but that I feel like I’m dizzy, except without the dizziness. Which makes no sense at first, but makes more sense if we recast it as “my subjective I feels dizzy, but no message to that effect got to/from my body”.

That’s probably all for now. :) How long does one continue feeling effects after partaking in The Reefer? It was like five or six hours ago, and although most of the effects are gone, I still feel a bit more separate than usual from my body. Or something. It could be I’m just sleepy. :) And invigorated from the extremely hilly and rather rocky quarry park!

It was a good day. :)

2020/10/20

The Ontological Argument for the Existence of the Z2500 Rocket Car in my Basement

Let us define “the Z2500 Rocket Car” to be the most amazing rocket car in my basement that it’s possible to imagine.

We can use simple logic to discover various things about the Z2500 Rocket Car.

It is a rocket car, and it is in my basement, by definition. It is also really amazing as a direct consequence of the definition and our own experience, as it is possible to imagine some really amazing rocket cars in my basement.

Obviously it has a siren and flashing lights, and can not only drive really fast on land, but can also fly and travel on and under the water, since a rocket car that can do all these things is obviously more amazing than one that can’t.

Equally obviously, the Z2500 Rocket Car exists, because a rocket car that exists is clearly more amazing than one that doesn’t.

And beyond that, it’s impossible to conceive of it not existing! Because, again, a rocket car that it’s impossible to conceive of not existing, is more amazing than one that it’s possible to conceive of not existing.

So that’s very cool: there exists a rocket car in my basement that has a siren and flashing lights, can fly, etc. Woot!

There are two problems with this:

  • We don’t have a garage or anything, or even a cellar door, so I’m not sure how I’m going to get the Z2500 Rocket Car out of my basement. However, I’m pretty sure that I can leave that up to the car itself, because a car that could solve a problem like that by itself would clearly be more amazing than one that couldn’t.
  • Also, when I go down into my basement, I don’t see the Z2500 Rocket Car that is down there. Presumably this is because it’s invisible (a rocket car that can become invisible is obviously more amazing than one that can’t), but it also means that I can’t get into it and ride around in it right now, and that seems contradictory, since a rocket car that I can get into and ride around in, right now, would seem to be more amazing than one that I can’t.

Clearly further thought is needed.

With thanks, obviously, to old Gaunilo of Marmoutiers and his excellent island. It is sad that Anselm’s response to this is to ignore the island entirely, and just restate the original argument in different words. They weren’t really aware of rigorous logic back then (Hi, Aristotle!).

2015/12/14

Consciousness and the Verbal Bias

I have been privileged lately to be part of a little informal group that meets (twice now, I think) in a Chinese restaurant on the West Side and (more often but more diffusely) in email, and talks about the mysteries (or otherwise) of consciousness (whatever that is).

There is me, and Steve who used to have a weblog a million years ago, and some smart folks from Columbia University. We are all amateurs (although Steve threatens to lure in a professional philosopher in some capacity), but that may be a Good Thing.

(Extremely long-time readers of this weblog in its various incarnations, if there are any, may recall the ancient Problems of Consciousness pages that Steve and I did. Highly related!)


Here is a pile of books.

As we’ve talked about these things, I have for some reason found myself increasingly attracted to the “we are just passengers” approach to consciousness. Not so much as something to believe, but as something to think about.

The idea behind this approach (which may or may not be the same thing as “epiphenomenalism”) is that while subjective experience reflects what happens in the objective (or “physical” or “natural” or whathaveyou) world, it does not influence it in any way.

Subjective experience (and therefore “us”, if we identify with our subjective experiences) is just a passenger, an observer, and to the extent that we feel like we are making decisions and carrying them out, we are just mistaken. Either we are so constituted that we always (or almost always) decide to do that thing that our bodies were going to do anyway, or (perhaps more likely) we actually “decide” what to do a few milliseconds after our bodies do it; we make up stories quickly and retroactively to explain why we “decided” to do that.

What if this were true? We’d no longer have to worry about how the subjective realm has effects on the objective (it doesn’t). We may still have to worry about how subjective experience finds out what is happening in the objective world, but that’s always been the easier part; we can probably even say that well it just does, which is roughly what we say to someone who worries about why masses experience gravitational attraction.

We don’t really have to worry about Other Minds any more, either!  Or rather, we will be nicely justified in giving up on that entirely!  No way I’m going to be able to determine anything like objectively whether you, or any other physical system, has subjective experience, since subjective experience causes no discernible (or indiscernible) effects in the objective world; so I don’t need to feel guilty about not actually knowing, but just being content with whatever working hypothesis seems to result in the best parties and so on.

One puzzle that seems to remain in the We Are Just Passengers (perhaps better called the I Am Just A Passenger) theory, is why it should be the case that some of the things that my body says seem to reflect so accurately what I subjectively experience. If I have no effect on the objective world, why should this part of the objective world (the things my body says, writes in weblogs, etc) in fact correspond so well to how it feels to be me?

I started out thinking that it would be really interesting to see a theory about that: that would explain why objective biological bodies would tend to commit speech-acts that describe subjective experience, without that explanation including actual subjective experience anywhere in the causal chain.

Then like yesterday or something I had what may or may not be an insight: if a version of the Passenger theory can hold that I make my “decisions” by quickly rationalizing to myself the things that I (“subconsciously”) observe my body doing, why can’t it also hold that some significant part of what I “experience” is similarly made up just after the fact, as I retroactively experience (or remember experiencing) things corresponding to what I perceive my body saying (where “saying” here includes whatever unvoiced but subvocalized inner narration-acts occur)?

That is, how certain are we (am I) that we (where each “we” identifies with our individual subjective experience) actually cause the speech-acts that our bodies carry out? I don’t see any reason we should be particularly infallible about that, at least any more than we should be infallible about causing our bodies to do other things, like buying chicken instead of turkey, or going to the opera.

We have a bias toward verbal behaviors, I will suggest, and tend to assume (without any really very good reason) that verbal behaviors reflect the contents of subjective experience more fully or more reliably than other kinds of behaviors do.

Of the various studies that have been done suggesting that our bodies start to do things before “we” have actually “decided” to do them, I recall (without actually going back and looking, because yolo) that the experimenters more or less assumed that verbal behaviors reflected the activity of subjective consciousness, whereas other body behaviors (and neural firings and so on) reflected mere physical stuff.  But why assume that?

In the extremely surreal and fascinating phenomenon of blindsight, a person will (for instance) claim verbally not to be able to see anything to the right, but will pretty reliably catch (or avoid) a ball tossed from the right side.

This is pretty much universally described as a case where our bodies can react to something (the ball from the right) that we don’t have conscious awareness of.

But why is this the right description? One thing the body does (the catching or avoiding) indicates awareness of the ball, and another thing the body does (the saying “no, I can’t see anything on that side”) indicates a lack of awareness.

Why do we assume that the verbal act reflects the contents of subjective awareness, and the other behavior doesn’t?

If someone couldn’t speak, but could catch a ball, we would generally not hesitate to say that they had subjective awareness of the ball.

But if the person does speak, and says things about their subjective awareness, we take that saying as overriding the non-verbal behaviors.

Could we be wrong?

The two main other kinds of things that might be happening are: that the person has subjective awareness of the ball, but for some reason the speech parts of his body insist on denying the fact; or that there are two subjective consciousnesses here, and one (associated with the speech behaviors) is not aware of the ball, but the other (associated with the catching or avoiding) is.

The first of these seems weird because we aren’t used to thinking about verbal behaviors (at least from people) happening without consciousness. The second seems weird because we aren’t used to thinking (outside of split-brain cases) of two consciousnesses associated with the same person.

(Would it be terribly frustrating to be the non-verbal consciousness in the second case? Aware of the ball, catching the ball, experiencing those things, but unable to speak when asked about it, and unable to stop the bizarrely traitorous speech organs from denying it. Or maybe that consciousness is more deeply non-verbal, and doesn’t understand and/or doesn’t have any particular desire to respond to, the questions being asked.)

Hm, where did all of that get us? I think I’ve written down pretty much what I wanted to capture: the idea that even our own speech acts might be, not things uniquely caused by our subjective consciousnesses, but simply more things that happen in the world, which might or might not have any particular causal connection to subjectivity.  And, perhaps consequent to that, the idea that when we are developing theories about what other consciousnesses there might be out there in the world, we should watch ourselves for unwarranted bias toward speech acts over other behaviors.

Perhaps we can develop some good theory about why speech acts are in fact special in these ways, but I don’t have one at the moment, and I don’t know if anyone else has seen a need for one and written down any words in that direction.  (If you know of one, please let me know!)

And in the meantime, perhaps not assuming that speech-acts are special can help us reach some interesting places we would not otherwise have reached, or avoid some puzzles that would otherwise have puzzled us.

 

2015/11/04

Demographic substitution does not preserve truth

When I was in kid-school, a Social Studies teacher pointed out to us that there was no entry in the index of our textbook for “Women’s history” or “Women” in general.

I flipped through it and raised my hand, and said that hey, there was nothing for “Men’s history” or “Men”, either!

This is because I was a smug little shit who didn’t have the first clue how the world actually works.

(I like to think that this is a bit less true now.)

The teacher more or less adored me just because I was smart and (usually) well-behaved, and rather than giving me the smack-down I really needed, she (I vaguely recall) just said something like “It’s not the same thing”.

Which is entirely correct.

It’s easy to see why we might expect statements about one group to have the same status (truth, objectionability, etc.) as the same statements applied to another group.  In many contexts, there is basic fairness involved.  “Women should be able to participate in government” and “Men should be able to participate in government” are both true.  “Men should not be jerks” and also “Women should not be jerks”.  Or simple fact: “Most white people have toes”, and “Most people of color have toes”.

On the other hand, a few moments of thought reveals lots of statements for which this doesn’t work.  “Most pregnant people are women” is true; but “Most pregnant people are men” is false.  “Until comparatively recently, the law considered women to be essentially property” is true; but “Until comparatively recently, the law considered men to be essentially property” is false.  “Western society grants extensive privilege to white men per se” is pretty clearly true, but “Western society grants extensive privilege to disabled women per se” is implausible at best.

So far these examples are all either “ought” statements (or simple facts) that survive demographic substitution, or “is” statements that don’t.  But in any plausible morality, situated “ought” statements are implied by “is” statements about their situation, their context.

A very strong case could be made, for instance, that “Western society grants extensive privilege to white men per se”, and “Mainstream study of history has been from a heavily male-oriented perspective” are both true, and that as a result “It is unfortunate that there is no entry about women in the index of this history textbook” can be true, while “It is unfortunate that there is no entry about men in the index of this history textbook” is silly (because, as I vaguely recall my Social Studies teacher pointing out, the whole book is about that).

More significantly (and I imagine more controversially, although perhaps not among y’all weblog readers), there are sets of “is” statements that don’t survive demographic substitution, from which we can conclude that for instance “Women, people of color, and LGBTQ people have a legitimate need for safe spaces that exclude those not in the relevant group” is true, whereas “Men, white people, and straight people have a legitimate need for safe spaces that exclude those not in the relevant group” is not. Or in shorter words, Women’s Rights and Black Power are not necessarily in the same moral categories as Men’s Rights and White Power.

And I am happy to have written that down, because I’ve had the argument rattling around inchoate in my head for some years.

Now there are a significant number of people posting things on the Internet who would claim that the concluding sentence, that Women’s Rights and Black Power are not necessarily in the same moral categories as Men’s Rights and White Power, is just obviously false, and unfair, and sexist / racist, and so on. Some of them are, I imagine, smug little shits who don’t have the first clue how the world actually works; some others are just doing a good imitation.  To avoid the argument that we would use to get to the conclusion, they would either deny some of the initial “is” statements (denying that there is currently structural oppression of women or people of color, for instance), or deny in one way or the other that those statements imply the conclusion.

Or, perhaps more commonly, they would just repeat that the concluding sentence is sexist / racist, because what’s good for the goose is good for the gander, because fairness, and so on.  Because, that is, demographic substitution ought to preserve the truth of “ought” statements, and saying that it doesn’t is sexist / racist / etc.

What finally pushed me over the edge to write this down was some Twitter discussion of this rather baffling story on the often-odious “Breitbart” site, by the often-odious Milo somebody.  It’s still not clear to me what the intent of the story is, aside from a general suspicion that it’s supposed to be humorous in some way (I do like the part where someone asks what direction they’re driving, and someone else looks at the GPS and says “up”; that’s funny!).  But at least some of the Milo supporters in the Twitter thread that I foolishly walked into, thought that it was obviously a parody of feminist claims that various aspects of technology are gendered against women.

The argument would be, I guess, something like “I have written this piece claiming that an aspect of technology is anti-male, and the piece is silly; therefore other pieces, claiming that other aspects of technology are anti-female, are also silly.”  Or, perhaps more charitably, “See how silly this claim that a technology is anti-male is; claims that technologies are anti-female are similar to it, and are just as silly!”.

And this brought to mind some sort of claim like “It’s silly to analyze technology for signs of structural oppression of women, because it’s silly to analyze technology for signs of structural oppression of men, and demographic substitution preserves silliness!”.

But (whatever other additional things might or might not be going on in the case), demographic substitution doesn’t preserve silliness.  Or various other properties.

So there we are!

2015/02/03

A footnote in Kaufmann’s translation of “I and Thou”

I was struck just now to find, tucked away at the end of a footnote discussing the technical details of one of the many tricky bits of translation in Buber’s “I and Thou”, this paragraph from Kaufmann:

The main problem with this kind of writing is that those who take it seriously are led to devote their attention to what might be meant, and the question is rarely asked whether what is meant is true, or what grounds there might be for either believing or disputing it.

It is easy to read this as a sort of jarring Philistinism, as though Kaufmann is wondering wistfully (or grumpily) why Buber has to use all of these coinages and poetic turns of phrase, all of these images and metaphors, rather than laying out his argument clearly, in simple and common words, perhaps as a set of bulleted lists (maybe a PowerPoint deck!), so that one could analyze it logically and decide whether or not it’s likely to be true.

Which seems like a hysterically inappropriate thing to think, given that what Buber is doing here is laying out a particular way of thinking about the nature of reality and each individual’s relationship to God (or equivalent). A deeply personal way of seeing the world, that he invites the reader to consider, and (implicitly) to adopt or not according to taste.

This isn’t really a thing that admits of being true or false, or of being expressed in plain and simple words (or at least in words where “what is meant” is immediately evident without special attention being paid to the question).

For me at least, Buber is saying, “think of the things we do as divided into two kinds: the I-It and the I-You; then think of…”. This is in the imperative, and doesn’t admit of being true or false (or likely or unlikely).

And surely Kaufmann, being the translator of the silly thing, realizes this.

I see only three plausible theories here so far: that Kaufmann is just pulling our leg in this paragraph (which would be wonderful); that there is an entirely different way of understanding Buber under which the paragraph makes more sense (I would be very curious what that way is); and that Kaufmann really does fail to understand the material as anything more than muzzily-expressed truth-claims that, if only more concretely written down, one could study objectively in the lab (this seems both the most obvious, and in some way the least plausible, explanation).

It’s a funny world. :)

2014/12/30

Liebe ist ein welthaftes Wirken

Kaufmann translates this, from Buber’s “I and Thou”, as “love is a cosmic force”, but gives us the original in a footnote to see for ourselves.

One thing I like about German and how synthetic it is (in the technical sense that I just learned; I was going to say “agglutinative”, but that turns out to be wrong) is that you can look at the parts of many words, and see how the meaning compares to the sum of those parts.

The most simple-minded translation of that phrase might be “Love is a worldly work”, which has the same nice consonance of double-ues, but a very different sense, since the English “worldly” has strong connotations that are almost the opposite of Kaufmann’s “cosmic”.

It’s interesting that the translator chose “force” here, rather than the obvious “work” (which would have read a bit awkwardly), or perhaps “act”. Because Buber is talking about love in the context of “those who stand in it and behold in it”, “force” probably makes more sense than “act”, since you can stand in a force (a force field!), but not so much in an act.

But then I wonder why Buber wrote Wirken rather than, say, Kraft. And then I am at, or perhaps well beyond, the very end of my competence as a translator. :)

The other day the little daughter, watching me staring into my phone and clicking and swiping without end, commented more or less “you’re taking in so much content; I don’t know if that’s healthy”.

I found myself very much in agreement with that thought, and put the phone away (temporarily) and looked at various stacks of books sitting unread here and there, and picked up “I and Thou”, read the Acknowledgements and Translator’s Key, skipped Kaufmann’s very long Prologue (these things should generally be at the end of a book, in my ever so humble opinion, so that one can encounter the work itself with more or less fresh eyes, and then read the prologue-writer’s thoughts about it afterward, when one has already one’s own ideas to compare them to), and started very slowly into the work (Werk, Wirken, Kunstwerk?) itself.

It’s a very dense book, or feels like it deserves to be treated as such, which means that I have to be careful not to spend so much time on each sentence that I eventually drift off and do other things before I get past the first chapter.

As I tweeted not long after starting (and yeah, I know; somehow Twitter and the Face Book and now even plague have all taken up residence in my ways of relating to the world):

I can’t of course actually empty the cup, and I admit I’m not really trying all that hard to.

Currently, a few more pages in, I’m wondering if Buber will go from talking about the ineffable relating that is I-You (and that he identifies with, or as, love in some sense), to a realization that the duality present even in I-You (because after all there is still I, and You) is at some level an illusion. Because that would be so Buddhist.

There are no sentient beings,
And I vow to save them.

It will be interesting either way; if he does get to some kind of non-duality, I’m sure it will have a flavor all its own. If he doesn’t, it will be interesting to see if he simply stops short of it, actively considers and denies it, or goes off in some other direction entirely.

I’ve been meaning to read this book since college sometime :) and it’s nice to finally get to it.

Solstice was nice, thank you for asking, if a little atypical. All four of us were here together, but instead of the usual Christmas Dinner with ham an’ all, we went out to the local diner.

The story: M smelled gas in the basement, so on (I forget; maybe the 22nd) we had the gas man come and test things, and he found there was a leak somewhere in the kitchen range; and while we were moving the range out from the wall it got caught on something, and when we pushed on it a little to get it past the something, the entire glass front of the oven door very enthusiastically shattered into a zillion pieces and fell onto the floor.

That was exciting!

We called the appliance place who sent out a person who determined that the range was old enough to vote, and that no one makes parts for it anymore (either for replacing the door glass or fixing any possible leak).

A new range arrived yesterday and I have baked my first loaves of bread in it, but between the breaking of the old and the installing of the new we could cook only in the microwave and crockpot, and although we considered trying to design a satisfying Solstice dinner around those, in the end we decided the local Diner would be more fun.

And it was very nice.

How do Diners do it, by the way; anyone know? How can you have that enormous a menu of available things, and be able to produce absolutely any of them in a reasonably short span of time? Are they all designed to be producible from some smallish set of ingredients, and you keep those around and ready at all times? Do all of the chefs know how to make all of the things? Are there big recipe books? Or do they look at the menu when the order comes in, figure out what you are probably expecting, and wing it?