Archive for ‘philosophy’

2023/01/20

County Jury Duty

Well, that’s over! For another six years (for state / county / town) or four years (for Federal). This is probably going to be chatty and relatively uninteresting.

Top tip: park in the parking lot under the library; it’s very convenient to the courthouse (although you still have to walk outside for a bit, and it was windy and rainy yesterday).

I had to report originally on Friday (the 13th!) because Monday was MLK day. On Friday 60-70 of us sat around in a big auditoriumish jury room for a while, with WiFi and allowed to use our cellphones and everything. Then they called attendance and talked about random things like the $40/day stipend if our employer doesn’t pay us or we’re self-employed (where did that tiny amount of money come from, one wonders) and where to park and so on. Then we were basically allowed to decide whether to come back on Tuesday or Wednesday (although I imagine if you were far down the perhaps-random list and most people had said one, you had to take the other).

A cute isometric pixel-art image of a bunch of people waiting around in a large room. Note this does not accurately reflect the County Courthouse except in spirit. Image by me using Midjourney OF COURSE.

I elected to come back on Wednesday for no particular reason. We were originally supposed to arrive on Wednesday at 9:30am, but over the weekend they called and said to arrive at 11am instead. Due to an inconvenient highway ramp closure and a detour through an area of many traffic lights, I got there at 11:10 or so and felt guilty, but hahaha it didn’t matter.

In the big Jury Room again, the 30+ of us waited around for a long time, then were led upstairs to wait around in the hallway outside the courtroom, and then after waiting some more were ushered into the courtroom to sit in the Audience section, and introduced to the judge and some officers, and then dismissed until 2pm for lunch (seriously!).

Some time after 2pm they let us back into the courtroom and talked to us for awhile about how it was a case involving this and that crime, and might take up to a month to try, and the judge is busy doing other things on Mondays and Thursday mornings so it would be only 3.5 days / week. Then they called 18 names, and those people moved from the Audience section to the Jury Box section. They started asking them the Judge Questions (where do you live, how long have you lived there, what do you do, what do your spouse and possible children do, do you have any family members who are criminal lawyers, etc, etc), and we got through a relatively small number of people and it was 4:30pm and time to go home.

I had a bit of a hard time sleeping, thinking about what the right answers to The Questions would be (how many times have I been on a jury in all those years? did we deliberate? do I know anyone in Law Enforcement? does the NSA count? should I slip in a reference to Jury Nullification to avoid being on the jury, or the opposite?) and like that.

Since the judge is busy on Thursday mornings, we appeared back at the courtroom at 2pm on Thursday, and waited around for quite a while in the hallway, then went in and they got through questioning the rest of the 18 people in the Jury Box (after the judge asked the Judge Questions, the People and the Defense asked some questions also, although it was mostly discussions of how police officers sometimes but not always lie under oath, and how DNA evidence is sometimes right but not always, and how it’s important to be impartial and unbiased and so on, disguised as question asking).

Then they swore in like 6 of those 18 people, told the rest of the 18 that they were done with Jury Duty, and told the rest of us in the Audience section to come back at 9:30am on Friday (today!).

At 9:30 nothing happened for quite a while in the hallway outside the auditorium, then for no obvious reason they started to call us into the courtroom one person at a time by name. There got to be fewer and fewer people, and then finally it was just me, which was unusual, and then they called my name and I went in. The Jury Box was now entirely full of people, so I sat in the Audience Section (the only person in the Audience Section!).

Then I sat there while the judge asked the same ol’ Judge Questions to every one of the dozen+ people (I know, I don’t have the numbers quite consistent) ahead of me, and then finally, as the last person to get them, I got them. And the Judge went through them pretty quickly, perhaps because he’d said earlier that he wanted to finish with this stage by lunchtime, and I had no chance to be indecisive about the issue of following his legal instructions exactly and only being a Trier of Fact, or anything else along those lines.

Then we had another couple of lectures disguised as questions, plus some questions, from The People and The Defense. I’d mentioned the cat as someone who lived with me (got a laugh from that, but the Whole Truth, right?), and The People asked me the cat’s name and nature, and when I said it was Mia and she was hostile to everyone, The People thanked me for not bringing her with me (haha, lighten the mood, what?). And asked about my impartiality.

Now we’d had a bunch of people from extensive cop families say boldly that they couldn’t promise not to be biased against the defendant (and when The Defense I think it was asked if anyone would assume from The Defendant’s name on the indictment that He Must Have Done Something a couple people even raised their hands (whaaaaat)), and I perhaps as a result and perhaps foolishly said that while my sympathies would generally be with a defendant, I would be able to put that aside and be unbiased and fair and all.

So The People asked me if I could promise “100%” that I would not be affected by that sympathy, and I said quite reasonably that hardly any sentences with “100%” in them are true, and the judge cut in to say that he would be instructing the jurors to put stuff like that aside (implying that then I would surely be able to), and I said that I would (but just didn’t say “100%”) and then The People came back in saying that they need people who are “certain” they can be unbiased (so, no way), but then actually asked me if I was “confident” that I could be (a vastly lower bar) so I said yes I would.

And when all of that was over, they had us all go out to the hallway again, and wait for awhile, and then go back in to sit in the same seats. And then they had I think four of us stand up and be sworn in as jurors, and the rest of us could go out with the officer and sit in the big jury room again until they had our little papers ready to say that we’d served four days of Jury Duty.

And that was it!

My impression is that they were looking for (inter alia, I’m sure) people who either believe, or are willing to claim to believe, that they can with certainty be 100% unbiased in their findings as jurors. That is, people who are in this respect either mistaken, or willing to lie. And that’s weird; I guess otherwise there’s too much danger of appeals or lawsuits or something? (Only for Guilty verdicts, presumably, since Not Guilty verdicts are unexaminable?) The Judge did say several times that something (the State, maybe?) demands a Yes or No answer to his “could you be an unbiased Juror and do as you’re told?” question, and when people said “I’ll try” or “I think so” or “I’d do my best” or whatever, he insisted on a “Yes” or a “No”. (So good on the honesty for those cop-family people saying “No”, I suppose.)

So if my calculations are roughly correct, after ummm two or 2.5 days of Jury Selection, they’ve selected only about 10 jurors, and exhausted the Jan 13th jury draw; so since they need at least 12 jurors and 2 (and perhaps more like 6) alternates, they’re going to be at this for some time yet! (Hm, unless it’s not a felony case? In which case 10 might be enough? But it sounded like a felony case.)

2022/12/04

Omelas, Pascal, Roko, and Long-termism

In which we think about some thought experiments. It might get long.

Omelas

Ursula K. Le Guin’s “The Ones Who Walk Away From Omelas” is a deservedly famous very short story. You should read it before you continue here, if you haven’t lately; it’s all over the Internet.

The story first describes a beautiful Utopian city, during its Festival of Summer. After two and a half pages describing what a wise and kind and happy place Omelas is, the nameless narrator reveals one particular additional thing about it: in some miserable basement somewhere in the city, one miserable child is kept in a tiny windowless room, fed just enough to stay starvingly alive, and kicked now and then to make sure they stay miserable.

All of the city’s joy and happiness and prosperity depends, in a way not particularly described, on the misery of this one child. And everyone over twelve years old in the city knows all about it.

On the fifth and last page, we are told that, now and then, a citizen of Omelas will become quiet, and walk away, leaving the city behind forever.

This is a metaphor (ya think?) applicable whenever we notice that the society (or anything else) that we enjoy, is possible only because of the undeserved suffering and oppression of others. It suggests both that we notice this, and that there are alternatives to just accepting it. We can, at least, walk away.

But are those the only choices?

I came across this rather excellent “meme” image on the Fedithing the other day. I can’t find it again now, but it was framed as a political-position chart based on reactions to Omelas, with (something like) leftists at the top, and (something like) fascists at the bottom. “Walk away” was near the top, and things like “The child must have done something to deserve it” nearer the bottom. (Pretty fair, I thought, which is why I’m a Leftist.)

It’s important, though, that “Walk away” wasn’t at the very top. As I recall, the things above it included “start a political movement to free the child”, “organize an armed strike force to free the child”, and “burn the fucking place to the ground” (presumably freeing the child in the process), the latter being at the very top.

But, we might say, continuing the story, Omelas (which is an anagram of “Me also”, although I know of no evidence that Le Guin did that on purpose) has excellent security and fire-fighting facilities, and all of the top three things will require hanging around in Omelas for a greater or lesser period, gathering resources and allies and information and suchlike.

And then one gets to, “Of course, I’m helping the child! We need Councilman Springer’s support for our political / strike force / arson efforts, and the best way to get it is to attend the lovely gala he’s sponsoring tonight! Which cravat do you think suits me more?” and here we are in this quotidian mess.

Pascal

In the case of Omelas, we pretty much know everything involved. We don’t know the mechanism by which the child’s suffering is necessary for prosperity (and that’s another thing to work on fixing, which also requires hanging around), but we do know that we can walk away, we can attack now and lose, or we can gather our forces and hope to make a successful attack in the future. And so on. The criticism, if it can even be called that, of the argument, is that there are alternatives beyond just accepting or walking away.

Pascal’s Wager is a vaguely similar thought experiment in which uncertainty is important; we have to decide in a situation where we don’t know important facts. You can read about this one all over the web, too, but the version we care about here is pretty simple.

The argument is that (A) if the sort of bog-standard view of Christianity is true, then if you believe in God (Jesus, etc.) you will enjoy eternal bliss in Heaven, and if you don’t you will suffer for eternity in Hell, and (B) if this view isn’t true, then whether or not you believe in God (Jesus, etc.) doesn’t really make any difference. Therefore (C) if there is the tiniest non-zero chance that the view is true, you should believe it on purely selfish utilitarian grounds, since you lose nothing if it’s false, and gain an infinite amount if it’s true. More strongly, if the cost of believing it falsely is any finite amount, you should still believe it, since a non-zero probability of an infinite gain has (by simple multiplication) an infinite expected value, which is larger than any finite cost.
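That multiplication step can be made concrete with a toy Python sketch. All the numbers here are hypothetical stand-ins, of course; the point is only that any non-zero probability times an infinite payoff swamps any finite cost:

```python
# Toy sketch of the Wager's expected-value step; all numbers hypothetical.
INF = float("inf")

def expected_value(p_true, payoff_if_true, payoff_if_false):
    # Simple expectation over the two hypotheses the Wager allows.
    return p_true * payoff_if_true + (1 - p_true) * payoff_if_false

p = 1e-12                    # "the tiniest non-zero chance"
finite_cost = -1_000_000.0   # any finite cost of believing falsely

ev_believe = expected_value(p, INF, finite_cost)   # infinite
ev_disbelieve = expected_value(p, -INF, 0.0)       # negatively infinite
```

Any positive `p` and any finite cost whatsoever give the same result, which is exactly what the argument turns on.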

The main problem with this argument is that, like the Omelas story but more fatally, it offers a false dichotomy. There are infinitely more possibilities than “bog-standard Christianity is true” and “nothing in particular depends on believing in Christianity”. Most relevantly, there are an infinite number of variations on the possibility of a Nasty Rationalist God, who sends people to infinite torment if they believed in something fundamental about the universe that they didn’t have good evidence for, and otherwise rewards them with infinite bliss.

This may seem unlikely, but so does bog-standard Christianity (I mean, come on), and the argument of Pascal’s Wager applies as long as the probability is at all greater than zero.

Taking into account Nasty Rationalist God possibilities (and a vast array of equally useful ones), we now have a situation where both believing and not believing have infinite expected advantages and infinite expected disadvantages, and arguably they cancel out and one is back wanting to believe either what’s true, or what’s finitely useful, and we might as well not have bothered with the whole thing.
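In floating-point arithmetic the cancellation is almost literal: once an option carries both an infinite upside and an infinite downside, the expectation is simply undefined. A toy Python sketch (hypothetical probabilities, obviously):

```python
import math

# Once the Nasty Rationalist God is on the table, believing carries an
# infinite upside (standard view true) AND an infinite downside (nasty
# view true). The sum of those infinities doesn't resolve.
INF = float("inf")
p_standard = 1e-12   # hypothetical tiny probability
p_nasty = 1e-12      # equally hypothetical

ev_believe = p_standard * INF + p_nasty * (-INF)

print(math.isnan(ev_believe))  # True: inf + (-inf) is undefined
```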

Roko

Roko’s Basilisk is another thought experiment that you can read about all over the web. Basically it says that (A) it’s extremely important that a Friendly AI is developed before a Nasty AI is, because otherwise the Nasty AI will destroy humanity and that has like an infinite negative value given that otherwise humanity might survive and produce utility and cookies forever, and (B) since the Friendly AI is Friendly, it will want to do everything possible to make sure it is brought into being before it’s too late because that is good for humanity, and (C) one of the things that it can do to encourage that, is to create exact copies of everyone that didn’t work tirelessly to bring it into being, and torture them horribly, therefore (D) it’s going to do that, so you’d better work tirelessly to bring it into being!

Now the average intelligent person will have started objecting somewhere around (B), noting that once the Friendly AI exists, it can’t exactly do anything to make it more likely that it will be created, since that’s already happened, and causality only works, y’know, forward in time.

There is a vast (really vast) body of work by a few people who got really into this stuff, arguing in various ways that the argument does, too, go through. I think it’s all both deeply flawed and sufficiently well-constructed that taking it apart would require more trouble than it’s worth (for me, anyway; you can find various people doing variously good jobs of it, again, all over the InterWebs).

There is a simpler variant of it that the hard-core Basiliskians (definitely not what they call themselves) would probably sneer at, but which kind of almost makes sense, and which is simple enough to express in a way that a normal human can understand without extensive reading. It goes something like (A) it is extremely important that a Friendly AI be constructed, as above, (B) if people believe that that Friendly AI will do something that they would really strongly prefer that it not do (including perhaps torturing virtual copies of them, or whatever else), unless they personally work hard to build that AI, then they will work harder to build it, (C) if the Friendly AI gets created and then doesn’t do anything that those who didn’t work hard to build it would strongly prefer it didn’t do, then next time there’s some situation like this, people won’t work hard to do the important thing, and therefore whatever it is might not happen, and that would be infinitely bad, and therefore (D) the Friendly AI is justified in doing, even morally required to do, a thing that those who didn’t work really hard to build it, would strongly rather it didn’t do (like perhaps the torture etc.). Pour encourager les autres, if you will.

Why doesn’t this argument work? Because, like the two prior examples that presented false dichotomies by leaving out alternatives, it oversimplifies the world. Sure, by retroactively punishing people who didn’t work tirelessly to bring it into being, the Friendly AI might make it more likely that people will do the right thing next time (or, for Basiliskians, that they would have done the right thing in the past, or whatever convoluted form of words applies), but it also might not. It might, for instance, convince people that Friendly AIs and anything like them were a really bad idea after all, and touch off the Butlerian Jihad or… whatever exactly that mess with the Spacers was in Asimov’s books that led to there being no robots anymore (except for that one hiding on the moon). And if the Friendly AI is destroyed by people who hate it because of it torturing lots of simulated people or whatever, the Nasty AI might then arise and destroy humanity, and that would be infinitely bad!

So again we have a Bad Infinity balancing a Good Infinity, and we’re back to doing what seems finitely sensible, and that is surely the Friendly AI deciding not to torture all those simulated people because duh, it’s friendly and doesn’t like torturing people. (There are lots of other ways the Basilisk argument goes wrong, but this seems like the simplest and most obvious and most related to the guiding thought, if any, behind this article.)

Long-termism

This one is the ripped-from-the-headlines “taking it to the wrong extreme” version of all of this, culminating in something like “it is a moral imperative to bring about a particular future by becoming extremely wealthy, having conferences in cushy venues in Hawai’i, and yes, well, if you insist on asking, also killing anyone who gets in our way, because quadrillions of future human lives depend on it, and they are so important.”

You can read about this also all over the InterThings, but its various forms and thinkings are perhaps somewhat more in flux than the preceding ones, so perhaps I’ll point directly to this one for specificity about exactly which aspect(s) I’m talking about.

The thinking here (to give a summary that may not exactly reflect any particular person’s thinking or writing, but which I hope gives the idea) is that (A) there is a possible future in which there are a really enormous (whatever you’re thinking, bigger than that) number of (trillions of) people living lives of positive value, (B) compared to the value of that future, anything that happens to the comparatively tiny number of current people is unimportant, therefore (C) it’s morally permissible, even morally required, to do whatever will increase the likelihood of that future, regardless of the effects on people today. And in addition, (D) because [person making the argument] is extremely smart and devoted to increasing the likelihood of that future, anything that benefits [person making the argument] is good, regardless of its effects on anyone else who exists right now.

It is, that is, a justification for the egoism of billionaires (like just about anything else your typical billionaire says).

Those who have been following along will probably realize the problem immediately: it’s not the case that the only two possible timelines are (I) the one where the billionaires get enough money and power to bring about the glorious future of 10-to-the-power-54 people all having a good time, and (II) the one where billionaires aren’t given enough money, and humanity becomes extinct. Other possibilities include (III) the one where the billionaires get all the money and power, but in doing so directly or indirectly break the spirit of humanity, which as a result becomes extinct, (IV) the one where the billionaires see the light and help do away with capitalism and private property, leading to a golden age which then leads to an amount of joy and general utility barely imaginable to current humans, (V) the one where the billionaires get all the money and power and start creating trillions of simulated people having constant orgasms in giant computers or whatever, and the Galactic Federation swings by and sees what’s going on and says “Oh, yucch!” and exterminates what’s left of humanity, including all the simulated ones, and (VI) so on.

In retrospect, this counterargument seems utterly obvious. The Long-termists aren’t any better than anyone else at figuring out the long-term probabilities of various possibilities, and there’s actually a good reason that we discount future returns: if we start to predict forward more than a few generations, our predictions are, as all past experience shows, really unreliable. Making any decision based solely on things that won’t happen for a hundred thousand years or more, or that assume a complete transformation in humanity or human society, is just silly. And when that decision just happens to be to enrich myself and be ruthless with those who oppose me, everyone else is highly justified in assuming that I’m not actually working for the long-term good of humanity, I’m just an asshole.

(There are other problems with various variants of long-termism, a notable one being that they’re doing utilitarianism wrong and/or taking it much too seriously. Utilitarianism can be useful for deciding what to do with a given set of people, but it falls apart a bit when applied to deciding which people to have exist. If you use a summation you find yourself morally obliged to prefer a trillion barely-bearable lives to a billion very happy ones, just because there are more of them. Whereas if you go for the average, you end up being required to kill off unhappy people to get the average up. And a perhaps even more basic message of the Omelas story is that utilitarianism requires us to kick the child, which is imho a reductio. Utilitarian calculus just can’t capture our moral intuitions here.)
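The two aggregation rules in that parenthetical can be sketched in a few lines of Python; all the utility numbers here are invented for illustration:

```python
# Toy sketch of sum vs. average utilitarianism; all numbers invented.
# A population is a list of (count, per-person utility) pairs.

def total_utility(population):
    return sum(n * u for n, u in population)

def average_utility(population):
    total = sum(n * u for n, u in population)
    count = sum(n for n, _ in population)
    return total / count

happy = [(10**9, 100.0)]    # a billion very happy lives
barely = [(10**12, 0.2)]    # a trillion barely-bearable lives

# Summation prefers the trillion barely-bearable lives...
assert total_utility(barely) > total_utility(happy)

# ...while averaging rewards removing the less happy.
mixed = [(90, 80.0), (10, 5.0)]
culled = [(90, 80.0)]
assert average_utility(culled) > average_utility(mixed)
```

Neither rule captures the intuition that some trades, like the Omelas one, are simply off the table.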

Coda

And that’s pretty much that essay. :) Comments very welcome in the comments, as always. I decided not to add any egregious pictures. :)

It was a lovely day, I went for a walk in the bright chilliness, and this new Framework laptop is being gratifyingly functional. Attempts to rescue the child from the Omelas basement continue, if slowly. Keep up the work!

2022/11/21

NaNoWriMo 2022, Fling Thirty-Seven

There are books everywhere; this makes me happy. There is diffuse moonlight coming in through the windows; this also makes me happy.

Essentially none of the books here are in any of the very few languages that I can read. This also makes me happy, in a way. There is so much to know already, this only emphasizes the point. And some of them have really interesting illustrations.

The books fill the shelves, and lie in piles on the floor. I walk from place to place, and sometimes sit on something that may be a chair. It’s just like home.

As the being known as Tibbs negotiates with the locals to get us access to the zone containing the confluence, and Steve and Kristen wander the city like happy tourists (well, that is not a simile, really; they wander the city as happy tourists), I have drifted here, which feels entirely natural.

And now, having communicated the above to my hypothetical reader without becoming distracted (except for that one parenthetical about simile versus plain description), I can calmly note that these things may not be “books” in the obvious sense, that moonlight coming in through the windows is a phenomenon that quantum physics can just barely explain, and for that matter that “makes me happy” is enough to keep a phalanx (a committee, a department, a specialty) of psychology and anthropology scholars occupied for a very long time.

And that time is an illusion.

I’ve always been able to separate language from thoughts about language, to separate thoughts about reality from meta-thoughts about thoughts. At least in public. At least when talking to other people.

But I think I’m better at it now, even when I’m in private, just talking to myself, or writing for you, dear cherished hypothetical reader (cherished, inter alia, for your hypothetically inexhaustible interest and patience).

Ironically (so much to learn about irony!), I credit much of this improvement to long discussions (how long? how does the flow of time go in the cabin of an impossibly fast sharp vehicle, speeding twinnedly from one end to another of an infinite rainbow band?) with an arguably non-existent being called Tibbs, and an enigmatic pilot called Alpha, after her ship.

(Why do we use the female pronoun toward the pilot Alpha? Why does she speak English? Or how do we communicate with her if she does not? Is the intricate shiny blackish plastic or metal construct at the front of her head a helmet, or her face? Is the rest of her a uniform, flight suit, or her own body, or some of each, or entirely both? Would it have been rude to ask?)

Tibbs and Alpha, I feel, are kindred spirits, my kindred, beings blessed or cursed with a tendency to look through everything and try to see the other side, even knowing there is finally no other side, to overthink, to overthink the notion of overthinking. But they have, perhaps, had longer to get used to it.

The being Tibbs claims to be millions of years old, while also claiming to have slept for most of that time. The Pilot Alpha suggests, by implying (how? what words or gestures did she actually use?) that questions of her origin are meaningless, that she has always existed, or has at least existed for so long that information about her coming-to-be is no longer available in the universe, having been lost to entropy long since.

(At what level is entropy real? Time is an illusion; so is entropy a statement about memory? A statement about what we remember, compared to what we experience right now and anticipate, right now, about the future? Or should we only talk about entropy after we have thoroughly acknowledged that time is an illusion, but gone on to speak about it as though it were real anyway, with only a vague notion, an incomplete explanation, of why that is okay?)

Here is a thought about the illusory nature of the past and future: given that this present moment is all that exists, then all that exists outside of memory and anticipation, is this one breath, this one side of this one room containing these shelves and piles (never enough shelves!) of books, or the appearance of books.

Everything else, the long / short / timeless journey aboard the fast sharpness Alpha, the anticipation felt while listening to the sound of something like wind just before meeting Tibbs for the first time, Kristen’s palm against my cheek in that other library, the glittering brass something that she received from the Mixing, the fine hairs at the back of her neck, all of that is only (“only”?) memory. Does that mean that it is no more important, no more valid, no more real, than anything else purely mental? No more significant than a wondrous pile of multi-colored giant ants made of cheese, singing hypnotic songs, that I have just this moment imagined, and now remember as a thing I first thought of a few moments ago?

This seems… this seems wrong. (See how the ellipsis there, if you are experiencing these words in a context in which there was one, adds a certain feel of spoken language, and perhaps therefore conveys some nuance of emotion that otherwise would be missing? That is communication, in a complicated and non-obvious form.)

Here is a hypothesis put forward I think by the being Tibbs, as it (he? she? they? they never expressed a preference that I can recall) moved slowly and in apparent indifference toward gravity around the front of the cabin of the Alpha: Some of the things, people, situations, events, in memory are especially significant (valid, important, “real”) because they changed me. Others, while equally (what?) wonderful themselves, like the pile of cheese-ants, did not have as much impact, or the same kind of impact.

If we could work out a good theory of what it means for an object or event (or, most likely, an experience) to change a person, given that time is an illusion, then this seems promising.

The Pilot Alpha seemed in some way amused by my desire, or my project, to develop a systematic justification for (something like) dividing the world (dividing memory) into “real” things and “imagined” things, with the former being more important or more valid (or, as the being Tibbs suggested, more cool) than the latter. Amused in a slightly condescending way, perhaps, which is fine for a possibly-eternal being, but which (equally validly) I am personally rather sensitive to, given my own developmental history.

The being Tibbs, however, was not accurate in referring to my just-subsequent behavior as “a snit”.

The moonlight coming through the windows (however that might be occurring) is diffuse because it comes through the visually-thick atmosphere of this world, or this area of this world. It seems implausible that we can breathe the atmosphere without danger; is this evidence that we are in a virtuality? Is it reasonable that I nearly wrote “a mere virtuality”? Was that because “mere” would have had a meaning there that I would have been expressing? (What is it to “express” a “meaning”?) Or only because “mere virtuality” is a phrase that occurs in my memory in (remembered) situations with (for instance) similar emotional tone? What is emotional tone?

I anticipate that the being Tibbs will return to this long library room within a certain amount of time, most likely with some information to convey (what is it to “convey” some “information”?) about our continuing travels (why are we travelling? what is “travel”?). I anticipate that (the humans) Kristen and Steve will return to this long library room within a certain amount of time, most likely exchanging cute little looks and possibly holding hands, possibly having acquired some odd or ordinary souvenir in the bazaars of the city (but is this a city? does it have bazaars? what counts as a bazaar?).

And I look forward to whatever follows.

Fling Thirty-Eight

2022/11/12

NaNoWriMo 2022, Fling Twenty-One

“Here again, eh? How’s the metal bar coming? My-head-wise, that is?”

He had opened his eyes, again, to see Colin and Kris sitting on the ground with him. It was like no time had passed at all, but also like that last time had been a long time ago.

“No worse, but no progress, I’m afraid,” Colin said.

“How long has it been?”

“Four days since the accident.”

“Not too bad. Is it, um, is it more stuck than they thought?”

The virtuality seemed thinner and greyer, and the clouds were more like wisps and rivers of mist, moving faster than the wind between the mountains.

Kristen moved closer to him and rubbed his back. He felt it in a vague and indirect way; it still felt good.

“Yeah,” she said, “they tried once, but they … didn’t like how it was going.”

“Am I gonna die, then?”

“Probably eventually,” Colin said. Kris rolled her eyes.

A strange wind seemed to blow through the virtuality, through him. He felt himself thinning out somehow, and his viewpoint rising into the air.

“Whoa,” he said, and his voice came to him strange and thready.

Colin and Kris stood up, in the virtuality, and looked up toward his viewpoint.

“What’s happening?” he said, his voice still fluttering.

“I’m … not sure,” Kris said.

“Probably just the fMRI connection again?” Colin said, uncharacteristically uncertain.

“Booooo!” Steve said, his viewpoint now moving up and down and bobbing side to side. As far as he could see his own body, it was stretched out and transparent. “I’m a ghooooost!”

Kris put her hand to her forehead and looked down.

“Stop that,” she said, “at least if you can.”

Steve tried to concentrate, to focus on the patterns and concentrations of being in the virtuality, and his viewpoint moved downward slowly.

“Here you come,” Colin smiled.

Steve watched himself re-form with curiosity. “Was that supposed to happen?” he asked.

“Not… especially.”

“Are they working on me again, trying to get the thing out?”

“No, they were just doing some more scans and tests.”

“Including how I interact with you guys?”

“Like last time,” Colin nodded.

“Then why –”

Then there was another, much stronger, gust of that wind, and Steve felt himself torn away, stretched out into mist, and blown somewhere far away.

There was an instant, or a day, of darkness.

“Hello?”

“Steve?” It was Kristen’s voice, somewhere in this dark place.

“Kris? Are we, I mean, is this the real world again?”

“I don’t know.”

“What’s real, after all?”

“Colin, you nerd, where are you?”

“Apparently in absolute darkness, where all I can do is speak, and hear you two.”

“Is this part of your virtuality, Kris?”

“No, I mean, it’s not supposed to be. Not Hints of Home or the fork that I made for you, or any other one I’ve made.”

“It’s really simple, anyway.”

“Could be a bug.”

Steve tried to move, tried to see. It was like trying to move your arm, rather than actually moving it. It was like trying to open your eyes in one of those dreams where you can’t open your eyes.

“You two can just duck back out for a second and see what’s going on, right?”

“That, well, that seems like it’s a problem, too,” Colin said, “at least I can’t duck out; you, Kris?”

“Nope, me, either; it’s heh weird. What a trope.”

“This is always happening to main characters,” Colin said, “so I guess we’re main characters.”

“You could be figments of my imagination now,” Steve said, “like, that wind could have been the virtuality’s interpretation of some brain thing happening, and now I’m totally comatose and hallucinating, and you two are still back there, and I’m just hallucinating that your voices are here.”

“Babe–” Kris started.

“It’s true,” Colin said, “if we were imaginary, it would probably be easier to imagine just our voices, and not have to bother with faces and movement and so on.”

“Oh, that’s helpful!”

“You guys trying to convince me you’re real, through pointless bickering?”

“No, but would it work?”

“It might. I’m hating this darkness, could everyone try to summon up some light?”

There was a short silence, then Steve felt a sort of vibration through everything, and a dim directionless light slowly filled the nearby space.

“That worked.”

“As far as it goes.”

“We still look like us.”

“This isn’t what I was wearing, though.”

“Colin’s the same.”

“Well, what else does he ever wear?”

“Hey!”

Colin did indeed look as he had earlier in the virtuality, as perfectly and nattily suited as ever. Kristen, on the other hand, was wearing a loose flowered dress, and Steve was in a well-tailored suit himself, less formal (and, he imagined, more comfortable) than Colin’s, but still he thought rather elegant.

“This is very gender normative,” Kristen said, standing up and slowly turning around, “but I like it.”

“Where are we?” Steve said, “and why can’t you guys duck out? I know I can’t because I’m the patient and my body’s sedated, but…”

“Wow, I hope we’re okay,” Kristen said.

“If something had happened to our bodies, we should have gotten a warning, and probably pulled out automatically,” Colin said logically.

“I don’t know,” said Steve, “the hallucination theory still seems pretty good.”

“That way lies solipsism,” pointed out Kris. She spun over to Steve and touched his shoulder.

“I felt that,” he said.

She frowned. “Me, too. Really well.”

“See? Hallucinations.”

“I’d know if I was a hallucination,” Kris said.

Colin was walking around at the edge of the lighted circle.

“I wonder if this is all there is,” he said.

“It’s a small hallucination, sorry,” said Steve.

“This could be the whole universe,” he said, “although I seem to remember lots more stuff.”

“Colin–”

“This present moment is all that exists,” Colin said, “and all the other stuff is just a memory, that also exists right now.”

“Here he goes.”

“It might pass the time.”

“Shouldn’t we try to be, like, getting back to the real world, making sure our bodies are all okay…”

“If you can think of a way to do that…”

“Good point.”

Colin walked back into the center of the lighted circle, and the three sat down on the plain flat ground again, close to each other, surrounded by darkness all around.

Fling Twenty-Two

2022/11/12

NaNoWriMo 2022, Fling Twenty

Everything comes together deep down.

The gentle tendrils of the mushrooms and the fungi, the mycelia, form into knots beneath the damp ground, and those knots reach out and connect to each other, knots of knots of knots connected in a single vast sheet below the world.

The fungi do not think.

But they know.

There are more connections in the mycelia of the rich dark earth of a single farm, than in the brain of the greatest human genius.

But they do not think.

The stars are connected, by channels where gravity waves sluice in and out of the twelve extra dimensions of the universe, the ones whose nature we haven’t figured out yet.

The stars… the stars think.

But they do not know.

The fungal and stellar networks found each other and connected a long time ago.

Every tree and every stone, every mammal footstep, every shovel of earth. Every spaceship and satellite launch.

They are always watching.

Or no.

Every tree, stone, footstep, and every launch, are part of the network already.

Every tree, stone, footstep, and every launch, is just the galactic star-fungus network, thinking, and knowing.

“Really?”

“I mean, absolutely. There’s no way it could be false.”

“They’re connected? We’re … part of their giant brain?”

“Of course. Everything is part of everything.”

“I — but if it isn’t falsifiable…?”

“That’s right, it’s not really a scientific theory. It’s more a way of thinking.”

“A religion?”

“A philosophy, more.”

“But if it isn’t true…”

“Oh, it’s true.”

“Stars and fungus… sounds sort of paranoid.”

“Nah, it’s just how the universe is; everything is connected, and the fungi and the stars more than anything else.”

“How did they find each other?”

“How could they not have? It was inevitable. Necessary.”

The stars and the mycelium resonate as the ages roll on. Life comes into being, and the network reacts, rings, with pure tones in every octave of the spectrum. War is a rhythmic drumming; peace is a coda, or an overture. And death is percussion.

Deep in the space between the stars, there are nodes where major arteries of coiled dimensions cross and knot, just as the mycelia cross and knot deeper and deeper into the intricate ground. In the space around a star-node, in the stone circles above the spore-nodes, beings dance, constituting and manifesting the thoughts of the stars, and the knowledge of the mushrooms.

“Like, faerie circles? There are … star circles of some kind, out in space?”

“There are. Things gather at them, tiny things and big things, people from planets coming in their starships, and beings that evolved there in space, floating in years-long circles on the propulsion of vast fins pushing on interstellar hydrogen.”

“That seems like something that might not be true. What if we go out in a starship sometime, and there’s nothing like that out there?”

“There is. An endless array of them.”

“How do you know that?”

Those who dance at the nodes of the stars and the fungi, over the centuries, absorb the thinking and knowledge of the infinite universe. Whence our stories of wise ones, of wizards, of the Illuminati. Whence the yearning songs of the star-whales, of forgotten ancient wisdom, and secret covens in the darkness.

Those who evolve on planets have an affinity to the fungal nodes. Those who evolve between the stars have an affinity for the stellar nodes. They complement and complete each other.

No planetary culture is mature until it has allied with a stellar culture.

No stellar culture is mature until it has allied with a planetary culture.

“So are the, y’know, the Greys, have they visited Earth to see if we’re worthy of allying with? Are they, like, an immature stellar culture looking for a fungus-centered culture to hook up with?”

“I don’t know.”

“You don’t know? Haven’t you heard about it in the fungusvine?”

“Fungusvine, funny.”

“Myceliavine?”

Everything comes together deep down.

The semantic tendrils of the realities extend, purely by chance, into the interstices between universes. Over endless time, over expansions and collapses, rollings in and rollings out, the tendrils interact, purely by chance, and meaning begins to flow.

Knots, and knots of knots, and knots of knots of knots, forming a vast extradimensional network that binds the realities together.

Every reality is underlain by its own networks, of mycelia and gravitational strings, or aether winds and dragon spines, the thoughts of Gods and the songs of spirits, or thrint hamuges and the fletts of tintans. And the network of each reality connects to the extradimensional network, and thereby to everything else.

Every tree, stone, footstep, and every starship launch, is part of the unthinkably vast mind of the universe, heart of the universe, the sacred body of everything, in the largest sense.

“Ooh! Are there, like, reality-witches, who find nodes in the network between the realities, and have dances and stuff there, and slowly gather extradimensional wisdom?”

“Of course, there are!”

“I want to be one of those.”

“Oh, you will.”

The mind, heart, interconnected web of the universe, the multiverse, thinks (and feels and knows) slowly, deliberately. For a single impulse to travel from one end to the other, if the web had ends, would take almost an eternity. But for the resonating tone, the mood, the energy fluxes, of the network to change, all over, from end to end, takes only an instant.

“Wouldn’t that violate the speed of light and all?”

“Different realities, different speed limits.”

“I don’t know, it seems like you could use that to cheat.”

“You absolutely can.”

It is a category mistake to think that because All Is One, I can make a free transcontinental phone call.

But it is universally true that the extradimensional web of interconnections holds ultimate wisdom.

You are a neuron of the multiversal Mind, you are a beat of the multiversal Heart. You resonate always in harmony with its thoughts, its knowings, its feelings. You can accept the harmony or try to reject it, and either way you are sending your signal from one reality to another, and your breathing is a message to another universe.

Fling Twenty-One

2022/11/11

NaNoWriMo 2022, Fling Eighteen

Dr. Artemis Zane-Tucker sat working in her personal virtuality, arranging the big books of tables and glossy photos open on her desk, sometimes closing one and returning it to a shelf, sometimes pulling out a new one, at other times closing a book and opening it again on some entirely different content. The photos were mostly black-and-white scenes from a life, from someone’s thoughts and memories, interspersed with similarly monochromatic X-ray and CT scan images. She was judging both the quality of the memories, and their relationship to a particular obviously-damaged area on the scan images.

The small office contained no shelves in the usual sense; when Dr. Zane-Tucker was done with a book, each of which represented a particular data-source, she would close it and then gesture with it in the air in a way vaguely resembling the act of putting a book on a shelf, and the virtuality AI network would recognize the gesture and the book would silently disappear.

Back in what many people still described as the real world, Dr. Zane-Tucker (or, as she would have put it, her body) lay on a comfortable divan of touchless foam, with gracefully-shaped plastic cups over her eyes and a realtime fMRI cap loosely covering her head and connecting her to the virtual. Much of her body was experiencing something very close to sleep, but her brain was actively awake.

The books that the doctor opened and closed and studied and made notes in on this night were mostly related to a difficult case in the local trauma center; some desert hot-rodder had presented with various broken bones, a concussion, and, most interestingly, a penetrating head injury due to a large foreign object in the form of a metal fragment of unknown nature and origin. The patient had been stabilized quickly and effectively, a routine CT scan done, and a cautionary coma induced with neothiopentol. The injury and presence of the object had made it difficult to synchronize an fMRI lace, but some quick and, she gathered, rather brilliant improvisation by the imaging staff had allowed the patient to be brought more or less normally and consciously into a virtuality for brain-function study.

Now she was going through the records and readings from that study, putting together a baseline picture of the patient’s brain function as stabilized, for use in the operating theater the next day, as the surgical team would attempt to extract the object and any associated foreign matter, and determine more precisely the degree of contusion or laceration, without causing any more tissue damage than absolutely necessary. As far as she had seen from the data so far, the patient’s brain function was as normal as could be expected in the circumstances, with no sign of serious or lasting impairment. Even activation paths involving the damaged area were functioning in an apparently normal way.

She hoped in an abstract way that that would continue to be true.

Dr. Zane-Tucker smiled for a moment, thinking how similar she and the patient were at this moment, bodies sleeping in a sleep at least partly induced or assisted by technology, and minds active, or potentially active, in any conceivable artificial reality by virtue of their fMRI laces and attendant AI networks. She got up and walked around her desk, through the vaguely-defined edge of her office, and into the less well-organized back lot of her personal space.

She dictated a shorthand summary of her findings into the air for the AI network to transcribe into her official report, and walked deeper into the woods.

The woods were thick in places, dark, and apparently endless. As she walked deeper, the doctor’s body appeared to thin out, to become transparent and insubstantial, so that she could feel more at one with the illusion (or the reality) here, without the distractions of a simulated body. She thought about the various virtual species, mostly insectoid, that she had worked with the AI network to bring into being in her woods, and how all of it flowed along around her, naturally, without her help or intervention.

The thought was comforting.

She let her awareness travel through the woods, to areas that the AI had not yet filled in, and experienced the slowdown in time that meant that the virtuality was working extra-hard to extend the world further in the direction she was going. She could have whispered or even just emphatically thought instructions to it to alter the general nature of the extensions, or brought out virtual tools to craft with the AI a specific canyon, or tower, or waterfall. But tonight she was content to let it spin out the world as it would, rolling the dice as it were with every meter she proceeded deciding how predictable or surprising the next bit of the world would be. She passed over a small stream, knowing that if she went upstream the ground would rise, and if she went downstream it would fall, perhaps with a pond or a lake, or just a wet place between gentle hills, to receive the flowing water, even if none of that existed just yet.

And when she went out again, to the office or even the real world, she would let all this new area sink back into potentiality; no sense cluttering up permanent storage with bits of woods that could just be rolled out afresh next time she walked this way.

As she often did, she thought of the real world (the “real world”) as being the same way. As you go, the world gets filled in around you, and when you leave again it dissolves into clouds of probability, to reform if and when you return. It was a solipsistic idea, but one that she rather enjoyed.

“We surgeons are supposed to be the self-absorbed ones,” a friend and colleague had laughed when she had shared that thought with him, “but you’ve gone above and beyond there!”

Floating as a disembodied viewpoint through the newly-created but otherwise ancient woods, she remembered that conversation, and her invisible face smiled.

She did hope the young person with the metal intrusion in their skull would be all right. The data looked good so far.

Fling Nineteen

2022/11/07

NaNoWriMo 2022, Fling Twelve

Light passes through windows. This is a puzzle. This is a complicated story, a story that no one understands, in four words.

Beams of sunlight pass through the library windows, making patterns on the wall.

Sitting where I sit, among the shelves and the piles of books, I see beams of sunlight passing through the library windows, making patterns on the wall.

My evidence for the existence of beams of sunlight is (at least) in two parts: I see dust motes dancing (dancing? what is it to dance? what kinds of things can dance?) in the sunbeams, visible (to me) in the light, where in other parts of the (dusty) library air, there are no (to me) visible dust motes dancing (or otherwise) in the air, and one explanation (is it the best one? what is best?) is that there is a sunbeam, or at least a beam of light, passing through the air there. (How does a beam of light pass through the air, let alone through the glass of the windows?)

The second part of my evidence is the patterns on the wall. I know, or I remember (in memories that I have now, at this moment, the only moment, the only entity, that exists) that the wall is really (what could that possibly mean?) more or less uniform in color; vaguely white, and the same vague whiteness more or less all over. But what I see, or what I see at some level of awareness (what are levels of awareness? what is awareness? who is aware?) is a complex pattern of light and dark, triangles and rectangles and more complex figures laid out and overlapping, and I theorize (automatically, involuntarily, whether or not I intend to) that these brighter shapes and triangles are where the sunbeam (passing through empty space, and then the air, the window, the air again) strikes the wall, and the darker areas, the ground to the sunbeam’s figure, are simply the rest, the shadows, where the sunbeam does not strike, or does not strike directly.

(Or the dark places are where the frames of the window and the edges of shelves and chairs, things outside and things inside, cast their shadows, and the light places, the ground to the shadows’ figure, is the rest, where the shadows do not fall; figure is ground and ground is figure.)

Can we avoid all of this complexity, if we hold Mind as primary? I am. Or, no, as generations of philosophers have pointed out, feeling clever to have caught Descartes in an error, not “I am” but only something along the lines of “Thought is”. If there is a thought that “thought is”, that can hardly be in error (well, to first order). But if there is a thought “I think”, that could be an error, because there might be no “I”, or it might be something other than “I” that is thinking.

Second attempt, then! Can we avoid all of this complexity, if we start with “Thought is”? Or “Experience is”?

Experience is. There is this instant of experience. In this instant of experience, there is a two-dimensional (or roughly two-dimensional) surface. Whether or not there is an understanding of dimensions and how to count them, there is either way still a two-dimensional surface, here in this experience. In some places, the two-dimensional surface is bright. In some places, it is dark; or it is less bright.

Whether or not there is any understanding of brightness and darkness, what might lead to brightness and darkness, anything about suns or windows or how they work, there is still this brightness, in this single moment of experience, and there is still this darkness.

(Whether the darkness is actually bright, just not as bright as the brightness, whether the surface is really two-dimensional in any strong sense, or somewhere between two and three dimensions, or whether dimensions are ultimately not a useful or coherent concept here, there is still, in this singular moment of experience that is all that there is, this experience, this experience, which is what it is, whether or not the words that might be recorded here as a result of it (and whatever “result” might mean) are the best words to describe it.)

And whether it is described at all, whether description is even possible, does not matter; the main thing, dare I write “the certain thing”, is that this (emphasized) is (similarly emphasized).

So.

From this point of view, we may say, Mind is primal. Mind, or whatever we are attempting successfully or unsuccessfully to refer to when we use the word “Mind”, does exist, or at any rate has whatever property we are attempting to refer to when we say “does exist”. Except that “refer to” and “property” are both deeply problematic themselves here.

This is why the ancient Zen teachers (who exist, in this singular present moment of experience, only as memories of stories, memories that exist now) are said to have, when asked deep and complex questions through the medium of language, and impossibly concerning language, responded with primal experience, with blows or shouts or (more mildly) with references to washing one’s bowl, chopping wood, carrying water.

We can remember that there is this. So, what next?

Language (this language, many other languages, those languages that I am familiar with, that I know of) assumes time.

Without the concept of time, it is difficult to speak, to write, to hypothesize.

Let alone to communicate.

To communicate!

The impossible gulf: that you and I both exist (what does it mean to exist?) and that by communicating (whatever that might be) one or both of us is changed (otherwise why bother?). But change, like time (because time), is an illusion.

So!

Is it necessary (categorically, or conditionally) to participate in the illusion? Or pretend? To act (or to speak, as speech is after all action) as though time and change were real?

The sun comes through the windows and casts shadows on the wall. Is there someone at the door?

Fling Thirteen

2022/11/06

NaNoWriMo 2022, Fling Nine

Every time I open my eyes, the world becomes narrower, and wider.

What I see tells me that certain things are impossible; what I see tells me that so many things are possible.

Every time I move my elbow and touch the world, the world becomes narrower, and wider.

What I touch, what I feel, tell me that certain things are impossible; what I touch, what I feel, tell me that so many, so very many, things are possible.

When I floated unseeing, unfeeling, in an endless void, everything was possible.

And nothing was possible.

Here is an image of trees set among oddly pointed hills. On the ground, in the image, white trails snake everywhere.

The white trails might be ancient lava flows, might be modern water runnels, might be plants of different colors growing in stripes because of the underlying chemical differences caused by ancient lava. Or modern water.

Trees are shelter from the rain, trees are habitats, are not-yet-decayed masses of food for saprophytes, are just one of the things that happen when you get an area very very hot, and then let it cool very very slowly.

We circle the tree, each of us thinking our own thoughts, each of our thoughts reflecting everyone’s thoughts. The trees are prisms for our thoughts, taking them in as white beams and redistributing them as rainbow tracks; rainbow tracks for the trains of ages, rainbow tracks for the steam-engines of understanding.

Here is another image, of trees set among oddly pointed hills. On the ground, in the image, white trails snake everywhere.

Are these photographs, and has the photographer only turned from east to west, or north to south, between one and the other?

Are these impressionist, semi-abstract, paintings from life, and has the artist turned the easel in one direction on one day, and the other direction on another day?

The artist sits on a hillside, under the outermost reaching branches of a dense dark tree with deep green foliage, and as the artist slowly paints, small animals and large insects rustle in the tree, and in the crevices of the ground cover.

If these are paintings from life, what does the artist smell, moving the brush slowly over the canvas, there under the edge of the tree’s shadow, where stripes run over the ground, or where the ground inspires the artist to paint stripes where, in plain reality, there may be none?

Every time the artist breathes, and scents the air, the world becomes narrower, and wider.

What the artist smells, scents, breathes, makes olfactory note of, tells the artist that certain things are impossible. What the artist smells, makes olfactory note of, tells the artist that many things are possible.

The brush of the artist spells out on the canvas what is possible, what is impossible.

With every touch of the brush, the universe becomes narrower, and wider.

When the canvas was blank, everything was possible.

And nothing was possible.

Time passes, for us circling the trees, for the artist painting, for the trees refracting all our thoughts. For the photographer on the hilltop and for the writer in the old overstuffed armchair.

As time passes, as we see and hear and feel and smell (and taste), the world becomes narrower, and wider.

With every tick of the universal clock, what we experience tells us that certain things are impossible.

With every tick of the universal clock, what we experience tells us that so many things are possible.

Circle the tree with me, with love; give the trees the white beams of your thoughts, and accept from the trees the rainbow diffractions of mine. Love is the result of all of it, and love is the cause of all of it. Love and light are the same, trees and stripes on the ground are the same. Darkness and stillness are the same.

Here is a mystery. Here is a question. What will the next moment declare impossible? What will the moment after reveal as possible?

Here is another image, of a river of stripes flowing between thick dark trees, among oddly pointed hills. Under one tree, a feline form melds with the shadows, resting or waiting, relaxed or alert, its ears a pair of points in the dimness, listening, its whiskers quivering in the air.

Every sound the cat hears tells it that certain things are impossible. Every sound the cat hears, tells it that so many things are possible. The sound of prey in the brush, the sound of splashes in the river, the sound of another cat, distant among the trees, raising a brief and plaintive call toward the sun, the moon, the spirits of prey in the trees. The cat’s body is full of potential, full of watching and patience and the thought of sudden fatal motion.

Did the artist see the cat waiting in that shadow, and capture a hint with that moving brush? Or is the cat the diffraction of a thought of the artist, from patterns in the artist’s mind, from memories of the artist’s past? Is the artist also a traveler, a reader, a composer of fiction or symphonies?

Music makes no claims, cannot be judged or faulted for adding imaginary cats to real tree-shadows. Can I lie with music? Can I lie with a photograph, with a painting, with a loaf of bread?

If these images on the table before us came with no words, no labels or cover-letter, nothing claiming anything with words, then perhaps they cannot lie to us, either; they can only be what they are, and we are free to take them (the rainbows from the trees) and use them in any way at all. They tell us that some things are impossible (now that the envelope has proven to contain only these two images, we are not in a universe where it contains something else, as well or instead), and that many things are possible: the images as photographs, as paintings, as narrative, as hallucination, as music; the sending as a gift, a threat, as braggadocio or the fulfillment of a contract.

Hold them near your face and breathe. What is possible?

Fling Ten

2022/11/05

NaNoWriMo 2022, Fling Seven

The rich dark earth was still joyfully absorbing the last of the rain as Alissa unrolled the bundle for Sema to see. What had been rivulets of the pale sandy earth were drying now, making patterns on the ground around them. In a day they would already be fading, as the lighter sank and scattered among the darker clods and grains.

“Do you see?” Alissa asked, placing the two patterned fragments beside each other in her indentation.

Sema waved long antennae in a positive motion.

“This is what the pale curves looked like, looking down from the perch above; is that the doing that you saw?”

“Yes,” Alissa the storyteller replied, “and the pale curves stirred up in my thoughts various memories of scent trails, winding between stems and across open places.”

“Is there a story…” Sema began, and Alissa followed the other’s thoughts in tandem.

“There is,” Alissa said, “one that does not get told all that often.”

It was a story about stories, as most of Alissa’s are, set in a vague distant past, as most of Alissa’s also are. In this story, as she told it to Sema there under the indentation as the rain dried around them, there is an unnamed person with many eyes, with jointed legs and skillful pincers, who begins making patterns in large flat leaves.

“This person patterned the surfaces of some with open curves and closed curves, this person patterned the surfaces of others with straight lines and intersecting lines. They used nectars that would dry in crackling patterns that absorbed light, so that those with sharp eyes could discern the patterns they had made. They used sharp splinters grasped in their skillful pincers to damage the surfaces, in straight and intersecting patterns, in open and closed shapes, in small and large whorls, and the damaged places also shone differently in the light, so that those with sharp eyes could discern the patterns.”

The story rolled in rising and falling words, making patterns in time between the stems of the dark rich earth, between the thoughts of Alissa and the thoughts of Sema, carrying patterns first composed by persons not present, persons long since gone, persons unknown, down the long chain of transmission, from heart to heart, abdomen to abdomen, through all the instants of the earth.

“Some people said that the patterns in the leaves brought to their minds the patterns in the world, this curve in the track of dried nectar corresponding somehow to that curve of a stem against the sky; this closed pattern of damage on a bark chip corresponding somehow to the shape of a fallen petal on the ground. Others denied that such correspondences could exist, and others averred that they could exist for those with sharp eyes, but not for those without. Many days of discussions in the twilight were taken up with the patterns and the markings, and many words flowed through and between the people of that gathering.”

Sema listened to the story, having become still and open, as people do when listening to stories, thoughts steered and shaped by the words of the storyteller, by Alissa’s words there in the indentation, under the green and rising stem.

“Some others tried to make their own marks and patterns on leaves, and even on bits of bark and the surfaces of common seeds, but none had as much skill in their pincers or mandibles. Some invented patterns of their own, patterns that brought to their minds specific times or places, specific people or even specific stories, but to others the patterns were indiscernible, or meaningless. Some made only simple spots, and some made long conjoined markings. In the darkness of the gathering twilight, the markings were indiscernible to all, and some of the marked leaves blew away in the wind, or were eaten by careless visitors.”

Alissa thought of the markings on the carefully-bundled fragments before them, and how they might have been made, and how they, or some few of them, had brought to mind the shapes of scent trails curving in the twilight.

The story continued, as Alissa made her chant and Sema sat in receptive stillness, their thought moving in correspondence in the ancient way.

In the story, the markings had proliferated, becoming more concrete and more abstract, and those with skillful pincers and sharp eyes had become different from those with ordinary pincers and ordinary eyes, until just when it seemed that the gathering might sunder, there had come a heavy rain.

“And in that rain it seemed that all of the marked leaves, and all of the marked seeds, and all of the marked fragments of bark, were washed away, or were wetted or shaken so that the marks were gone, or were eaten by those stranded. And that first person who had begun marking leaves was washed away, or overcome, and vanished and was gone. And the gathering was sorely pained by the heaviness of the rain, having spent too much time in the making of markings, straight and curved, open and closed, and too little time in preparing and locating shelters and indentations, above and away from the falls and the flows of the rain.”

The story was a cautionary tale, perhaps, against the folly of making markings to mirror the world, even the folly of making stories themselves into markings, and so a story that was not often told, because who would ever think to do that, under the stems and the trees and the wind, beside the rushing water and the still water?

After the story ended, the two, Alissa and Sema, teller and hearer, sat still as was appropriate. The air moved gently, and more of the remaining rain water sank into the deep earth.

“And yet,” said Sema after a time, “here are these marks here on these fragments, and they have not been washed off or eaten, and these curves have brought into your mind the curving of scent trails.”

Alissa moved her head and upper arms and made a sound of agreement.

“We do not do it here, but perhaps somewhere in another place, in another gathering, there are people who follow the other limb of the old story, and are even now making cunning marks of various kinds and types on more leaf fragments and bark fragments, variously bringing to mind stems and scent trails, and even names and stories.”

The idea made Alissa uneasy.

“If this curve brought to your mind a scent trail,” Sema ventured, “or brought to mind the rivulets of pale sand which brought to mind a scent trail, curving between the stems, then perhaps whoever made the marks had in their mind a particular scent trail when they did so.”

“A particular scent trail?”

“One curving between particular stems, laid down by a particular person, leading to and from specific particular places.”

“And then … causing these markings? Which then bring to mind those same places, when viewed in the light?”

Sema made a yes motion, or a perhaps motion. Alissa found the idea baffling and thrilling and worrying.

“The aged pale one came here looking for me, for me specifically!” she said, remembering it as though it was a story itself.

“Yes,” motioned Sema.

“Perhaps… if these markings were made with a specific particular scent trail in mind, perhaps that trail was one that began, or one that ended, here in the rich dark earth.”

“Here?” wondered Sema.

“Or perhaps not, perhaps this is all only dreaming. This curved mark could be anything, or nothing; these strange small patterns arrayed beside it could be nothing, or anything, dirt, wear, the chewing of grubs.”

“But then the elder would hardly have kept these fragments so carefully bundled,” pointed out Sema reasonably.

“Even so,” Alissa agreed, “and yet he left afterward, saying nothing, only leaving the bundle on the ground of the dark earth at the base of this stem.”

“Without even stopping for a story.”

“He already had too many, he said.”

“Too many stories. Who has too many stories?”

“A very old storyteller, or story gatherer, perhaps.”

“But if it is a scent trail that this marking brings to your mind, and it begins, or ends, here…”

“Yes?”

“Then the same scent trail ends, or begins, somewhere else.”

Alissa could only agree. The marking was, looking at it more closely with more eyes, in the brighter light, long and sinuous, moving as though it were going around individual stems, and around larger obstructions, and from one place to another and to another, and eventually to, or from, some destination.

“Will you follow it?” Sema asked, as though that were an ordinary question.

“Follow it?” Alissa replied, in distress and great amazement.

Fling Eight

2022/11/04

NaNoWriMo 2022, Fling Six

As the sun moves, somewhere outside, the dusty sunbeam moves across the room, across the piles of books (there are never enough shelves!).

We can take up a particular book, and we can sit in comfort on the window seat or elsewhere, and open it. This book begins with metaphors.

“The cars are the wheels of the city,” the book says. “The city is the body of the car,” the book says.

Cities do not have wheels, and cars cannot be wheels. Cities do not have bodies, and cars are much smaller than cities.

If we expected something like truth or falsity from words, in books, this might be a problem. This might be something wrong.

The book says nothing; it is silent. Inside the book, on one of the flat sheets of fiber near the beginning (near the top of the pile of stacked and cut fibrous sheets), there are patterns of differential reflectivity, patterns of darkness, that are associated with the words, with the letters, with the symbols: “The cars are the wheels of the city.” The association between the reflectivity and the symbols is complex.

Why are these particular marks, lines of chemical, situated on this particular flat sheet of fibrous substance, out of all of the many sheets of fibrous substance in this pile, in this room, in this library, on this planet?

What does it mean to ask “Why”? Again we have this circularity, this difficulty that words can easily talk about anything but words, that language can agilely juggle any subject but language and truth. But with what can we juggle language and truth, if things cannot juggle themselves?

“The cars are the wheels of the city.” These patterns on the page are related to properties of the mind of whoever put them there. These patterns on the page are related to other properties of the mind of whoever reads them. Because this is no different from “The cat is on the mat,” or “The sum of two odd numbers is an even number,” we have no particular problem with metaphors, we have (so far) no need for a special explanation.

Even a simple truth (or falsehood) is related to everything else around it, every facet of its conception, its writing, its reading, its comprehension, in impossibly complex ways, in ways that would take a lifetime even to begin to understand. A complex metaphor, a figure of speech, an allegory, is related to everything else around it, in ways that would take a lifetime even to begin to understand.

But we have established that it is entirely possible, indeed it is likely inevitable, to act and experience without full understanding.

The sun is warm, even hot, through the dusty windows. Whether or not we know what it means for light to “come through” a window, or what “a window” is. Or “warm”. Or “light”.

The book says, “Sharp metal softly pierces and separates tissue”. We shudder at the thought, or at a thought that appears after, and perhaps because, we read the phrase. We shudder even if we cannot explain what “sharp metal” is; what counts as “metal”, and what counts as “sharp”? Are these subjective or objective categories? Is there a useful distinction between “subjective” and “objective”? What makes an act of piercing “soft”? Without being able to say, we imagine the separating of tissue, perhaps the unmentioned welling of blood, and we shudder.

So it is also entirely possible, indeed it is likely inevitable, to be moved by, to be changed by, words, by language, without full understanding.

I close my eyes and turn my face toward the window, and the sunlight hits my eyelids, warms my face, and fills my vision with warm living redness. It is bright, even if I cannot explain to you what “bright” means. Even if, as seems inevitable, I cannot explain to you what “red” means.

This one page of this one book could occupy a lifetime. There is no hope of fully experiencing, internalizing, understanding, even one shelf of the books of this room, even one pile (there are never enough shelves). We can function without full understanding. But what dangers does that entail? (What are “dangers”? What is it for a thing to “entail” another thing? What is a “thing”?)

From another direction (we turn our face toward the interior of the room; now the sunlight hits the back of our head, and our face is cooler, the redness less intense, more black). When we feel certain things, when we have certain physiological properties, we tend to make certain sounds. There are correlations between certain sounds and certain marks, certain patterns of marks, made with reflectivity changes on fibrous sheets.

(Another book, whose title (what is a title?) is “Empire of Dreams”, and whose author is “Giannina Braschi”, says “When I plunge into thought, I walk at the foot of the wind.” The wind has no foot. One cannot plunge into thought. These are all metaphors. Even “The wind has no foot” is a metaphor. All language, perhaps, is metaphor, because no language is literally true. But what does “literally true” mean? Language cannot agilely juggle itself.)

The light is warm on my arm, on the side of my head, on my leg. What does it mean for light to come through the window? When we see light coming through a window, we tend to say “the light is coming through the window”. What is light? What is “light”? Without light, we cannot see; without sight, can we know? Literally, yes. Metaphorically, no.

Light is the mother of knowledge. (The cars are the wheels of the city.) Light is the positive, light is motion and life and progress, knowledge and understanding. “[T]he windows give their light generously to the air,” says the book. Darkness is stillness, potential, ignorance and innocence. (Or guilt?)

From another direction. I cannot touch you, except as I can touch you with my words. I cannot comfort you, inspire you, understand you, or help you to understand me, except as I can do this with my words, because I exist for you only as words. Words cannot be constrained to literal truth, or they could not do the things that we require them to do.

(Ironically enough, the original meaning of “literal” is “having to do with letters”, so only words can convey, contain, constitute, literal truth.)

It is not necessary to think all of these thoughts. We can easily stretch out in the sun on the window seat, with a book, and read, and nap, without understanding how any of these things work (so much to know about books! About naps!).

Fling Seven

2022/11/04

NaNoWriMo 2022, Fling Five

“So you had another fight?”

“Not a fight really, just, I don’t know…”

“A misunderstanding?” Colin grinned. He’d twitted me about using that word before.

I sighed and helped him get down a particularly steep part of the rocky bank. Colin had some hormonal kind of thing, and even though he was older than me, nearly twenty-five, he looked mostly like a little kid. A little kid with a disturbingly knowing face, maybe, but then lots of kids have disturbingly knowing faces, right?

“I don’t know. That’s the problem! Does she understand? Does she even listen to what I say? It’s like I say one thing, and she hears something completely different.”

Colin picked up a stick from the ground and chucked it off the path, such as it was, in the direction of the river.

“She’s not stupid,” he said, “she understands words just fine.”

“Except when I say them,” I said, sighing again I guess. It’s not like I liked talking about this stuff, but it was happening, and Colin was there. Being who he was, and the whole hormonal thing or whatever, he was kind of outside of society, looking in at it in some kind of amused objectivity, and he could see things sometimes that nobody else could, from being too close or whatever.

“Maybe you aren’t saying what you think you’re saying,” he said unhelpfully.

“What?”

“You know, we’ve talked about this; it’s not like when you talk you’re teleporting some fact from your brain to hers. You make some mouth noises, and she experiences some ear-tickles, and –“

“Come on,” I said, “if you think about it that way, you can never say anything! Gee, what kind of ear-tickles will she get if I make these mouth noises, and which synapses in her brain will fire? It’s not like anyone can know any of that. You have to just say things, just say what’s true.”

“Steve…,” he said in that annoying voice. We were coming down to the very edge of the river now, and the water was loud, but not loud enough to make it hard to talk. The river was full of rocks here, like it was everywhere, and the water piled up and splashed around and foamed between them, and the air was cool with spray and bursts of mist.

“What? I’m saying true things here, and you’re hearing them. It has to be simple, otherwise we couldn’t communicate at all. A baby doesn’t need to learn, like, graduate semiotics before it can say that it wants a bottle.”

Colin laughed, sounding like a kid laughing because he had a kid’s throat and mouth, but also sounding like an adult because he had an adult’s brain (a nerdy adult at that).

“Yeah, but you aren’t a baby, and you aren’t telling Kristen that you want a bottle. Boyfriend and girlfriend is way different from that.”

“I just want things to be simple.”

“Yeah, welcome to Earth, Steve-lad,” he said, sitting down carefully on a flat rock surrounded by little branches of swift water. He was, as usual, wearing a kid-sized version of like an Edwardian Moor-Walking Suit or some shit like that, looking crazily proper for someone sitting on a rock. “The way of a man with a maid is seldom simple,” and he sighed elaborately, like an actor over-acting a scene.

“Okay,” I said, “this time it was that she sent me this thing that she’d made, a cute 3D simulation thing with secret places in it, and you walk around and these little guys slowly hint where the secret places are, and it was pretty cool.”

Colin looked at me for a second, waiting for me to say something else I guess, and then said, “Yeah, and…?”

“And I don’t know!” I said, a little loud maybe, “I messaged her back saying that it was very cool, and she was all offended or upset or something!”

“What exactly did you say, Steve-o?”

“I said, like, ‘that’s cool!’ I think; which it was! It was a compliment.”

“Not exactly gushing, that,” Colin said.

“Do I have to be gushing? Do I have to say that everything she does is just the most amazing thing ever?”

“It sounds like she probably worked pretty hard on it.”

“I know! She must have! But right when she sent it I was in the middle of tuning my rig, and I didn’t have a chance to go into it deep, and before I got time, she was already all huffy, and I don’t know. I said it was cool like as soon as she sent it.”

“Look, Steve,” Colin said in a serious kid-voice, “think of it from her point of view, she worked hard on –“

“I know, I should have turned off the rig instantly, wasted half an hour’s tuning –“

He laughed annoyingly on the rock.

“Or you could have told her sweetly that it looked cool, and you’d look at it as soon as you were done with the tuning, or you could have not said anything until you were done, or…”

I know I sighed then. “But those would have made her huffy, too. Isn’t she more important than the tuning thing? Or why do I take so long to get back to her?”

“So you want credit because you said that it was cool, and that was a compliment, and you said it fast, right?”

“That would have been fair.”

“And even though that was what you said, she heard something else?”

The river burbled louder and softer between the rocks, nice and simple, no chance of misunderstanding.

“Yeah. She must have heard, I don’t know, that I didn’t like it, or didn’t respect her, or something.”

“Really? She reacted just like you’d said ‘This sucks’ or ‘You’re dumb’ or like that?”

“No, no, no…” That was the thing about talking to Colin, I think; that when you said something, he heard exactly what you said, and asked you questions about it, and you could realize that what you said wasn’t exactly what you meant. Rather than like getting all huffy and insulted.

“This is where we start singing ‘Why Can’t A Woman, Be More Like A Man?’, right?” Colin loves all these weird old books and movies and stuff. He’d shown me and Kristen that Professor Higgins movie on the rota the other week; it was pretty good, if weird.

I laughed, which felt good. “So I should just apologize to her or whatever, right?” I said, resigning myself.

“Yeah, you could. Never hurts.”

“How do I keep it from happening again?”

“You don’t, Stevie, Steve-man, Steveorino,” he said helpfully, “but if you remember that she will hear whatever you say as Kristen, not as Steve or Colin would, it’ll help.”

“Like I know how it is to hear stuff as Kristen.”

“Well, yeah, that’s the challenge. Be easy on her, and on yourself. It’s all good.”

And that’s how conversations with Colin tended to end. It’s all good. Just keep on keepin’ on.

Good advice I guess.

Fling Six

2022/10/20

I hear they’re calling it “Jazz Cabbage”


If and only if you share my neural architecture, I highly recommend taking a couple of doses of a nice THC edible just about an hour before any dentist appointment involving lots of pain (i.e. any dentist appointment). You may need to hang around in town for a little extra time after, to make sure you’re safe to drive home, but it’s well worth it (even if you accidentally have to eat two extra scoops of ice cream, but that’s another thing).

In my extensive experience (today), I find that it (the THC) has two complementary effects (man, either of those words could be spelt wrong):

Firstly, it drastically shortens the memory of pain. Or at least this was one of the deep insights that I had before the cannabinoids (oh, c’mon WordPress, that’s not mispelt!) started to wear off (and I had the thought “how sad, that these deep insights may be lost when normality returns!”): that most of the suffering from physical pain comes from the memory of the pain, not from the pain itself (there may have been other insights, that I’ve forgotten).

So when the hygienist jabs the spinning drill-head into one’s gums and presses it in (“Hm, you’ve got some bleeding on probes here”), one is like “heh, some pain!” like you just saw a (brief bright) shooting star, but a moment later it’s gone, and not a big deal, and pretty much forgotten (more than a real shooting star would be).

Secondly, it distances one from whatever it might be that is experiencing whatever pain is left. When there are big flares of pain, one experiences it as a sort of label (like one might see a large area of purple), but with the emotional content more like “Whoa, looked like that hurt a lot, poor body!”. Even when there was enough pain that the body winced or twitched or whatever, one was just observing it objectively, thinking, “looks like that hurt really a lot, tsk”, rather than getting upset about it.

I think my body’s reactions to the pain were perhaps, guessing, about half what they usually are. So there was still the initial motion / wince, but that slipped quickly out of memory (maybe the body, per se, doesn’t have much in the way of memory? that could be an insight) and so the physiological effects died down again quickly, not being reinforced by consciousness-driven emotional effects (see how deep?).

Thinking about it, one major physiological effect that I associate with the dentist is a significant tightness across my chest and very tight breathing, and I have to consciously let go of those a few (several, many) times per session. I did notice that effect once this time, but similarly it wasn’t bothering me, I just casually noticed it, and consciously relaxed it away for the body’s comfort’s sake.

And that was all really nice. Another effect, or maybe a side-effect of the second effect, is that (as I think I’ve mentioned before) my attention gets considerably narrower (and possibly slightly deeper, but not as much deeper as narrower) than usual, and also it was sliding around all here and there, exploring other more or less nearby realities and planes of existence, and just checking in with this reality and the body now and then, not spending much time there.

So looking back it seemed like the torment part of the appointment was very brief (since, I guess, my consciousness and memory were mostly in other realities), but also occupied a pretty long and active time (all that exploring of alternate realities). Not alternate realities like hallucinatory hallucinogen things, but more attention or thought-region or abstract-concept things. Mostly pretty bliss-filled (because I live right?).

Normally I don’t notice one or two little squares of my Bedrock Bar (“Elevate your life”, about 5mg TAC per square), when all I’m doing is the usual stuff around the house. I took four once, I think, and I did notice that, but as I was just doing normal home things that day, the only effect was that I was aware of my consciousness bugging out to other realms and checking back in to see if I’d finished my sentence or whatever.

But apparently after a couple of hours, two squares (10mg TAC) is enough to make a dental appointment much more bearable than usual. Today. For me.

YMMV.

And all! :)

2022/10/08

Simulation Cosmology at the Commune

Last night I visited the Commune in my dreams. I’ve been there before, to the extent that that statement means anything; it’s a great dim ramshackle place full of omnigendered people and spontaneous events and kink and music and general good feeling. I’d love to get there more regularly (and in fact this time one of the regulars teased me for always saying how at home I felt, and yet only visiting once in a while; I said that it wasn’t that simple [crying_face_emoji.bmp]).

As well as having ice cream, I took part in a spirited discussion about whether this reality is just a simulation running in some underlying reality, and whether that underlying reality is also a simulation, and then eventually the question of whether that series has to eventually end in some reality that isn’t a simulation, or if it can just go on forever. (The question of whether we might be in, not a simulation, but a dream, and the extent to which that’s a different thing anyway, didn’t arise that I recall.)

I remember trying to take a picture with my phone, of some book that was interesting and relevant to the discussion, and of course it was a dream so the phone camera would only take a picture a few seconds after I pushed the button, and it was frustrating. (Also there was some plot about someone releasing very small robots to find and blow up the dictator of a foreign country, recognizing them by their DNA, but that probably counts as “a different dream”, to the extent that that means anything.)

Then after waking up, I took a lovely walk outside in the sun and 50-something air, and it was lovely, and I thought thoughts and things. So now I am writing words!

Anything that is consistent with all of your (all of my, all of one’s) experience so far, is true, in the sense that it is true for someone out there in the possible worlds who is exactly identical to you in experience, memory, and so on. (Which is in a sense all that there is.) And so it would seem mean or something to say it wasn’t true.

So there are many incompatible things that are true; it is true that you live in a simulation running on a planet-size diamond computer created by intelligent plants, and it is also true that you live in a simulation running on the equivalent of a cellphone owned by a golden-thewed Olympian deity who does chartered accountancy in his spare time.

It is probably also true that you don’t live in a simulation at all, but in a free-floating reality not in any useful sense running on any underlying other reality, created by some extremely powerful being from the outside, if that can be made logically coherent. And also that you live in a non-simulation reality that just growed, so to speak, without external impetus.

(I notice that I keep writing “love” for “live”, which is probably an improvement in most ways.)

The “if that can be made logically coherent” thing is interesting; in my undergraduate thesis I had to distinguish between “possible worlds” and “conceivable worlds”, because I needed a word for a category of worlds that weren’t actually possible (because they contained Alternate Mathematical Facts, for instance) but were conceivable in that one could think vaguely of them; otherwise people would turn out to know all necessary truths, and clearly we don’t.

So now given this liberal notion of truth that we’re using here, are the negations of some necessary truths, also true? Is there a version of you (of me) that lives in a world in which the Hodge Conjecture is true, and another in which it is false? Not sure. Probably! :)

That is, given that in the relevant sense my current experience (this present moment, which is all that exists) doesn’t differentiate between Hodge and not-Hodge, there’s no reason to say either Hodge or not-Hodge, and therefore neither Hodge nor not-Hodge is any truer than the other.

Eh, what?

It’s an interesting question what would happen if I got and thoroughly understood a correct proof that, say, not-Hodge. Would there at that point no longer be any of me living in a universe in which Hodge? Given that I can (think that I) have thoroughly understood something that is false, there would still be me-instances in worlds in which Hodge, I’m pretty sure. Which kind of suggests that everything is true?

Given all that, it seems straightforwardly true for instance that you (we) live in a simulation running in a reality that is itself a simulation running in a … and so on forever, unlikely as that sounds. Is there some plausible principle that, if true, would require that the sequence end somewhere? It sort of feels like such a principle ought to exist, but I’m not sure exactly what it would be.

It seems that if there is anything in moral / ethical space (for instance) that follows from being in a simulation, it would then be difficult and/or impossible to act correctly, given that we are both in and not in a simulation. Does that suggest that there isn’t anything moral or ethical that follows from being in a simulation? (I suspect that in fact there isn’t anything, so this would just support that.)

It’s true that you can communicate with the underlying reality that this reality is running on, by speaking to a certain old man in Toronto. It’s also true that you can communicate with that underlying reality by closing your eyes and thinking certain words that have long been forgotten almost everywhere. You can even be embodied in a physical body in that underlying reality! You could make your way down the realities, closer and closer to the source (which is also an infinite way off!)!

If you were to arrange to be downloaded into an individual body in the underlying reality, would you want the copy of you in this reality to be removed at the same time? That would sort of seem like suicide; on the other hand leaving a you behind who will discover that they didn’t make it to the underlying reality, might be very cruel. Perhaps the left-behind you could be (has been) flashy-thing’d, per Men in Black.

Another thing that appears to be true: it’s very hard to use any of these AI things to create an image that accurately reflects or elicits or even is compatible with my dreams of visiting the Commune and having ice cream! Not sure what I’m doing wrong. Here is one that I like in some way, even though it doesn’t really reflect the dream. (The center divider is part of the image; it’s not two images.) Enjoy!

2022/09/03

Those Born in Paradise

My old Jehovah’s Witness friend, from all those Saturdays ago, stopped by this morning! He says he’s just started doing door-to-door work again since the Pandemic started. He had with him a young Asian man, as they do, always traveling in pairs to avoid temptation and all.

As well as catching up on life and all, and him showing me the latest jay doubleyou dot org interactive Bible teachings and stuff, we talked a little about religion and philosophy.

He talked about how Jehovah has a name (“Jehovah”) as well as various titles (“God”, “Father”, etc), just like people do. (I didn’t ask where the name came from, although I am curious.) He said that, as with humans, Jehovah has a name because Jehovah is a person. I asked what that meant, and it came down to the idea that Jehovah has “a personality”. I tried to ask whence this personality came, and whether Jehovah could have had a different personality, but that was apparently a bit too advanced.

They claimed that one of Jehovah’s personality traits is humility, and this … surprised me. Their evidence for this was two pieces of Bible verse, one which has nothing whatever to do with humility, and the other being Psalms 18:35, which the KJV renders as:

Thou hast also given me the shield of thy salvation: and thy right hand hath holden me up, and thy gentleness hath made me great.

but the JW’s favorite translation, the New World Translation has as:

You give me your shield of salvation,
Your right hand supports me,
And your humility makes me great.

Given all of the contrary evidence, about being jealous and wrathful and “where were you when the foundations of the Earth were laid?”, I was not convinced of the humility thing, and we sort of dropped it.

(The Hebrew is apparently “עַנְוָה” (wheee, bidirectional text!), which is variously translated as either “gentleness” or “humility” or “meekness”, with suggestions of “mercy”; imho “gentleness” makes more sense here, as I don’t know by what mechanism God’s humility would lead to David’s greatness, whereas God being gentle and merciful (about David’s flaws) is a better candidate.)

Anyway :) what I really wanted to talk about was the thing I’ve alluded to before, the puzzle where, in the JW theory, once we (well, the good people!) are in the Paradise Earth, there is still free will, and there is still sin, at a presumably small but still non-zero rate, and as soon as the sinner sins in their heart (before they can hurt anyone else) they just cease to be.

(I wrote a microfiction on this theme here, and it’s also a plot element in the 2020 NaNoWriMo novel. Just by the way. :) )

“Those Born in Paradise”, made with MidJourney of course

My concern with this JW theory was that, given eternity and free will, everyone will sin eventually, and so the Paradise Earth (and even Heaven, assuming the 144,000 are also like this, I’m not sure) will slowly slowly ever so slowly empty out! Uh oh, right?

But in talking to my JW friend (who opined that at least people wouldn’t sin very often, even though as I pointed out Adam and Eve were in roughly the same circumstances and they sinned like two hours in), it turns out that there is still birth on Paradise Earth!

That had not occurred to me. He was quick to point out that there wouldn’t be enough birth to make the place overcrowded (perhaps that’s something that lesser doubters bring up?). I said that sure, I guess there’s just enough to make up for the rate of insta-zapped sinners! (I did not actually use the term “insta-zapped”.)

So that solves that puzzle. It does seem inevitable that eventually the only people will be people who were born in the Paradise Earth (or heaven?), and who therefore didn’t have to go through the whole “world dominated by Satan” phase, but only learn about it in History class or something.

Which seems kind of unfair to the rest of us! But there we are. As I say, some interesting stories to be written in that setting.

Neither my JW friend nor the younger person he was going door-to-door with seemed entirely comfortable with my theory, even though it’s the obvious consequence of their beliefs. I hope I didn’t disturb their faith at all, hee hee. (I like to think that there is some sort of warning next to my address in their list of people to visit, not to send anyone unsteady in their faith; it’s not very likely, but I like to think it anyway.)

2022/08/29

Yes, works made with an AI can be copyrighted.

In fact in most cases works made with an AI, just like works made with a typewriter or a paintbrush or Photoshop, are copyrighted by the human who created them, the moment that they are “fixed” (to use the wording of the Berne Convention). I’m writing this page mostly to address the many statements to the contrary that are all over the web, and that people keep posting on the MidJourney Discord and so on, so that I can like link to this page rather than typing it in yet again every time someone says it.

But I read that a series of rulings found otherwise!

Yes, sadly, I’m sure you did. Here are just a few samples of this misinformation (one is especially disappointed in Smithsonian Magazine, ffs). But if one reads beyond the misleading headlines, these are all about two decisions by the U.S. Copyright Office in the case of Thaler, and (tl;dr) all those decisions do is reject the theory that an AI can create something as a “work for hire”, with the person using the AI thereby getting the copyright to it as the “employer”. Since in US law only persons or “individuals”, not including software or computers, can be “creators in fact” of a creative work, the Office rejected that theory.

The decisions in the Thaler case most definitely do not say that a person who uses an AI program in the ordinary way, just like a person who uses a paintbrush in the ordinary way, doesn’t come to own the copyright to that thing automatically, in the ordinary way (as nicely explained here). And in various other countries, the copyright laws explicitly account for things generated by or with an AI, and acknowledge that copyright applies to them (see for instance this short survey).

(If you’re here just because someone posted you this link when you said that images made using AI can’t be copyrighted, that’s all you need to know, but feel free to read on etc!)

But when a person uses an AI, all the creativity is in the AI, so the person shouldn’t get a copyright!

No court case that I know of, in any country, has ever ruled this way. One might as well argue (and people did, when the technology was new) that there is no creativity in a photograph, since all you do is point the camera and push a button. And yet it’s (now) uncontroversial that people get copyright in the photographs that they take.

It’s easy to take a picture, but a good photographer picks a camera and lenses, decides where to point it and in what light to press the button, and then decides which images to keep. It’s easy to use an AI to make a picture, but a good user of an AI image tool picks an engine and settings, decides what prompt(s) to give it and with what switches to invoke it, and then decides which images to keep. I think those are very analogous; you may disagree. The courts have not yet weighed in as of this writing, but it seems to me that denying copyright because a particular kind of software was involved in a certain way would be a mess that courts would not want to wade into.

If there hasn’t been a positive ruling in the US, though, it could turn out…

I agree: since the law doesn’t explicitly say that a person using an AI to make an image has the copyright in the image, and since the “all the creativity is in the AI” argument does exist, it’s not impossible that some US court could find that way. So one might not want to risk anything really important on that not happening.

What’s up with Thaler, anyway?

Thaler is, well, an interesting character, it seems. He believes that some AI programs he has created have had “near death experiences”, and he has attempted to obtain patents with an AI program as the inventor, in addition to the attempts to cast an AI as a work-for-hire employee for copyright purposes, as mentioned above. An individual before his time, perhaps. Perhaps.

Update: What if the ToS of a service says…

As a couple of people asked / pointed out after I posted this, sometimes the Terms of Service on a site where you can create stuff say or imply that you do not own the copyright to the stuff, but that the site does, and that it grants you some sort of license.

The MidJourney ToS, in fact, currently says that “you own all Assets you create with the Services” with a few exceptions including ‘If you are not a Paid Member, Midjourney grants you a license to the Assets under the Creative Commons Noncommercial 4.0 Attribution International License (the “Asset License”).’ This is a bit terse and ambiguous, but the obvious interpretation is that in that case MidJourney owns the Assets, and grants the user a certain CC license.

As far as I know, it isn’t well-established in IP law whether a ToS can unilaterally change who owns what like this; if anyone knows more, I’d be interested! But in any case, this sort of thing still says or implies that someone owns the rights, so it doesn’t directly impact the overall subject here.

Update 2: Show me an actual AI artwork that is registered with the US Copyright office!

Funny you should ask! :)

This is boring, post a picture!

A strange surreal owl-thing or something
2022/08/14

Is it plagiarism? Is it copyright infringement?

So I’ve been producing so many images in Midjourney. I’ve been posting the best ones (or at least the ones I decide to post) in the Twitters; you can see basically all of them there (apologies if that link’s annoying to use for non-Twitterers). And an amazing friend has volunteered to curate a display of some of them in the virtual worlds (woot!), which is inexpressibly awesome.

Lots of people use “in the style of” or even “by” with an artist’s name in their Midjourney prompts. I’ve done it occasionally, mostly with Moebius because his style is so cool and recognizable. It did imho an amazing job with this “Big Sale at the Mall, by Moebius”:

“Big Sale at the Mall, by Moebius” by Midjourney

It captures the coloration and flatness characteristic of the artist, and also the feeling of isolation in huge impersonal spaces that his stuff often features. Luck? Coolness?

While this doesn’t particularly bother me for artists who are no longer living (although perhaps it should), it seems questionable for artists who are still living and producing, and perhaps whose works have been used without their permission and without compensation in training the AI. There was this interesting exchange on Twitter, for instance:

The Midjourney folks replied (as you can I hope see in the thread) that they didn’t think any of this particular artist’s works were in the training set, and that experimentally adding their name to a prompt didn’t seem to do anything to speak of; but what if it had? Does an artist have the right to say that their works which have been publicly posted, but are still under copyright of one kind or another, cannot be used to train AIs? Does this differ between jurisdictions? Where they do have such a right, do they have any means of monitoring or enforcing it?

Here’s another thread, about a new image-generating AI (it’s called “Stable Diffusion” or “Stability AI”, and you can look it up yourself; it’s in closed beta apparently and the cherrypicked images sure do look amazing!) which seems to offer an explicit list of artists, many still living and working, that it can forge, um, I mean, create in the style of:

So what’s the law?

That’s a good question! I posted a few guesses on that thread (apologies again if Twitter links are annoying). In particular (as a bulleted list for some reason):

  • One could argue that every work produced by an AI like this, is a derivative work of every copyrighted image that it was trained on.
  • An obvious counterargument would be that we don’t say that every work produced by a human artist is a derivative work of every image they’ve studied.
  • A human artist of course has many other inputs (life experience),
  • But arguably so does the AI, if only in the form of the not-currently-copyrighted works that it was also trained on (as well as the word associations and so on in the text part of the AI, perhaps).
  • One could argue that training a neural network on a corpus that includes a given work constitutes making a copy of that work; I can imagine a horrible tangle of technically wince-inducing arguments that reflect the “loading a web page on your computer constitutes making a copy!” arguments from the early days of the web. Could get messy!
  • Perhaps relatedly, the courts have found that people possess creativity / “authorship” that AIs don’t, in at least one imho badly-brought case on the subject: here. (I say “badly-brought” just because my impression is that the case was phrased as “this work is entirely computer generated and I want to copyright it as such”, rather than just “here is a work that I, a human, made with the help of a computer, and I want to assert / register my copyright”, which really wouldn’t even have required a lawsuit imho; but there may be more going on here than that.)
  • The simplest thing for a court to decide would be that an AI-produced work should be evaluated for violating copyright (as a derivative work) in the same way a human-produced work is: an expert looks at it, and decides whether it’s just too obviously close a knock-off.
  • A similar finding would be that an AI-produced work is judged that way, but under the assumption that AI-produced work cannot be “transformative” in the sense of adding or changing meaning or insights or expression or like that, because computers aren’t creative enough to do that. So it would be the same standard, but with one of the usual arguments for transformativity ruled out in advance for AI-produced works. I can easily see the courts finding that way, as it lets them use an existing (if still somewhat vague) standard, but without granting that computer programs can have creativity.
  • Would there be something illegal about a product whose sole or primary or a major purpose was to produce copyright-infringing derivative works? The DMCA might possibly have something to say about that, but as it’s mostly about bypassing protections (and there really aren’t any involved here), it’s more likely that rules for I dunno photocopiers or something would apply.

So whew! Having read some of the posts by working artists and illustrators bothered that their and their colleagues’ works are being used for profit in a way that might actively harm them (and having defended that side of the argument against one rather rude and rabid “it’s stupid to be concerned” person on the Twitter), I’m now feeling some more concrete qualms about the specific ability of these things to mimic current artists (and maybe non-current artists whose estates are still active).

It should be very interesting to watch the legal landscape develop in this area, especially given how glacially slowly it moves compared to the technology. I hope the result doesn’t let Big AI run entirely roughshod over the rights of individual creators; that would be bad for everyone.

But I’m still rather addicted to using the technology to make strange surreal stuff all over th’ place. :)

2022/05/31

A couple of books I’ve read some of

To update this post from (gad) three months ago on the book “Superintelligence”, I’m finally slightly more than halfway through it, and it has addressed pretty reasonably my thoughts about perfectly safe AIs, like for instance AIDungeon or LaMDA, that just do what they do, without trying to optimize themselves, or score as many points as possible on some utility function, or whatever. The book calls such AIs “Tools”, and agrees (basically) that they aren’t dangerous, but says (basically) that we humans are unlikely to build only that kind of AI, because AIs that do optimize and improve themselves and try to maximize the expected value of some utility function, will work so much better that we won’t be able to resist, and then we’re all doomed.

Well, possibly in the second half they will suggest that we aren’t all doomed; that remains to be seen.

Conscious experience is unitary, parallel, and continuous

Another book I’ve read some of, and in this case just a little of, is Daniel Ingram’s “Mastering the Core Teachings of the Buddha” (expanded 2nd edition). There’s a page about it here, which includes a link to a totally free PDF, which is cool, and you can also buy it to support his various good works.

An internal Buddhist group at The Employer has had Daniel Ingram join us for a couple of videoconferences, which have been fun to various extents. There’s a lot that one could say about Ingram (including whether he is in fact a Buddhist, what one thinks of him calling himself “The Arahant Daniel M. Ingram” on the cover, etc.), but my impression of him is that he’s a very pragmatic and scientific (and energetic) sort of person, who has a project to study the various paths to things like enlightenment in the world, in a scientific sort of way, and figure out what paths there are, which ones work best for which kinds of people, what stages there are along the paths, what works best if one is at a particular point on a certain path, and so on. There is apparently a whole Movement of some sort around this, called Pragmatic Dharma (I was going to link to something there, but you can Web Search on it yourself at least as effectively).

I’m not sure that this is an entirely sensible or plausible project, since as a Zen type my instinctive reaction is “okay, that’s a bunch of words and all, but better just sit”. But it’s cool that people are working on it, I think, and it’ll be fun to see what if anything they come up with. Being both pragmatic and moral, they are all about the Kindness and Compassion, so they can’t really go far wrong in some sense.

Having started to read that PDF, I have already a couple of impressions that it’s probably far too early to write down, but hey it’s my weblog and I’ll verb if I want to.

First off, Ingram says various things about why one would want to engage on some project along these lines at all, and I get a bit lost. He says that in working on morality (by which he means practical reasoning in the relative sphere, being kind and compassionate and all that) we will tend to make the world a better place around us, and that’s cool. But then the reasons that one would want to work on the next level after morality, which is “concentration”, are all about vaguely-described jhanas, as in (and I quote):

  • The speed with which we can get into skillful altered states of awareness (generally called here “concentration states” or “jhanas”).
  • The depth to which we can get into each of those states.
  • The number of objects that we can use to get into each of those states.
  • The stability of those states in the face of external circumstances.
  • The various ways we can fine-tune those states (such as paying attention to and developing their various sub-aspects)

Now it appears to me that all of these depend on an underlying assumption that I want to get into these “states” at all; unless I care about that, the speed with which I can do it, the depth, the number of objects (?), the stability, and the fine-tuning, don’t really matter.

I imagine he will say more about these states and why they’re desirable later, but so far it really just says that they are “skillful” (and “altered”, but that seems neutral by itself), and “skillful” here just seems to be a synonym for “good”, which doesn’t tell us much.

(In other Buddhist contexts, “skillful” has a somewhat more complex meaning, along the lines of “appropriate for the particular occasion, and therefore not necessarily consistent with what was appropriate on other occasions”, which a cynic might suggest is cover for various Official Sayings of the Buddha appearing to contradict each other seriously, but who wants to be a cynic really?)

It seems that if the jhanas are so fundamental to the second (um) training, he might have made more of a case for why one would want to jhana-up at the point where the training is introduced. (One could make the same sort of comment about Zen types, where the reason that you’d want to meditate is “the apple tree in the side yard” or whatever, but those types make no pretense at being scientific or rational or like that.)

In the Third Training, called among other things “insight”, Ingram talks about becoming directly aware of what experience is like, or as he summarizes it, “if we can simply know our sensate experience clearly enough, we will arrive at fundamental wisdom”. He then talks about some of the ways that he has become more aware of sensate experience, and I am struck by how very different from my own observations of the same thing they are. Let’s see if I can do this in bullets:

  • He starts with basically a “the present moment is all that exists” thing, which I can get pretty much entirely behind.
  • He says that experience is serial, in that we can experience only one thing at a time. He describes focusing on the sensations from his index fingers, for instance, and says “[b]asic dharma theory tells me that it is not possible to perceive both fingers simultaneously”.
  • Relatedly, he says that experience is discrete, and that one sensation must fade entirely away before another one can begin. At least I think he’s saying that; he says things like “[e]ach one of these sensations (the physical sensation and the mental impression) arises and vanishes completely before another begins”, and I think he means that in general, not just about possibly-related “physical sensations” and “mental impressions”. He also uses terms like “penetrating the illusion of continuity” (but what about the illusion of discontinuity?).
  • And relatedly relatedly, he thinks that experience is basically double, in that every (every?) “physical sensation” is followed by a “mental impression” that is sort of an echo of it. “Immediately after a physical sensation arises and passes is a discrete pulse of reality that is the mental knowing of that physical sensation, here referred to as ‘mental consciousness'”.

Now as I hinted above, the last three of these things, that consciousness is serial, discrete, and double, do not seem to accord at all with my own experience of experience.

  • For me, experience is highly parallel; there is lots going on at all times. When sitting in a state of open awareness, it’s all there at once (in the present moment) in a vast and complex cloud. Even while attending to my breath, say, all sorts of other stuff is still there, even if I am not attending to it.
  • Similarly, experience is continuous; it does not come in individual little packets that arise and then fade away; it’s more of an ongoing stream of isness (or at least that is how memory and anticipation present it, in the singular present moment). If thoughts arise, and especially if those thoughts contain words or images, the arising and fading away of those feel more discrete, but only a bit; it’s like foam forming on the tops of waves, and then dissolving into the water again.
  • And finally, there’s no important distinction to be had between “physical sensations” and “mental impressions”; there is only experience happening to / constituting / waltzing with mind. If there were a mental impression following each physical sensation, after all, how would one avoid an infinite regress, with mental impressions of mental impressions stretching out far into the distance? Something like that does happen sometimes (often, even) but it’s more a bug than a feature.

I suspect that some or most of all of these differences come because Ingram is talking about a tightly-focused awareness, where I am more of an open and expansive awareness kind of person, even when attending to the breath and all. If you really pinch down your focus to be as small as possible, then you won’t be able to experience (or at least be consciously aware of) both fingers at once, and you may manage to make yourself see only one sensation at a time in individual little packets, and you may even notice that after every sensation you notice, you also notice a little mental echo of it (which may in fact be the sum of an infinite series of echoes of echoes that with any luck converges).

This kind of tightly-focused conscious awareness goes well, I think, with what Ingram says about it being important to experience as many sensations per second as possible. He puts it in terms of both individual sensations, and vibrations, although the latter doesn’t really fit the model; I think he means something more like “rapid coming into existence and going out of existence” rather than a vibration in some continually-existing violin string.

He is enthusiastic about experiencing things really fast, as in

If you count, “one, one thousand”, at a steady pace, that is about one second per “one, one-thousand”. Notice that it has four syllables. So, you are counting at four syllables per second, or 4 Hertz (Hz), which is the unit of occurrences per second. If you tapped your hand each time you said or thought a syllable, that would be four taps per second. Try it! Count “one, one thousand” and tap with each syllable. So, you now know you can experience at least eight things in a second!

and this strikes me as really funny, and also endearing. But he takes it quite seriously! He says in fact that “that is how fast we must perceive reality to awaken”; I do wonder if he is going to present any scientific evidence for this statement later on. I’m sure it has worked well / is working well for him, but this seems like a big (and high-frequency) generalization. I don’t remember ol’ Dogen, or Wumen, or the Big Guy Himself, talking about experiencing as many things per second as possible, as a requirement. I guess I’ll see!

2022/05/23

The End of the Road is Haunted (and a bit more Ruliad)

Two different and completely unrelated topics (OR ARE THEY?) today, just because they’re both small, and hearkening back to the days when I would just post utterly random unconnected things in a weblog entry whose title was just the date (and noting again that I could easily do that now, but for whatever reason I don’t).

First, whatsisname Wolfram and his Ruliad. Reading over some of his statements about it again, it seems like, at least often, he’s not saying that The Ruliad is a good model of our world, such that we can understand and make predictions about the world by finding properties of the model; he’s saying that The Ruliad just is the world. Or, in some sense vice-versa, that the world that we can observe and experience is a tiny subset of The Ruliad, the actual thing itself, the one and only instantiation of it (which incidentally makes his remarks about how it’s unique less obviously true).

I’m not exactly sure what this would mean, or (as usual) whether it’s falsifiable, or meaningful, or useful, or true, in any way. My initial thought is that (at the very least) every point in the Ruliad (to the extent it makes sense to talk about points) has every possible value of every possible property at once, since there is some computation that computes any given bundle of properties and values from any given inputs. So it’s hard to see how “beings like us” would experience just one particular, comparatively immensely narrow, subset of those properties at any given time.

It might be arguable, Kant-like, that beings like us will (sort of by definition) perceive three dimensions and time and matter when exposed to (when instantiated in) the infinite variety of The Ruliad, but how could it be that we perceive this particular specific detailed instance, this particular afternoon, with this particular weather and this particular number of pages in the first Seed Center edition of The Lazy Man’s Guide to Enlightenment?

The alternative theory is that we are in a specific universe, and more importantly not in many other possible universes, and that we experience what we experience, and not all of the other things that we could experience, as a result of being in the particular universe that we are in. This seems much more plausible than the theory that we are in the utterly maximal “Ruliad of everything everywhere all at once”, and that we experience the very specific things that we experience due to basically mathematical properties of formal systems that we just haven’t discovered yet.

We’ll see whether Wolfram’s Physics Project actually turns out to have “vast predictive power about how our universe must work, or at least how observers like us must perceive it to work”. :) Still waiting for that first prediction, so far…

In other news, The End of the Road is Haunted:

These are in the cheapest one-credit mode, because I was in that mood. Also I kind of love how the AI takes “haunted” to mean “creepy dolls”.

2022/04/30

“The Ruliad”: Wolfram in Borges’ Library

I think people have mostly stopped taking Stephen Wolfram very seriously. He did some great work early in his career, at Caltech and the Institute for Advanced Study, and (with a certain amount of intellectual property mess) went on to create Mathematica, which was and is very cool.

Then in 1992 he disappeared into a garret or something for a decade, and came out with the massive A New Kind of Science, which got a lot of attention because it was Wolfram after all, but which turned out to be basically puffery. And a certain amount of taking credit for other people’s earlier work.

Being wealthy and famous, however, and one imagines rather surrounded by yes-folks, Wolfram continues in the New Kind of Science vein, writing down various things that sound cool, but don’t appear to mean much (as friend Steve said when bringing the current subject to my attention, “Just one, single, testable assertion. That’s all I ask”).

The latest one (or a latest one) appears to be “The Ruliad”. Wolfram writes:

I call it the ruliad. Think of it as the entangled limit of everything that is computationally possible: the result of following all possible computational rules in all possible ways.

It’s not clear to me what “entangled” could mean there, except that it’s really complicated if you try to draw it on a sheet of paper. But “the result of following all possible computational rules in all possible ways” is pretty clearly isomorphic to (i.e. the same thing as) the set of all possible strings. Which is to say, the set of all possible books, even the infinitely-long ones.

(We can include all the illustrated books by just interpreting the strings in some XML-ish language that includes SVG. And it’s probably also isomorphic to the complete graph on all possible strings; that is, take all of the strings, and draw a line from each one to all of the others. Or the complete graph on the integers. Very entangled! But still the same thing for most purposes.)
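(If that isomorphism feels abstract: “the set of all possible strings” is a perfectly concrete, countable thing. Here’s a minimal Python sketch, with a deliberately tiny illustrative alphabet, that enumerates every finite string, shortest first:

```python
from itertools import count, product

def all_strings(alphabet="ab"):
    """Yield every finite string over `alphabet`, shortest first."""
    for n in count(0):                      # lengths 0, 1, 2, ...
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

gen = all_strings()
print([next(gen) for _ in range(7)])
# → ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Run long enough, this walks through every book, every computer program, every “rule” — which is exactly why the object itself is so unhelpfully generic.)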

Now the set of all possible strings is a really amazing thing! It’s incomprehensibly huge, even if we limit it to finite strings, or even finite strings that would fit in a reasonably-sized bound volume.

And if we do that latter thing, what we have is the contents of the Universal Library, from Borges’ story “The Library of Babel”. As that story notes, the Library contains

All — the detailed history of the future, the autobiographies of the archangels, the faithful catalog of the Library, thousands and thousands of false catalogs, the proof of the falsity of those false catalogs, a proof of the falsity of the true catalog, the gnostic gospel of Basilides, the commentary upon that gospel, the commentary on the commentary on that gospel, the true story of your death, the translation of every book into every language, the interpolations of every book into all books, the treatise Bede could have written (but did not) on the mythology of the Saxon people, the lost books of Tacitus.

Borges — The Library of Babel

It also contains this essay, and A New Kind of Science, and every essay Wolfram will ever write on “the Ruliad”, as well as every possible computer program in every language, every possible finite-automaton rule, and to quote Wolfram “the result of following all possible computational rules in all possible ways.” (We’ll have to allow infinite books for that one, but that’s a relatively simple extension, heh heh.)
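Just how huge is the Library? Using the book format from Borges’ story (410 pages of 40 lines of 80 characters, drawn from 25 symbols), a couple of lines of Python give a sense of scale; the figures are from the story, and the code is just arithmetic:

```python
import math

# Book format from Borges' story: 410 pages x 40 lines x 80 characters,
# each character drawn from an alphabet of 25 symbols.
chars_per_book = 410 * 40 * 80   # 1,312,000 characters per book

# The number of distinct books is 25 ** 1,312,000 -- far too big to
# print in full, so we count its digits instead.
digits = math.floor(chars_per_book * math.log10(25)) + 1
print(digits)  # → 1834098: the count of books is ~1.8 million digits long
```

So “good luck finding it” (below) is rather an understatement.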

So, it’s very cool to think about, but does it tell us anything about the world? (Spoiler: no.) Wolfram writes, more or less correctly:

it encapsulates not only all formal possibilities but also everything about our physical universe—and everything we experience can be thought of as sampling that part of the ruliad that corresponds to our particular way of perceiving and interpreting the universe.

and sure; for any fact about this particular physical universe (or, arguably, any other) and anything that we experience, the Library of Babel, the set of all strings, the complete graph on all strings, “the Ruliad”, contains a description of that fact or experience.

Good luck finding it, though. :)

This is the bit that Wolfram seems to have overlooked, depending on how you read various things that he writes. The set of all strings definitely contains accurate statements of the physical laws of our universe; but it also contains vastly more inaccurate ones. Physicists generally want to know which are which, and “the Ruliad” isn’t much help with that.

Even philosophers who don’t care that much about which universe we happen to be in, still want correct or at least plausible and coherent arguments about the properties of formal systems, or the structure of logic, or the relationship between truth and knowledge, and so on; the Universal Library / “Ruliad” does contain lots of those (all of them, in fact), but it provides no help in finding them, or in differentiating them from the obviously or subtly incorrect, implausible, and incoherent ones.

There is certainly math that one can do about the complete graph over the set of all strings, and various subgraphs of that graph. But that math will tell you very little about the propositions that those strings express. It’s not clear that Wolfram realizes the difference, or realizes just how much the utter generality of “the Ruliad” paradoxically simplifies the things one can say about it.

For instance, one of the few examples that Wolfram gives in the essay linked above, of something concrete that one might study concerning “the Ruliad” itself, is:

But what about cases when many paths converge to a point at which no further rules apply, or effectively “time stops”? This is the analog of a spacelike singularity—or a black hole—in the ruliad. And in terms of computation theory, it corresponds to something decidable: every computation one does will get to a result in finite time.

One can start asking questions like: What is the density of black holes in rulial space?

It somewhat baffles me that he can write this. Since “the Ruliad” represents the outputs of all possible programs, the paths of all possible transition rules, and so on, there can be no fixed points or “black holes” in it. For any point, there are an infinite number of programs / rules that map that point into some other, different point. The “density of black holes in rulial space” is, obviously and trivially, exactly zero.
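The argument is easy to make concrete. Under my toy reading (not Wolfram’s formalism) where “points” are strings, among “all possible rules” there is always at least one — say, the rule that appends a character — that moves any given point somewhere else:

```python
# Toy model: "points" are strings, and one of the "all possible rules"
# is the rule that appends a character to whatever it is given.
def append_rule(point: str) -> str:
    return point + "x"

# No point is fixed under this rule, so no point can be a place where
# "no further rules apply" -- i.e., there are no black holes.
candidates = ["", "halt", "a very long seemingly-terminal state"]
print(all(append_rule(p) != p for p in candidates))  # → True
```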

He also writes, for instance:

A very important claim about the ruliad is that it’s unique. Yes, it can be coordinatized and sampled in different ways. But ultimately there’s only one ruliad.

Well, sure, there is exactly one Universal Library, one set of all strings, one complete graph on the integers. This is, again, trivial. The next sentence is just baffling:

And we can trace the argument for this to the Principle of Computational Equivalence. In essence there’s only one ruliad because the Principle of Computational Equivalence says that almost all rules lead to computations that are equivalent. In other words, the Principle of Computational Equivalence tells us that there’s only one ultimate equivalence class for computations.

I think he probably means something by this, well maybe, but I don’t know what it would be. Obviously there’s just one “result of following all possible computational rules in all possible ways”, but it doesn’t take any Principle of Computational Equivalence to prove that. I guess maybe if you get to the set of all strings along a path that starts at one-dimensional cellular automata, that Principle makes it easier to see? But it’s certainly not necessary.

He also tries to apply terminology from “the Ruliad” to various other things, with results that generally turn out to be trivial truths when translated into ordinary language. We have, for instance:

Why can’t one human consciousness “get inside” another? It’s not just a matter of separation in physical space. It’s also that the different consciousnesses—in particular by virtue of their different histories—are inevitably at different locations in rulial space. In principle they could be brought together; but this would require not just motion in physical space, but also motion in rulial space.

What is a “location in rulial space”, and what does it mean for two things to be at different ones? In ordinary language, two things are at different points in “rulial space” if their relationships to other things are not the same; which is to say, they have different properties. (Which means that separation in physical space is in fact one kind of separation in “rulial space”, we note in passing.) So this paragraph says that one human consciousness can’t get inside another one, because they’re different in some way. And although you might somehow cause them to be completely identical, well, I guess that might be hard.

This does not seem like a major advance in either psychology or philosophy.

Then he gets into speculation about how we might be able to communicate between “different points in rulial space” by sending “rulial particles”, which he identifies with “concepts”. The amount of hand-waving going on here is impressive; Steve’s plea for a falsifiable claim is extremely relevant. In what way could this possibly turn out to be wrong?

(It can, on the other hand, easily turn out to be not very useful, and I think so far it’s doing a good job at that.)

He also proceeds, hands still waving at supersonic speed, to outline a Kantian theory that says that, although “the Ruliad” contains all possible laws of physics, we seem to live in a universe that obeys only one particular set of laws. This, he says, is because “for observers generally like us it’s a matter of abstract necessity that we must observe general laws of physics that are the ones we know”.

What “observers like us” means there is just as undefined as it was when Kant wrote the same thing only with longer German words. He goes on like this for some time, and eventually writes:

People have often imagined that, try as we might, we’d never be able to “get to the bottom of physics” and find a specific rule for our universe. And in a sense our inability to localize ourselves in rulial space supports this intuition. But what our Physics Project seems to rather dramatically suggest is that we can “get close enough” in rulial space to have vast predictive power about how our universe must work, or at least how observers like us must perceive it to work.

which is basically just gibberish, on the order of “all we have to do is find the true physics text in the Universal Library!”.

It’s hard to find anyone but Wolfram writing on “the Ruliad” (or at least I haven’t been able to), but the Wolfram essay points to an arxiv paper “Pregeometric Spaces from Wolfram Model Rewriting Systems as Homotopy Types” by two authors associated with Wolfram Research USA (one also associated with Pompeu Fabra University in Barcelona, and the other with the University of Cambridge in Cambridge, and one does wonder what those institutions think about this). That paper notably does not contain the string “Ruliad”. :)

I may attempt to read it, though.

2022/04/29

God is not a source of objective moral truth

I mean, right?

I’ve been listening to various youtubers, as mentioned forgetfully in at least two posts, and some of them spend considerable time responding to various Theist, and mostly Christian, Apologists and so on.

This is getting pretty old, to be honest, but one of the arguments that goes by now and then from the apologists is that atheists have no objective basis for moral statements; without God, the argument goes, atheists can’t say that torturing puppies or whatever is objectively bad. Implicit, and generally unexamined, is a corresponding claim that theists have a source of objective moral statements, that source being God.

But this latter claim is wrong.

What is an objective truth? That is a question that tomes can be, and have been, written about, but for now: in general an objective truth is a statement that, once we’re clear on the meanings of the words, is true as a matter of fact; a statement on which there is a fact of the matter. If Ben and I can agree on what an apple is, which bowl we’re talking about, what it means to be in the bowl, and so on, sufficient to the situation, then “there are three apples in the bowl” is objectively true, if it is. If Ben insists that there are six apples in the bowl, and we discover that for some odd reason Ben uses “an apple” to refer to what we would think of as half an apple, we have no objective disagreement.

What is a moral truth? Again, tomes, but for now: a moral truth is (inter alia) one that provides a moral reason for action. A typical moral truth is “You should do X” for some value of X. In fact we can say that that (along with, say, “You should not do X”) is the only form of moral truth: no other fact or statement has moral bearing, unless it leads to a conclusion about what one should do.

(We will take as read the distinction between conditional and categorical imperatives, at least for now; we’re talking about the categorical imperative, or probably equally well about the “If you want to be a good person, you should X” conditional one.)

What would an objective moral truth look like, and where would it come from? We would have to be able to get to a fact of the matter about “You should do X” from things about which there are facts of the matter, modulo word meanings. The theist is almost certainly thinking that the argument is simple and looks like:

  • You should do what God wants,
  • God wants you to do X,
  • Therefore, you should do X.

Since we’re talking about whether the theist’s argument works, we stipulate that God exists and wants you (me, us, etc.) to do X for some X. And if we should do what God wants, we should therefore do X.

But is it objectively true that we should do what God wants?

If I disagree, and say that I don’t think we should do what God wants, the theist can claim that we differ on the meanings of words, and that what they mean by “should do” is just “God wants you to do”. But that’s not very interesting; under those definitions it’s just a tautology, and “you should do X” turns out not to be a moral truth, since “should do X” may no longer be motivating.

To get further, the theist will have to claim that “God wants you to do X” implies “You should do X” in the moral sense of “should”; that it’s objectively motivating. And it’s not clear how that would work, how that claim is any stronger than any other. A utilitarian can equally say “X leads to the greatest good for the greatest number” is objectively motivating, a rule-utilitarian can say that “X follows the utility-maximizing feasible rules” is objectively motivating, and so on.

(“You should do X because God will punish you if you don’t” can be seen as objectively motivating, but not for moral reasons; that’s just wanting to avoid punishment, so not relevant here.)

Why would someone think that “You should do what God wants you to do” is any more objectively true than “You should do what maximizes utility” or “You should do what protects your family’s honor”? I don’t find myself with anything useful to say about that; because they grew up hearing it, or they’ve heard it in Church every Sunday or whatever, I suppose?

So that’s that. See title. :) Really we probably could have stopped at the first sentence.