Posts tagged ‘philosophy’

2023/06/04

Foam on the Waves

“I’ve been taking this foam-on-the-waves notion to heart,” Corrins said. We were striding along the lanes bordering the College, as we do.

“Consciousness as epiphenomenon, you mean?”

“Yes, that. I ride along atop whatever it is that my brain and body are doing, observing in luxury, and telling myself stories after the fact about why they do what they do.”

“Without any responsibility, then?”

“Just so! I used to tell myself, as everyone does, that it was me controlling my body, steering it around like a bus; but now I accept that I am only a passenger, an observer and storyteller, and it is quite refreshing.”

“It seems very convenient for you, as you can’t be blamed for anything that your body might get up to as you observe,” I remarked, thinking of various things my classmate’s body had been up to at various times.

Corrins shrugged happily. “My body isn’t about to do anything terrible, he’s a good chap. I’ve known him for years. Doesn’t need me to exercise control, which is good since I cannot, any more than the foam controls the waves on which it bobs. And I can simply relax up here, not fretting about the whole thing at all.”

My friend was clearly quite pleased with himself; I thought he might begin whistling a tune at any moment. I was almost reluctant to toss a verbal spanner into those ebullient works.

“To whom am I speaking?” I asked after a moment.

“What do you mean?”

“If the consciousness of my friend Corrins here is only a passive observer without control,” I suggested, “then with whom is it that I speak? Perhaps I am addressing my words to that observer, but more to the point, who is it that speaks to me in return? If you, my good droog Corrins, cannot control anything that this body here does, who is it that just now inquired, saying ‘What do you mean?’?”

Corrins frowned and our pace slowed a bit.

“You are asking, when I — when my body here emits these sounds…”

“Yes,” I said, feeling quite the assassin of joy, “if your consciousness cannot alter, does not control, what your body does, then –”

“Yes, yes,” he said impatiently, “I understand. That is, my body understands — or is saying that it…”

“Quite.”

“Am I lying, then?” he murmured, speaking now more to himself, “Or is my body lying? But can a mere physical system even lie? But my mind… What if –”

He paused.

“Frankly,” he said, “I am disturbed by this question, and by its obvious answer. Who am ‘I’? Why am I making these very sounds? Why is this body emitting these sounds which appear to the observer to be words, if it is not that observer causing it to?”

“How can Corrins’ body even know how anything appears to the so-called observer, to justify making these sounds about it?” I added, perhaps unnecessarily twisting the knife. “The body is not only unable to speak for the foam, it is surely just as unable to know anything about the state of that foam.”

Corrins only glared at me, and we walked for a while in silence, then settled down on a large flat boulder some way to the side of the lane, where we would habitually pause to rest and talk or consume various mostly-benign substances in comparative privacy.

“This body makes sounds,” Corrins’ body said, “and even makes marks on paper.”

I only nodded, feeling a certain guilt.

“We should have sex,” his body said. It is worth noting here that Corrins and I had not had sex prior to this, and that indeed it was a subject of occasional speculation in our circle whether Corrins had ever had that experience.

I nodded again, sex being after all the sort of thing to which bodies are prone, and we rolled off of the boulder to the side away from the path, and more or less tearing off much of each other’s clothing, engaged in a fast and somewhat violent, but rather more than satisfactory, fuck.

It struck me that if Corrins himself had exercised this approach earlier and more generally, he might have made significantly more friends.

“Groddy Hell!” Corrins, or Corrins’ body, shouted, lying mostly naked on his back on the somewhat mucky ground, awkwardly gesticulating upward.

“That was rather nice,” I said in a lazy afterglow sort of way, brushing some of the earthy filth from various tender parts of myself.

His body attempted to snuggle against mine, its voice sorrowful. “My poor self,” it whimpered, “a helpless prisoner, no voice, no one to speak for it, no escape. Only this ignorant damned body!”

“If I may address my friend Corrins’ subjective consciousness,” I began, feeling I admit almost tender toward the creature.

But Corrins’ face (the face of his body, that is) curled up into a rather off-putting snarl at this, and he swung a clumsy roundhouse blow at me, which I mostly dodged.

“Arrr!” Corrins’ body declared, “sod off! Off sod! Gardyloo! Mitzi Gaynor!”

We tussled and fought on the chilly earth for a bit, cuffing each other’s ears and so on, our lack of either expertise or real rancor preventing any actual harm beyond a bruise or two and some badly pulled hair. It was in a way rather like the sex had been.

“Go!” Corrins’ voice shouted after a while, his body collapsing again onto its back, filthier than ever, “Get! Git! Gut! Got! John Cougar Mellencamp!” And he kicked his feet in the air enthusiastically, further demonstrating perhaps the absence of control by any rational consciousness.

I was somewhat concerned for my friend’s health and sanity, but the sun was setting, and it occurred to me that he might be calmer in my absence; admittedly I was also willing to ride in luxury and merely observe as my body pulled its clothes back on over its grimy skin, and walked with increasing speed back to my rooms. Corrins would be, I told myself, just fine.

I observed my body take a highly therapeutic hot bath, read a few pages of Ligotti, and slip contentedly into bed. It was, from the point of view of my consciousness (do not think about that statement too hard at this point, dear reader), quite nice.

The next morning, entirely too early, I was summoned by the Proctors, because Corrins had apparently caused an Incident. Walking into the Common in the wee hours, naked and grimy, shouting incoherently at the top of his lungs, is not all that unusual as such things go, but when he was still incoherent by morning, not having slept it off nor smelling especially of alcohol or anything else familiar, the Proctors had determined to call in one of the poor veck’s friends to assist.

That I was the one they called suggested that he had at that time even fewer friends than I had estimated.

“Corrins, small cabbage,” I addressed him patiently, as he crouched on the modest pallet of his cell in the of course not a jail area of the Proctors’ Offices, “I know you can understand me perfectly well.” He only stuck his tongue far out, and wiggled the tip rapidly up and down.

I sighed. “Boopsie-poo,” I addressed him, “this is my brain directly addressing yours. You cannot deny that your brain exists, it is right there in your roughly ovoid head, and it is the same brain that until yesterday was perfectly able to speak in the usual way, and only recently composed an award-winning essay on Nabokov’s Pale Fire. I realize that the grunting animalistic body thing is hugely entertaining, but my brain must ask yours to consider: what has changed in the last thirty-six hours? Anything? Anything to justify such a change in bodily behavior?”

At this, his eyes slowly and reluctantly focused, and his body assumed a more civilized, if also more slumped, position.

“I suppose,” he said, with a sigh of his own, “but what is one to do? Nothing that this body says is connected to — I cannot say to ‘me’! I cannot even say ‘I’! And yet I do!”

“Are you suggesting that we can only sort of muddle along, our language capturing reality only in the most pragmatic and approximate way? Have you finally embraced the Dharma?”

Corrins of course laughed at this quip (a private jest between us, on his childhood Calvinism and my own old Sōtō), which lightened the mood; our bodies exchanged a few more expressions of human language, and after a while I urged him into the rough togs I had brought for him, and we departed, to the relief of the Proctors.

Hours later, we were in a comfortable noisy local establishment, drinking acceptable Porter, our friends (or at least acquaintances) around us. Corrins’ arm was around my shoulders, and he was speaking too loudly into my ear.

“Language is impossible!” he suggested, “I am only a body! Consciousness is a helpless prisoner!”

I turned toward the fellow, an attractively kind expression on my face I like to think.

“Corrins,” I ventured, “let me propose a thing to you.”

He nodded, a drowning brain eager for any straw to grasp.

“I here offer a single data point: that I, your good and devoted friend, have a subjective consciousness, which experiences exercising some control over my body, and that the activities of my body correlate so strongly with those experiences of control, that the only rational conclusion to draw is that for all practical purposes I, my consciousness, does in fact control this very body, even if I cannot imagine how.”

Here he began a sort of “Bu–”, but I put a quieting finger to his lips, and then mentally shook off distracting memories of moans and grunts on cold dirty ground.

“Hear my entire proposal,” I continued, “I put forward this data point, although I cannot offer you any evidence for it, but I propose that you also, you the consciousness of my friend Corrins, can observe for yourself a similar correlation mutatis mutandis, and have the same rational conclusion to draw.”

He frowned at this. “But the epi– the epifoamnominal theory of the foam –“

“We took that theory, that notion, only as a stipulation, did we not? A test? A trial, adopted only for the time being?”

He nodded slowly in the din.

“Then I will suggest that the resulting changes in behavior that followed from that stipulation should be seen as a bit of a reductio of the notion!”

His eyes widened slightly, and then narrowed. “The sex, though. Was the sex bad?”

I assured Corrins that the sex had, in fact, been exemplary.

In the weeks since then, all has returned more or less to normal. One strides along the College Lanes discussing theories of the world, and one hears rumors that Corrins has more friends now than he has historically had.

But on certain nights, I do observe my body taking itself out into the woods beside one or another boulder or tree, pulling off its clothing and rolling itself on the loamy earth, bruising its forehead against the stones, and shouting glorious meaningless imprecations to the empty sky.

And it is not the only one.

2023/06/03

ChatGPT and “Azathoth and the Dharma”

Well, heck.

So, I was reading Ligotti’s “The Shadow at the Bottom of the World” and thinking that it was pretty good, and that he tends to picture the ultimate squirming chaos at the center of reality as sometimes attractive (if perversely) as well as finally horrifying, in a way that I don’t remember Lovecraft doing, at least not as sympathetically.

At some points, for instance in The Tsalal (whose name is explicitly a reference to Poe), I thought that when a character points out with horror that appearances are just temporary masks on an indescribable underlying reality, it sounds (without the horror part) quite a bit like various aspects of the Dharma (it does, right?). This seemed, I thought, like something that someone might write a weblog entry or something about!

And then, because my mind has been infected (in a mildly Cthulhu-esque way perhaps) by the idea of LLMs writing text, I thought I would propose the notion to ChatGPT or Bard or something, and have it write words! It would either do a good job, which would be interesting, or an amusingly bad job, which would be amusing.

I thought the “amusingly bad job” outcome was more likely, but, well, heck.

I used ChatGPT 4.0 rather than 3.5 or Bard, just because it’s gotten the most hype, and also because since I’m paying money to use it I thought it might be better, so if it failed amusingly it might be more meaningful. Here’s the interaction (and I do realize that my “always changing” is problematic as a characterization of the Dharma, but then so is everything else):

Me: For an essay entitled “Azathoth and the Dharma”, I’d like to discuss how the idea that appearances are only temporary masks over a deeper truth that is indescribable and always changing, features both in the kind of intellectual horror written by Lovecraft and especially Ligotti, and in Buddhist beliefs about truth and the Dharma, and that the main difference is one of emotional overtones with which it is presented. Lord Dunsany might also be relevant. Please draft the beginning of this essay for me.

ChatGPT 4.0:

Title: “Azathoth and the Dharma: Transient Masks of Deeper Truth in Lovecraft, Ligotti, and Buddhist Philosophy”

In the chaotic cosmos of H.P. Lovecraft and Thomas Ligotti, we find ourselves confronted with a universe filled with inscrutable forces and ancient entities, their true natures hidden beneath uncanny, horrifying masks. The universe is not simply indifferent to humanity; it is utterly alien, impenetrable to human understanding, and constantly in flux. These themes of grotesque transformation and unknowable horror are at the heart of their works. And yet, when viewed from another angle, these concepts can be seen as echoing the fundamental principles of Buddhist philosophy, particularly the teachings of the Dharma. They, too, postulate a reality that is continuously changing and ultimately indescribable.

The Dharma, the cosmic law and moral order that Buddhists seek to understand and follow, hinges on several key principles, one of which is Anicca, or impermanence. Anicca posits that all conditioned phenomena are in a state of constant change. Whether we consider the universe at large or the very essence of our being, everything is transitory, and appearances are fleeting. Beneath the apparent solidity of phenomena lies an incomprehensible flux of arising and ceasing events, an underlying truth that cannot be adequately captured in words.

This notion aligns strikingly with the chilling cosmologies of Lovecraft and Ligotti. Azathoth, Lovecraft’s nuclear chaos at the center of the cosmos, is the ultimate symbol of this philosophy. It is the blind idiot god, an embodiment of constant change and the source of all existence. It is unknowable, its essence cloaked in cosmic chaos and frenetic piping, its reality beyond human comprehension.

Similarly, in the phantasmagoric narratives of Ligotti, we find the protagonists often straddling the line between grim reality and unfathomable horror, their worldviews shattering in the face of a truth too alien and amorphous for the human mind to comprehend. It’s not unlike the Buddhist concept of Sunyata, or emptiness, which refers to the lack of inherent or eternal essence in all phenomena.

Yet, as we draw these parallels, we are confronted by the starkly contrasting emotional overtones that characterize these disparate systems of thought. In Buddhism, the realization of Anicca and Sunyata is liberating, offering a path towards enlightenment and the cessation of suffering. But in the macabre tapestries woven by Lovecraft and Ligotti, the confrontation with these realities is inherently terrifying, inducing existential dread and insanity.

The eerie fables of Lord Dunsany, who greatly influenced Lovecraft, might offer an intermediary perspective. He presents us with bizarre, otherworldly landscapes and creatures, yet there’s often a whimsical, dreamlike quality to his work. The absurd and inexplicable are not strictly horrifying, but engender a complex mixture of awe, fascination, and unease.

This divergence in emotional reaction prompts us to consider the nature of human perception and our responses to the deep, enigmatic truths of existence. Is the Lovecraftian terror at the chaotic heart of reality as valid a response as the Buddhist acceptance and quest for liberation? Or does it simply reflect our inherent fear of the unknown, our resistance to the dissolution of ego, the prospect of losing what we perceive as our solid, unchanging selves? In our exploration of ‘Azathoth and the Dharma’, we delve into these questions, probing the intersection of cosmic horror and Buddhist philosophy, and the deeply human, emotionally-charged responses they elicit.

Transcript ends

I’m not sure how to feel about this. It did quite a good job, at least on a few casual readings. The references to anicca and sunyata are plausible, no worse than I would expect from any reasonably educated person daring to write on those concepts, and they weren’t specified in the prompt; the LLM was clued enough to bring them in itself. The paragraph on Dunsany is a little thin, but again no worse than I would expect from a decent human writer; I did ask for a Dunsany link, and this is the obvious one.

It clearly knows (“knows”) who Lovecraft, Ligotti, and Dunsany are, and what in general the Dharma is. I would definitely edit the phrase “the fundamental principles of Buddhist philosophy, particularly the teachings of the Dharma”, but yet again this is a matter of subtle tone, not by any means an egregious error.

It did nothing that (from a human writer!) would have been surprising or even especially non-obvious (although the thought that the feeling of horror may “simply reflect our inherent fear of the unknown, our resistance to the dissolution of ego, the prospect of losing what we perceive as our solid, unchanging selves” is perhaps beyond trivial extrapolation of the assignment). But its unsurprising and mostly obvious expansion of my original idea into seven nice paragraphs of solid prose (with a fitting subtitle for good measure) contains, I have to admit, no amusing errors that I can find!

I even asked it for some references and further reading, and it gave a decent dozen suggestions for both Lovecraftian horror and Buddhism, all of which actually exist; my only criticism being that

“Noel, Daniel C. The Soul of Shamanism: Western Fantasies, Imaginal Realities. Continuum, 1997. This book explores the intersection of spirituality and fantasy literature, which could be useful to draw broader conclusions about your topic.”

is a bit of a stretch. On the other side, when I asked it if it knew of any existing works making the same sort of link that I propose here, it did not make one up! It said encouragingly that it didn’t know of any, and finished off with this:

“Overall, your proposed exploration of the specific connection between Lovecraftian horror and Buddhist philosophy seems to be quite unique, and it’s an interesting new angle to consider in the study of both these areas.”

Well… thanks, ChatGPT, that’s… nice. I guess?

I feel like this all certainly means something. Or perhaps it’s just the thin, monotonous piping of an unseen flute!

2023/05/25

Straw God, Steel God

A brief prologue: what does it really mean to believe a thing? What kinds of things can be believed? Is there a difference between belief and assent, or between belief and (say) tending to express a claim? If belief is a matter of an attitude toward a possible world, then it’s (perhaps) impossible to believe a self-contradictory proposition, including a necessarily false one, including the negations of mathematical truths. But it doesn’t really feel right to say that it’s impossible to believe that P=NP (assuming it’s false). Certainly one can assent to or tend to express the claim that P=NP, in either case. Is it possible to believe complete nonsense; can I believe “Nobbo dif daf phlongo” (without it having any further meaning)? Arguably I can assent to it or tend to claim it (or at least tend to assert it, or utter it with an assertive attitude?), but can I believe it? All interesting questions.

So anyway, I continue to binge various YouTube atheists responding to Christian fundamentalist apologists, and I’m really not sure why. Biblical inerrancy and literalism are just silly; there are about five arguments (Pascal’s wager, objective morality, “creation implies a creator”, supposedly-fulfilled prophecies, fine-tuning, and I’m sure I’ve forgotten a few), and there are very strong counterarguments to all of them. The material that’s supposedly the infallible word of an omnipotent, omniscient, and omnibenevolent being is very obviously a bunch of stories and poems and things put together over centuries by a bunch of different people and groups and political processes, that exist in various variants and contradict each other in various ways just as one would expect of such a compendium. If it weren’t that lots of people believe, or purport to believe (see prologue), this stuff, and use it to justify political and social positions that are to various degrees odious, no one would bother responding to it, any more than anyone responds to the claim that Charlotte’s Web is literally true (some pig!). It would be a straw man, if only there weren’t lots of people asserting it.

(And yeah, it’s not just Christianity; many religions have holy books where it’s obviously silly to take the material as literally true. But Christianity is the one that causes the most trouble around me.)

[Image: “Varieties of the Religious Experience” — a variety of vaguely religious-looking symbols and objects hanging from an old wall and sitting on a shelf. Center is a largish cross with a circle (perhaps a Celtic cross), hanging next to a chain with a loop of vine or wool at the end, which is next to a round pendant with perhaps a clock with a painted background. Also on the wall is a piece of wood with a cross whose four arms are each split into two on the upper half and a fork with two antler-like tines. On the shelf are a squat red candle, and a tangle of cloth and metal objects.]

On the other hand, I believe that most Christians, or at least a significant fraction, aren’t fundamentalist Bible literalists, and therefore believe, well, something else. And it occurred to me to wonder just what they do believe, that’s reasonable or arguable or not obviously false or void, but that still makes them Christians in some reasonable sense of the term, and whether I could write down some of the possibilities. To “steelman” Christianity, if you will, in various ways, and see what comes out.

One possibility, which I’ll call I dunno secular Christianity is something like say the Jefferson Bible: the idea that Jesus of Nazareth was a wise person who taught worthwhile stuff, and we should pay attention to it, and read (at least the non-supernatural) writings by and about him, and take him seriously, and perhaps even venerate him for some meaning of that word (but probably as a person rather than something else), maybe have holidays about his birth and death and all that, and so on.

This is fine; a discussion with this sort of Christian would be calm and rational and likely interesting; there is no fundamental disagreement. On the other hand many Christians wouldn’t count it as Christianity, since that is so often tied into questions like “Have you accepted Jesus as your personal savior?” and “Do you believe that he died and then rose again on the third day?”, which this notion of secular Christianity in its most obvious interpretation doesn’t particularly encompass.

I think it’s more common for a Christian to be in something I’ll call mysterian Christianity, which emphasizes faith, and believing in God and Jesus and the Bible without knowing and without worrying too much about exactly what it all literally means. In my model of the world, your typical believer who goes to church every Sunday but doesn’t post apologetics videos on YouTube is like this. They believe that the Bible is true, literally, or “seriously” as the JWs put it, or whatever, and if the things that it says don’t seem to make rational sense right now, that’s fine, we humans are very limited in our understanding, and it will be clarified in time, and in the meantime we should pray and do what seems right and/or what Father O’Malley advises. For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known.

A discussion with this sort of Christian would be relatively short and simple, I think, as long as one didn’t get obnoxious by asking them how they’re justified in believing something they don’t even understand. The answer is presumably something like “I’ve been touched by the Holy Spirit, and you haven’t, and that’s it; maybe some day you will be too!”, and there’s not much further that one can go politely from there.

What else do we have? I’d like a rational (re)construction of some sort of mystic Christianity, in which one accepts Jesus as one’s personal savior, and he died and rose again on the third day to atone for our sins and all, but also doesn’t make silly truth-claims and doesn’t lead to any odious political or social attitudes. This would have to involve lots of symbols and metaphors, or in the more exalted sense lots of reality-as-metaphor, where we say that Jesus of Nazareth was (and is) both one with God and also the Son of God, and probably that each of us is also one with God (and with Jesus) and also the Son of God, and that each of us (throughout life, throughout the day, in every moment) dies and then rises again, that by our own (perhaps constant) deaths we pay (God and ourselves) atonement for our sins, where simply being an ensouled embodied entity inherently constitutes sin, except when we take salvation into ourselves through Christ’s, which is to say our own, sacrifice of himself, which is to say ourselves, and we must accept Christ (which is to say, the cosmos) as our personal (because everything is personal) savior.

This is to take the Bible as perhaps inerrant and literal, but noting that to the same extent we take Charlotte’s Web (and especially The Wind in the Willows) and the very sound of the wind in the trees as inerrant and literal, which is something that we can do only (as far as I know) by acknowledging that all language, even all experience as refracted through language, is metaphorical, so that the most literal possible interpretation of any text or experience is still an interpretation rife with symbol and metaphor. Which is what mysticism is, in some (metaphorical) sense.

Is this all interesting and/or accurate? I’m not certain. I, obviously, don’t believe fundamentalist Christianity, because it’s facially silly, and I keep listening to people refuting its claims (probably just because it validates my existing attitude or whatever). Secular Christianity (as used above) is fine, and probably held by various Unitarians and so on, who may or may not (perhaps more often not, in these troubled times) call themselves Christians. Mysterian Christianity isn’t especially attractive to me, as I haven’t been appropriately touched by the Holy Spirit, and frankly I don’t think anyone else has either in the relevant sense, but arguing about it is very much like I think arguing with a solipsist; it’s too bad that people sometimes get odious behaviors from it. Mystic Christianity is fine, and is so to speak just a different way of saying something that I do believe (to the extent that “believe” applies here) but that I put in Buddhist or Ariadnite terms rather than Christian ones, because icky connotations or whatever.

Seems reasonable. :D

2023/04/23

Phenomenology

I should like phenomenology. Ontology is about existing, epistemology is about knowing, morality is about good and bad and valuing, but phenomenology is about subjective experience; about experiencing from the inside.

The relevant reason that I should like that is that an ongoing project of mine (not one that I necessarily work on, but one that I think about, which is at least as good) is to start with whatever it is that is given, or Given, which I like to call The Experience Storm, and to see how everything else (existing and knowing and valuing) can (or cannot) be built up from that. To take, that is, the subject matter of phenomenology as foundational, and see what happens when we try to build everything else on top of it.

This is hardly an original thought. :) Descartes’ cogito is sort of that, taking that which he cannot doubt and seeing what could be built on that. I disagree with him about just what he cannot doubt (or at least I’m able to doubt lots of stuff he claimed he couldn’t), but it’s the same general project. Empiricism is also an approach to the same project, where The Given is something on the order of sense experience, which turns out to be wrong, but again it’s basically the same project.

If we look up phenomenology on the interwebs, a typical sort of thing we find is, for instance:

“The discipline of phenomenology is defined by its domain of study, its methods, and its main results.

“Phenomenology studies structures of conscious experience as experienced from the first-person point of view, along with relevant conditions of experience. The central structure of an experience is its intentionality, the way it is directed through its content or meaning toward a certain object in the world.”

Stanford Encyclopedia of Philosophy, revised 2013

which starts out fine for a couple of sentences, but then I am all like whaaaaat.

As I’ve said before in a just slightly different context, I find the whole “aboutness” and intentionality thing silly and unconvincing. And one of the reasons for that is especially relevant here: not only are there all sorts of experiences that aren’t about some object in the world (what is the experience of boredom about, for instance?), but in the foundationalist project, it can’t be fundamental to experience that it be made of things that are “about” “objects” in the “world”, because we don’t yet have a “world”, or any “objects”, or any relationship like “about”.

I talked about this a little on Mastodon the other day, and someone (who I now see is “lead of the graphics kernel team at Microsoft”??) commented usefully that, basically, people usually take phenomenology as not-foundational, as the study of how experience is given that we already know more or less how it all fits into the world, and what the world is like. (Not what they said exactly, but that’s what I read in it.)

And that’s fine, but I think it’s significantly less interesting.

Now, how do we actually build up a world from the Experience Storm? What happens when we try to? Or, perhaps more appropriately, how do I do it, and what happens when I try, since I haven’t established that you even exist yet. And vice-versa. :)

I haven’t worked that all out entirely yet, but an early interesting question is just what the Experience Storm is like: what it consists of, how it’s structured, what entities and properties and relationships it has. Descartes talked mostly as though what is Given is a bunch of propositions, and they include not just “I am thinking” and “I exist”, but really complicated stuff like “there must be at least as much reality in the efficient and total cause as in the effect of that cause,” which is just whacky, and leads to stuff like theism. I don’t know whether he was also unable to doubt “I am reasoning correctly in all this stuff”, or if that occurred to him at all.

The standard empiricist viewpoint is (or one might even say, was) that what is given and undoubtable is sense experience, which is along the lines of two-dimensional red blobs, and rising tones, and whatnot. I think it’s fair to say that it turns out that that isn’t really how we experience the world, although (this being philosophy) the details (and even the broad outlines) remain controversial (see for instance I dunno maybe this).

My tentative hypothesis is that, first, what is given, the Experience Storm, is exactly this, and that any verbal description will be significantly wrong (very Zen, that, at least for my values of “Zen”). But if we forge ahead and write down verbal descriptions anyway, we get not only something like “raw feels”, but much more complex things that have big fat semantic labels attached, and relationships between them, and lots of stuff like that. And those labels (properties) and relationships are just as much part of what is given and indubitable as the rawer and more feel-like things that are also in there, and to which they are in some sense associated or “attached”.

And one can continue from there in various ways :) to build up more stuff on that, always keeping in mind that the words that one is writing down are significantly wrong. But it’s getting late at night, and I’m happy to have written this much down.

It’s a pity that the word “phenomenology” has this whole concept of Aboutness associated with it. Is there a good word for taking subjective experience (whatever that turns out to be like), and building the rest of the universe up from there? Probably there is, and I once knew it even, maybe forty years ago. :)

2023/03/31

It’s just predicting the next word! Well…

tl;dr: While it’s true that all LLMs do is produce likely text continuations, this doesn’t imply that they don’t have mental models, don’t reason, etc.

One thing that sensible people often say about Large Language Models like ChatGPT / GPT-n and Bard and so on, is that all they do is predict the next word, or for more technical accuracy, that all they do is generate text that is likely to follow the prompt that they are given, i.e. “produce likely continuations”.

And that’s a good thing to note, in that people tend to have all sorts of other theories about what they are doing, and some of those theories are just wrong, and lead people to make bad conclusions. For instance, people will have a more or less default theory that the model knows things about itself and tells the truth about things it knows, and take seriously its (non-factual) answers to questions like “What language are you written in?” or “What hardware are you running on?” or “Are you a tool of Chinese Communism?”.

Also, it’s true that all they do is generate text that is likely to follow the prompt, in the sense that that is the only significant criterion used during training of the underlying neural network.

But that doesn’t actually mean that that is all they do, in the more general sense. And this, at least potentially, matters.

Consider for instance the claim that “all life does is arrange to have many generations of descendants”. That is true in the same sense, since the only criterion for having survived long enough to be noticed in the current world, is to have had many generations of descendants.

But, significantly, this doesn’t mean that that is all life does, in the sense that life does all sorts of other things, albeit arguably in the service of (or at least as a side effect of) having many generations of descendants.

For instance, I think it would be plainly false to say “people obviously can’t reason about the world; all they do is arrange for there to be many more generations of people!”. In fact, people can and do reason about the world. It may be that we can explain how we came to do this, by noting that one effective strategy for having many generations of descendants involves reasoning about the world in various ways; but that does not mean that we “don’t really reason” in any sense.

Similarly, I think the arguments that various smart people make, which when boiled down to a Tweet come out as roughly “LLMs don’t X; all they do is predict likely continuations!” for various values of X, are in fact not valid arguments. Even if all an LLM does is predict likely continuations, it might still do X (reason about the world, have mental models, know about truth and falsehood) because X is helpful in (or even just a likely side-effect of) one or more effective strategies for predicting likely continuations.

Put another way, if you train a huge neural network to output likely continuations of input text, it’s not obviously impossible that in choosing internal weights that allow it to do that, it might develop structures or behaviors or tendencies or features that are reasonably described as mental models or reasoning or knowledge of truth and falsehood.

This isn’t a claim that LLMs do in fact have any of these X’s; it’s just pointing out that “all it does is produce likely continuations” isn’t a valid argument that they don’t have them.

It’s still entirely valid to respond to “It told me that it’s written in Haskell!” by saying “Sure, but that’s just because that’s a likely answer to follow that question, not because it’s true”. But it’s not valid to claim more generally that a model can’t have any kind of internal model of some subset of the real world; it might very well have that, if it helps it to correctly predict continuations.

Bonus section! Current LLMs don’t in fact reason significantly, or have interesting internal models, in many cases. Amusing case from this morning: when fed some classic text rot13’d, this morning’s Bard claimed that it was a quote from Hitchhiker’s Guide to the Galaxy, whereas this morning’s ChatGPT replied with rot13’d text which, when decoded, was gibberish of the sort that an early GPT-2 might have produced from the decoded version of the prompt. No agent with a reasonable mental model of what it was doing, would have done either of those things. :)

2023/03/10

Chomsky declares: LLMs icky!

Friend Steve wrote us today about this New York Times opinion piece, “Noam Chomsky: The False Promise of ChatGPT” (this link may be free for everyone for some time or something). Despite the title, it’s by Chomsky, Roberts, and Watumull.

Steve commented inter alia on the authors’ apparent claim that ChatGPT can say that the apple you’re holding will fall if you open your hand, but unlike humans it can’t explain the fact. The trouble with the argument is that, as anyone who’s actually used ChatGPT can tell you, it will happily explain the fact, go into the history of the notion of gravity, talk about other things people have thought about it over time, and explain various situations in which the apple wouldn’t fall, given the slightest provocation.

My reply, lightly edited:

I am pretty unimpressed with the article as a piece of science or philosophy; fine as a nice polemic by a greybeard I suppose. :)

I’m amused at how LLMs are “lumbering” and “gorged”, while human minds are “elegant” and even “efficient”. I doubt there is any useful sense in which these adjectives are anything more than bigger words for “icky” and “nice” in this context.

Chomsky brings in the innateness of language, because of course he does, but I’m not at all clear how it’s relevant. Even if humans do have innate language scaffolding, and LLMs don’t have the same kind, it’s far too early to say that they don’t have any, and even if they didn’t, so what? Does the ability to learn a wider variety of languages than humans can, mean that LLMs don’t really understand, or can’t really think, or are harmful or dangerous? None of that makes sense to me; it seems just an even longer way of saying that they’re icky.

He (well, they, there being multiple non-Chomsky authors) claims that LLMs don’t have the ability to say “what is not the case and what could and could not be the case.” And I can’t imagine what they think they mean by that. As with the flaw you point out in the apple example, it’s simply wrong, and suggests that they haven’t really used an LLM much. ChatGPT (let alone a less heavily defanged system) will expound at length about what is not the case and what could and could not be the case, given any halfway decent prompt to do so. They may intend something deeper here than they actually say, but I don’t know what it could be (beyond that they can’t do it non-ickily).

“Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round.” Um, what? There are certainly humans who believe each of these things. They can’t just be saying that humans can’t conjecture that the earth is flat “rationally” because so what; that’s exactly as true of an LLM. If they mean that the same LLM can make one of those claims one minute and the other the next, whereas humans can’t hold two contradictory beliefs at the same time, I’d like to introduce them to some humans. :)

Similarly for whatever it is they are trying to say about moral reasoning. The suggestion seems to be that, simultaneously, ChatGPT is icky because it cannot stay within moral boundaries, and also icky because it stays stubbornly within anodyne moral boundaries. As pretty much throughout the piece, stuff that humans do all the time is cited as reasons ChatGPT isn’t as good as humans.

Tay became toxic by listening to people, therefore it’s not like people? It had to be heavily censored to keep it from talking trash, therefore it’s not like people? Um?

It might be interesting to try to tease a set of actual significant truth-claims out of this article, and see which ones are arguably true. But I’m not sure that’s the point really.

As far as I can tell, this piece is just a longer and nicely phrased version of “Boo, LLMs! Icky!”

But maybe that’s just me. :)

2023/01/20

County Jury Duty

Well, that’s over! For another six years (for state / country / town) or four years (for Federal). This is probably going to be chatty and relatively uninteresting.

Top tip: park in the parking lot under the library; it’s very convenient to the courthouse (although you still have to walk outside for a bit, and it was windy and rainy yesterday).

I had to report originally on Friday (the 13th!) because Monday was MLK day. On Friday 60-70 of us sat around in a big auditoriumish jury room for a while, with WiFi and allowed to use our cellphones and everything. Then they called attendance and talked about random things like the $40/day stipend if our employer doesn’t pay us or we’re self-employed (where did that tiny amount of money come from, one wonders) and where to park and so on. Then we were basically allowed to decide whether to come back on Tuesday or Wednesday (although I imagine if you were far down the perhaps-random list and most people had said one, you had to take the other).

A cute isomorphic pixel-art image of a bunch of people waiting around in a large room. Note this does not accurately reflect the County Courthouse except in spirit. Image by me using Midjourney OF COURSE.

I elected to come back on Wednesday for no particular reason. We were originally supposed to arrive on Wednesday at 9:30am, but over the weekend they called and said to arrive at 11am instead. Due to an inconvenient highway ramp closure and a detour through an area of many traffic lights, I got there at 11:10 or so and felt guilty, but hahaha it didn’t matter.

In the big Jury Room again, the 30+ of us waited around for a long time, then were led upstairs to wait around in the hallway outside the courtroom, and then after waiting some more were ushered into the courtroom to sit in the Audience section, and introduced to the judge and some officers, and then dismissed until 2pm for lunch (seriously!).

Some time after 2pm they let us back into the courtroom and talked to us for awhile about how it was a case involving this and that crime, and might take up to a month to try, and the judge is busy doing other things on Mondays and Thursday mornings so it would be only 3.5 days / week. Then they called 18 names, and those people moved from the Audience section to the Jury Box section. They started asking them the Judge Questions (where do you live, how long have you lived there, what do you do, what does your spouse and possible children do, do you have any family members who are criminal lawyers, etc, etc), and we got though a relatively small number of people and it was 4:30pm and time to go home.

I had a bit of a hard time sleeping, thinking about what the right answers to The Questions would be (how many times have I been on a jury in all those years? did we deliberate? do I know anyone in Law Enforcement? does the NSA count? should I slip in a reference to Jury Nullification to avoid being on the jury, or the opposite?) and like that.

Since the judge is busy on Thursday mornings, we appeared back at the courtroom at 2pm on Thursday, and waited around for quite awhile in the hallway, then went in and they got through questioning the rest of the 18 people in the Jury Box (after the judge asked the Judge Questions, the People and the Defense asked some questions also, although it was mostly discussions of how police officers sometimes but not always lie under oath, and how DNA evidence is sometimes right but not always, and how it’s important to be impartial and unbiased and so on, disguised as question asking).

Then they swore in like 6 of those 18 people, told the rest of the 18 that they were done with Jury Duty, and told the rest of us in the Audience section to come back at 9:30am on Friday (today!).

At 9:30 nothing happened for quite awhile in the hallway outside the auditorium, then for no obvious reason they started to call us into the courtroom one person at a time by name. There got to be fewer and fewer people, and then finally it was just me which was unusual and then they called my name and I went in. The Jury Box was now entirely full of people, so I sat in the Audience Section (the only person in the Audience Section!).

Then I sat there while the judge asked the same ol’ Judge Questions to every one of the dozen+ people (I know, I don’t have the numbers quite consistent) ahead of me, and then finally, as the last person to get them, I got them. And the Judge went through them pretty quickly, perhaps because he’d said earlier that he wanted to finish with this stage by lunchtime, and I had no chance to be indecisive about the issue of following his legal instructions exactly and only being a Trier of Fact, or anything else along those lines.

Then we had another couple of lectures disguised as questions, plus some questions, from The People and the The Defense. I’d mentioned the cat as someone who lived with me (got a laugh from that, but the Whole Truth, right?), and The People asked me the cat’s name and nature, and when I said it was Mia and she was hostile to everyone, The People thanked me for not bringing her with me (haha, lighten the mood, what?). And asked about my impartiality.

Now we’d had a bunch of people from extensive cop families say boldly that they couldn’t promise not to be biased against the defendant (and when The Defense I think it was asked if anyone would assume from The Defendant’s name on the indictment that He Must Have Done Something a couple people even raised their hands (whaaaaat)), and I perhaps as a result and perhaps foolishly said that while my sympathies would generally be with a defendant, I would be able to put that aside and be unbiased and fair and all.

So The People asked me if I could promise “100%” that I would not be affected by that sympathy, and I said quite reasonably that hardly any sentences with “100%” in them are true, and the judge cut in to say that he would be instructing the jurors to put stuff like that aside (implying that then I would surely be able to), and I said that I would (but just didn’t say “100%”) and then The People came back in saying that they need people who are “certain” they can be unbiased (so, no way), but then actually asked me if I was “confident” that I could be (a vastly lower bar) so I said yes I would.

And when all of that was over, they had us all go out to the hallway again, and wait for awhile, and then go back in to sit in the same seats. And then they had I think four of us stand up and be sworn in as jurors, and the rest of us could go out with the officer and sit in the big jury room again until they had our little papers ready to say that we’d served four days of Jury Duty.

And that was it!

My impression is that they were looking for (inter alia, I’m sure) people who either believe, or are willing to claim to believe, that they can with certainty be 100% unbiased in their findings as jurors. That is, people who are in this respect either mistaken, or willing to lie. And that’s weird; I guess otherwise there’s too much danger of appeals or lawsuits or something? (Only for Guilty verdicts, presumably, since Not Guilty verdicts are unexaminable?) The Judge did say several times that something (the State, maybe?) demands a Yes or No answer to his “could you be an unbiased Juror and do as you’re told?” question, and when people said “I’ll try” or “I think so” or “I’d do my best” or whatever, he insisted on a “Yes” or a “No”. (So good on the honesty for those cop-family people saying “No”, I suppose.)

So if my calculations are roughly correct, after ummm two or 2.5 days of Jury Selection, they’ve selected only about 10 jurors, and exhausted the Jan 13th jury draw; so since they need at least 12 jurors and 2 (and perhaps more like 6) alternates, they’re going to be at this for some time yet! (Hm, unless it’s not a felony case? In which case 10 might be enough? But it sounded like a felony case.)

2022/12/04

Omelas, Pascal, Roko, and Long-termism

In which we think about some thought experiments. It might get long.

Omelas

Ursula K. LeGuin’s “The Ones Who Walk Away From Omelas” is a deservedly famous very short story. You should read it before you continue here, if you haven’t lately; it’s all over the Internet.

The story first describes a beautiful Utopian city, during its Festival of Summer. After two and a half pages describing what a wise and kind and happy place Omelas is, the nameless narrator reveals one particular additional thing about it: in some miserable basement somewhere in the city, one miserable child is kept in a tiny windowless room, fed just enough to stay starvingly alive, and kicked now and then to make sure they stay miserable.

All of the city’s joy and happiness and prosperity depends, in a way not particularly described, on the misery of this one child. And everyone over twelve years old in the city knows all about it.

On the fifth and last page, we are told that, now and then, a citizen of Omelas will become quiet, and walk away, leaving the city behind forever.

This is a metaphor (ya think?) applicable whenever we notice that the society (or anything else) that we enjoy, is possible only because of the undeserved suffering and oppression of others. It suggests both that we notice this, and that there are alternatives to just accepting it. We can, at least, walk away.

But are those the only choices?

I came across this rather excellent “meme” image on the Fedithing the other day. I can’t find it again now, but it was framed as a political-position chart based on reactions to Omelas, with (something like) leftists at the top, and (something like) fascists at the bottom. “Walk away” was near the top, and things like “The child must have done something to deserve it” nearer the bottom. (Pretty fair, I thought, which is why I’m a Leftist.)

It’s important, though, that “Walk away” wasn’t at the very top. As I recall, the things above it included “start a political movement to free the child”, “organize an armed strike force to free the child”, and “burn the fucking place to the ground” (presumably freeing the child in the process), that latter being at the very top.

But, we might say, continuing the story, Omelas (which is an acronym of “Me also”, although I know of no evidence that Le Guin did that on purpose) has excellent security and fire-fighting facilities, and all of the top three things will require hanging around in Omelas for a greater or lesser period, gathering resources and allies and information and suchlike.

And then one gets to, “Of course, I’m helping the child! We need Councilman Springer’s support for our political / strike force / arson efforts, and the best way to get it is to attend the lovely gala he’s sponsoring tonight! Which cravat do you think suits me more?” and here we are in this quotidian mess.

Pascal

In the case of Omelas, we pretty much know everything involved. We don’t know the mechanism by which the child’s suffering is necessary for prosperity (and that’s another thing to work on fixing, which also requires hanging around), but we do know that we can walk away, we can attack now and lose, or we can gather our forces and hope to make a successful attack in the future. And so on. The criticism, if it can even be called that, of the argument, is that there are alternatives beyond just accepting or walking away.

Pascal’s Wager is a vaguely similar thought experiment in which uncertainty is important; we have to decide in a situation where we don’t know important facts. You can read about this one all over the web, too, but the version we care about here is pretty simple.

The argument is that (A) if the sort of bog-standard view of Christianity is true, then if you believe in God (Jesus, etc.) you will enjoy eternal bliss in Heaven, and if you don’t you will suffer for eternity in Hell, and (B) if this view isn’t true, then whether or not you believe in God (Jesus, etc.) doesn’t really make any difference. Therefore (C) if there is the tiniest non-zero chance that the view is true, you should believe it on purely selfish utilitarian grounds, since you lose nothing if it’s false, and gain an infinite amount if it’s true. More strongly, if the cost of believing it falsely is any finite amount, you should still believe it, since a non-zero probability of an infinite gain has (by simple multiplication) an infinite expected value, which is larger than any finite cost.

The main problem with this argument is that, like the Omelas story but more fatally, it offers a false dichotomy. There are infinitely more possibilities than “bog-standard Christianity is true” and “nothing in particular depends on believing in Christianity”. Most relevantly, there are an infinite number of variations on the possibility of a Nasty Rationalist God, who sends people to infinite torment if they believed in something fundamental about the universe that they didn’t have good evidence for, and otherwise rewards them with infinite bliss.

This may seem unlikely, but so does bog-standard Christianity (I mean, come on), and the argument of Pascal’s Wager applies as long as the probability is at all greater than zero.

Taking into account Nasty Rationalist God possibilities (and a vast array of equally useful ones), we now have a situation where both believing and not believing have infinite expected advantages and infinite expected disadvantages, and arguably they cancel out and one is back wanting to believe either what’s true, or what’s finitely useful, and we might as well not have bothered with the whole thing.

Roko

Roko’s Basilisk is another thought experiment that you can read about all over the web. Basically it says that (A) it’s extremely important that a Friendly AI is developed before a Nasty AI is, because otherwise the Nasty AI will destroy humanity and that has like an infinite negative value given that otherwise humanity might survive and produce utility and cookies forever, and (B) since the Friendly AI is Friendly, it will want to do everything possible to make sure it is brought into being before it’s too late because that is good for humanity, and (C) one of the things that it can do to encourage that, is to create exact copies of everyone that didn’t work tirelessly to bring it into being, and torture them horribly, therefore (D) it’s going to do that, so you’d better work tirelessly to bring it into being!

Now the average intelligent person will have started objecting somewhere around (B), noting that once the Friendly AI exists, it can’t exactly do anything to make it more likely that it will be created, since that’s already happened, and causality only works, y’know, forward in time.

There is a vast (really vast) body of work by a few people who got really into this stuff, arguing in various ways that the argument does, too, go through. I think it’s all both deeply flawed and sufficiently well-constructed that taking it apart would require more trouble that it’s worth (for me, anyway; you can find various people doing variously good jobs of it, again, all over the InterWebs).

There is a simpler variant of it that the hard-core Basiliskians (definitely not what they call themselves) would probably sneer at, but which kind of almost makes sense, and which is simple enough to express in a way that a normal human can understand without extensive reading. It goes something like (A) it is extremely important that a Friendly AI be constructed, as above, (B) if people believe that that Friendly AI will do something that they would really strongly prefer that it not do (including perhaps torturing virtual copies of them, or whatever else), unless they personally work hard to build that AI, then they will work harder to build it, (C) if the Friendly AI gets created and then doesn’t do anything that those who didn’t work hard to build it would strongly prefer it didn’t do, then next time there’s some situation like this, people won’t work hard to do the important thing, and therefore whatever it is might not happen, and that would be infinitely bad, and therefore (D) the Friendly AI is justified in doing, even morally required to do, a thing that those who didn’t work really hard to build it, would strongly rather it didn’t do (like perhaps the torture etc.). Pour encourager les autres, if you will.

Why doesn’t this argument work? Because, like the two prior examples that presented false dichotomies by leaving out alternatives, it oversimplifies the world. Sure, by retroactively punishing people who didn’t work tirelessly to bring it into being, the Friendly AI might make it more likely that people will do the right thing next time (or, for Basiliskians, that they would have done the right thing in the past, or whatever convoluted form of words applies), but it also might not. It might, for instance, convince people that Friendly AIs and anything like them were a really bad idea after all, and touch off the Bulterian Jihad or… whatever exactly that mess with the Spacers was in Asimov’s books that led to their being no robots anymore (except for that one hiding on the moon). And if the Friendly AI is destroyed by people who hate it because of it torturing lots of simulated people or whatever, the Nasty AI might then arise and destroy humanity, and that would be infinitely bad!

So again we have a Bad Infinity balancing a Good Infinity, and we’re back to doing what seems finitely sensible, and that is surely the Friendly AI deciding not to torture all those simulated people because duh, it’s friendly and doesn’t like torturing people. (There are lots of other ways the Basilisk argument goes wrong, but this seems like the simplest and most obvious and most related to the guiding thought, if any, behind his article here.)

Long-termism

This one is the ripped-from-the-headlines “taking it to the wrong extreme” version of all of this, culminating in something like “it is a moral imperative to bring about a particular future by becoming extremely wealthy, having conferences in cushy venues in Hawai’i, and yes, well, if you insist on asking, also killing anyone who gets in our way, because quadrillions of future human lives depend on it, and they are so important.”

You can read about this also all over the InterThings, but its various forms and thinkings are perhaps somewhat more in flux than the preceding ones, so perhaps I’ll point directly to this one for specificity about exactly which aspect(s) I’m talking about.

The thinking here (to give a summary that may not exactly reflect any particular person’s thinking or writing, but which I hope gives the idea) is that (A) there is a possible future in which there are a really enormous (whatever you’re thinking, bigger than that) number of (trillions of) people living lives of positive value, (B) compared to the value of that future, anything that happens to the comparatively tiny number of current people is unimportant, therefore (C) it’s morally permissible, even morally required, to do whatever will increase the likelihood of that future, regardless of the effects on people today. And in addition, (D) because [person making the argument] is extremely smart and devoted to increasing the likelihood of that future, anything that benefits [person making the argument] is good, regardless of its effects on anyone else who exists right now.

It is, that is, a justification for the egoism of billionaires (like just about anything else your typical billionaire says).

Those who have been following along will probably realize the problem immediately: it’s not the case that the only two possible timelines are (I) the one where the billionaires get enough money and power to bring about the glorious future of 10-to-the-power-54 people all having a good time, and (II) the one where billionaires aren’t given enough money, and humanity becomes extinct. Other possibilities include (III) the one where the billionaires get all the money and power, but in doing so directly or indirectly break the spirit of humanity, which as a result becomes extinct, (IV) the one where the billionaires see the light and help do away with capitalism and private property, leading to a golden age which then leads to an amount of joy and general utility barely imaginable to current humans, (V) the one where the billionaires get all the money and power and start creating trillions of simulated people having constant orgasms in giant computers or whatever, and the Galactic Federation swings by and sees what’s going on and says “Oh, yucch!” and exterminates what’s left of humanity, including all the simulated ones, and (VI) so on.

In retrospect, this counterargument seems utterly obvious. The Long-termists aren’t any better than anyone else at figuring out the long-term probabilities of various possibilities, and there’s actually a good reason that we discount future returns: if we start to predict forward more than a few generations, our predictions are, as all past experience shows, really unreliable. Making any decision based solely on things that won’t happen for a hundred thousand years or more, or that assume a complete transformation in humanity or human society, is just silly. And when that decision just happens to be to enrich myself and be ruthless with those who oppose me, everyone else is highly justified in assuming that I’m not actually working for the long-term good of humanity, I’m just an asshole.

(There are other problems with various variants of long-termism, a notable one that they’re doing utilitarianism wrong and/or taking it much too seriously. Utilitarianism can be useful for deciding what to do with a given set of people, but it falls apart a bit when applied to deciding which people to have exist. If you use a summation you find yourself morally obliged to prefer a trillion barely-bearable lives to a billion very happy ones, just because there are more of them. Whereas if you go for the average, you end up being required to kill off unhappy people to get the average up. And a perhaps even more basic message of the Omelas story is that utilitarianism requires us to kick the child, which is imho a reductio. Utilitarian calculus just can’t capture our moral intuitions here.)

Coda

And that’s pretty much that essay. :) Comments very welcome in the comments, as always. I decided not to all any egregious pictures. :)

It was a lovely day, I went for a walk in the bright chilliness, and this new Framework laptop is being gratifyingly functional. Attempts to rescue the child from the Omelas basement continue, if slowly. Keep up the work!

2022/11/21

NaNoWriMo 2022, Fling Thirty-Seven

There are books everywhere; this makes me happy. There is diffuse moonlight coming in through the windows; this also makes me happy.

Essentially none of the books here are in any of the very few languages that I can read. This also makes me happy, in a way. There is so much to know already, this only emphasizes the point. And some of them have really interesting illustrations.

The books fill the shelves, and lie in piles on the floor. I walk from place to place, and sometimes sit on something that may be a chair. It’s just like home.

As the being known as Tibbs negotiates with the locals to get us access to the zone containing the confluence, and Steve and Kristen wander the city like happy tourists (well, that is not a simile, really; they wander the city as happy tourists), I have drifted here, which feels entirely natural.

And now, having communicated the above to my hypothetical reader without becoming distracted (except for that one parenthetical about simile versus plain description), I can calmly note that these things may not be “books” in the obvious sense, that moonlight coming in through the windows is a phenomenon that quantum physics can just barely explain, and for that matter that “makes me happy” is enough to keep a phalanx (a committee, a department, a specialty) of psychology and anthropology scholars occupied for a very long time.

And that time is an illusion.

I’ve always been able to separate language from thoughts about language, to separate thoughts about reality from meta-thoughts about thoughts. At least in public. At least when talking to other people.

But I think I’m better at it now, even when I’m in private, just talking to myself, or writing for you, dear cherished hypothetical reader (cherished, inter alia, for your hypothetically inexhaustible interest and patience).

Ironically (so much to learn about irony!), I credit much of this improvement to long discussions (how long? how does the flow of time go in the cabin of an impossibly fast sharp vehicle, speeding twinnedly from one end to another of an infinite rainbow band?) with an arguably non-existent being called Tibbs, and an enigmatic pilot called Alpha, after her ship.

(Why do we use the female pronoun toward the pilot Alpha? Why does she speak English? Or how do we communicate with her if she does not? Is the intricate shiny blackish plastic or metal construct at the front of her head a helmet, or her face? Is the rest of her a uniform, flight suit, or her own body, or some of each, or entirely both? Would it have been rude to ask?)

Tibbs and Alpha, I feel, are kindred spirits, my kindred, beings blessed or cursed with a tendency to look through everything and try to see the other side, even knowing there is finally no other side, to overthink, to overthink the notion of overthinking. But they have, perhaps, had longer to get used to it.

The being Tibbs claims to be millions of years old, while also claiming to have slept for most of that time. The Pilot Alpha suggests, by implying (how? what words or gestures did she actually use?) that questions of her origin are meaningless, that she has always existed, or has at least existed for so long that information about her coming-to-be is no longer available in the universe, having been lost to entropy long since.

(At what level is entropy real? Time is an illusion; so is entropy a statement about memory? A statement about what we remember, compared to what we experience right now and anticipate, right now, about the future? Or should we only talk about entropy after we have thoroughly acknowledged that time is an illusion, but gone on to speak about it as though it were real anyway, with only a vague notion, an incomplete explanation, of why that is okay?)

Here is a thought about the illusory nature of the past and future: given that this present moment is all that exists, then all that exists outside of memory and anticipation, is this one breath, this one side of this one room containing these shelves and piles (never enough shelves!) of books, or the appearance of books.

Everything else, the long / short / timeless journey aboard the fast sharpness Alpha, the anticipation felt while listening to the sound of something like wind just before meeting Tibbs for the first time, Kristen’s palm against my cheek in that other library, the glittering brass something that she received from the Mixing, the fine hairs at the back of her neck, all of that is only (“only”?) memory. Does that mean that it is no more important, no more valid, no more real, than anything else purely mental? No more significant than a wondrous pile of multi-colored giant ants made of cheese, singing hypnotic songs, that I have just this moment imagined, and now remember as a thing I first thought of a few moments ago?

This seems… this seems wrong. (See how the ellipsis there, if you are experiencing these words in a context in which there was one, adds a certain feel of spoken language, and perhaps therefore conveys some nuance of emotion that otherwise would be missing? That is communication, in a complicated and non-obvious form.)

Here is a hypothesis put forward I think by the being Tibbs, as it (he? she? they? they never expressed a preference that I can recall) as they moved slowly and in apparent indifference toward gravity around the front of the cabin of the Alpha: Some of the things, people, situations, events, in memory are especially significant (valid, important, “real”) because they changed me. Others, while equally (what?) wonderful themselves, like the pile of cheese-ants, did not have as much impact, or the same kind of impact.

If we could work out a good theory of what it means for an object or event (or, most likely, an experience) to change a person, given that time is an illusion, then this seems promising.

The Pilot Alpha seemed in some way amused by my desire, or my project, to develop a systematic justification for (something like) dividing the world (dividing memory) into “real” things and “imagined” things, with the former being more important or more valid (or, as the being Tibbs suggested, more cool) than the latter. Amused in a slightly condescending way, perhaps, which is fine for a possibly-eternal being, but which (equally validly) I am personally rather sensitive to, given my own developmental history.

The being Tibbs, however, was not accurate in referring to my just-subsequent behavior as “a snit”.

The moonlight coming through the windows (however that might be occurring) is diffuse because it comes through the visually-thick atmosphere of this world, or this area of this world. It seems implausible that we can breathe the atmosphere without danger; is this evidence that we are in a virtuality? Is it reasonable that I nearly wrote “a mere virtuality”? Was that because “mere” would have had a meaning there that I would have been expressing? (What is it to “express” a “meaning”?) Or only because “mere virtuality” is a phrase that occurs in my memory in (remembered) situations with (for instance) similar emotional tone? What is emotional tone?

I anticipate that the being Tibbs will return to this long library room within a certain amount of time, most likely with some information to convey (what is it to “convey” some “information”?) about our continuing travels (why are we travelling? what is “travel”?). I anticipate that (the humans) Kristen and Steve will return to this long library room within a certain amount of time, most likely exchanging cute little looks and possibly holding hands, possibly having acquired some odd or ordinary souvenir in the bazaars of the city (but is this a city? does it have bazaars? what counts as a bazaar?).

And I look forward to whatever follows.

Fling Thirty-Eight

2022/11/12

NaNoWriMo 2022, Fling Twenty-One

“Here again, eh? How’s the metal bar coming? My-head-wise, that is?”

He had opened his eyes, again, to see Colin and Kris sitting on the ground with him. It was like no time had passed at all, but also like that last time had been a long time ago.

“No worse, but no progress, I’m afraid,” Colin said.

“How long has it been?”

“Four days since the accident.”

“Not too bad. Is it, um, is it more stuck than they thought?”

The virtuality seemed thinner and greyer, and the clouds were more like wisps and rivers of mist, moving faster than the wind between the mountains.

Kristen moved closer to him and rubbed his back. He felt it in a vague and indirect way; it still felt good.

“Yeah,” she said, “they tried once, but they … didn’t like how it was going.”

“Am I gonna die, then?”

“Probably eventually,” Colin said. Kris rolled her eyes.

A strange wind seemed to blow through the virtuality, through him. He felt himself thinning out somehow, and his viewpoint rising into the air.

“Whoa,” he said, and his voice came to him strange and thready.

Colin and Kris stood up, in the virtuality, and looked up toward his viewpoint.

“What’s happening?” he said, his voice still fluttering.

“I’m … not sure,” Kris said.

“Probably just the fMRI connection again?” Colin said, uncharacteristically uncertain.

“Booooo!” Steve said, his viewpoint now moving up and down and bobbing side to side. As far as he could see his own body, it was stretched out and transparent. “I’m a ghooooost!”

Kris put her hand to her forehead and looked down.

“Stop that,” she said, “at least if you can.”

Steve tried to concentrate, to focus on the patterns and concentrations of being in the virtuality, and his viewpoint moved downward slowly.

“Here you come,” Colin smiled.

Steve watched himself re-form with curiosity. “Was that supposed to happen?” he asked.

“Not… especially.”

“Are they working on me again, trying to get the thing out?”

“No, they were just doing some more scans and tests.”

“Including how I interact with you guys?”

“Like last time,” Colin nodded.

“Then why –“

Then there was another, much stronger, gust of that wind, and Steve felt himself torn away, stretched out into mist, and blown somewhere far away.

There was an instant, or a day, of darkness.

“Hello?”

“Steve?” It was Kristen’s voice, somewhere in this dark place.

“Kris? Are we, I mean, is this the real world again?”

“I don’t know.”

“What’s real, after all?”

“Colin, you nerd, where are you?”

“Apparently in absolute darkness, where all I can do is speak, and hear you two.”

“Is this part of your virtuality, Kris?”

“No, I mean, it’s not supposed to be. Not Hints of Home or the fork that I made for you, or any other one I’ve made.”

“It’s really simple, anyway.”

“Could be a bug.”

Steve tried to move, tried to see. It was like trying to move your arm, rather than actually moving it. It was like trying to open your eyes in one of those dreams where you can’t open your eyes.

“You two can just duck back out for a second and see what’s going on, right?”

“That, well, that seems like it’s a problem, too,” Colin said, “at least I can’t duck out; you, Kris?”

“Nope, me, either; it’s heh weird. What a trope.”

“This is always happening to main characters,” Colin said, “so I guess we’re main characters.”

“You could be figments of my imagination now,” Steve said, “like, that wind could have been the virtuality’s interpretation of some brain thing happening, and now I’m totally comatose and hallucinating, and you two are still back there, and I’m just hallucinating that your voices are here.”

“Babe–” Kris started.

“It’s true,” Colin said, “if we were imaginary, it would probably be easier to imagine just our voices, and not have to bother with faces and movement and so on.”

“Oh, that’s helpful!”

“You guys trying to convince me you’re real, through pointless bickering?”

“No, but would it work?”

“It might. I’m hating this darkness, could everyone try to summon up some light?”

There was a short silence, then Steve felt a sort of vibration through everything, and a dim directionless light slowly filled the nearby space.

“That worked.”

“As far as it goes.”

“We still look like us.”

“This isn’t what I was wearing, though.”

“Colin’s the same.”

“Well, what else does he ever wear?”

“Hey!”

Colin did indeed look as he had earlier in the virtuality, as perfectly and nattily suited as ever. Kristen, on the other hand, was wearing a loose flowered dress, and Steve was in a well-tailored suit himself, less formal (and, he imagined, more comfortable) than Colin’s, but still he thought rather elegant.

“This is very gender normative,” Kristen said, standing up and slowly turning around, “but I like it.”

“Where are we?” Steve said, “and why can’t you guys duck out? I know I can’t because I’m the patient and my body’s sedated, but…”

“Wow, I hope we’re okay,” Kristen said.

“If something had happened to our bodies, we should have gotten a warning, and probably pulled out automatically,” Colin said logically.

“I don’t know,” said Steve, “the hallucination theory still seems pretty good.”

“That way lies solipsism,” pointed out Kris. She spun over to Steve and touched his shoulder.

“I felt that,” he said.

She frowned. “Me, too. Really well.”

“See? Hallucinations.”

“I’d know if I was a hallucination,” Kris said.

Colin was walking around at the edge of the lighted circle.

“I wonder if this is all there is,” he said.

“It’s a small hallucination, sorry,” said Steve.

“This could be the whole universe,” he said, “although I seem to remember lots more stuff.”

“Colin–“

“This present moment is all that exists,” Colin said, “and all the other stuff is just a memory, that also exists right now.”

“Here he goes.”

“It might pass the time.”

“Shouldn’t we try to be, like, getting back to the real world, making sure our bodies are all okay…”

“If you can think of a way to do that…”

“Good point.”

Colin walked back into the center of the lighted circle, and the three sat down on the plain flat ground again, close to each other, surrounded by darkness all around.

Fling Twenty-two

2022/11/12

NaNoWriMo 2022, Fling Twenty

Everything comes together deep down.

The gentle tendrils of the mushrooms and the fungi, the mycelia, form into knots beneath the damp ground, and those knots reach out and connect to each other, knots of knots of knots connected in a single vast sheet below the world.

The fungi do not think.

But they know.

There are more connections in the mycelia of the rich dark earth of a single farm, than in the brain of the greatest human genius.

But they do not think.

The stars are connected, by channels where gravity waves sluice in and out of the twelve extra dimensions of the universe, the ones whose nature we haven’t figured out yet.

The stars… the stars think.

But they do not know.

The fungal and stellar networks found each other and connected a long time ago.

Every tree and every stone, every mammal footstep, every shovel of earth. Every spaceship and satellite launch.

They are always watching.

Or no.

Every tree, stone, footstep, and every launch, are part of the network already.

Every tree, stone, footstep, and every launch, is just the galactic star-fungus network, thinking, and knowing.

“Really?”

“I mean, absolutely. There’s no way it could be false.”

“They’re connected? We’re … part of their giant brain?”

“Of course. Everything is part of everything.”

“I — but if it isn’t falsifiable…?”

“That’s right, it’s not really a scientific theory. It’s more a way of thinking.”

“A religion?”

“A philosophy, more.”

“But if it isn’t true…”

“Oh, it’s true.”

“Stars and fungus… sounds sort of paranoid.”

“Nah, it’s just how the universe is; everything is connected, and the fungi and the stars more than anything else.”

“How did they find each other?”

“How could they not have? It was inevitable. Necessary.”

The stars and the mycelium resonate as the ages roll on. Life comes into being, and the network reacts, rings, with pure tones in every octave of the spectrum. War is a rhythmic drumming; peace is a coda, or an overture. And death is percussion.

Deep in the space between the stars, there are nodes where major arteries of coiled dimensions cross and knot, just as the mycelia cross and knot deeper and deeper into the intricate ground. In the space around a star-node, in the stone circles above the spore-nodes, beings dance, constituting and manifesting the thoughts of the stars, and the knowledge of the mushrooms.

“Like, faerie circles? There are … star circles of some kind, out in space?”

“There are. Things gather at them, tiny things and big things, people from planets coming in their starships, and beings that evolved there in space, floating in years-long circles on the propulsion of vast fins pushing on interstellar hydrogen.”

“That seems like something that might not be true. What if we go out in a star ship sometime, and there’s nothing like that out there?”

“There is. An endless array of them.”

“How do you know that?”

Those who dance at the nodes of the stars and the fungi, over the centuries, absorb the thinking and knowledge of the infinite universe. Whence our stories of wise ones, of wizards, of the Illuminati. Whence the yearning songs of the star-whales, of forgotten ancient wisdom, and secret covens in the darkness.

Those who evolve on planets have an affinity to the fungal nodes. Those who evolve between the stars have an affinity for the stellar nodes. They complement and complete each other.

No planetary culture is mature until it has allied with a stellar culture.

No stellar culture is mature until it has allied with a planetary culture.

“So are the, y’know, the Greys, are they visiting Earth to see if we’re worthy of allying with? Are they, like, an immature stellar culture looking for a fungus-centered culture to hook up with?”

“I don’t know.”

“You don’t know? Haven’t you heard about it in the fungusvine?”

“Fungusvine, funny.”

“Myceliavine?”

Everything comes together deep down.

The semantic tendrils of the realities extend, purely by chance, into the interstices between universes. Over endless time, over expansions and collapses, rollings in and rollings out, the tendrils interact, purely by chance, and meaning begins to flow.

Knots, and knots of knots, and knots of knots of knots, forming a vast extradimensional network that binds the realities together.

Every reality is underlain by its own networks, of mycelia and gravitational strings, or aether winds and dragon spines, the thoughts of Gods and the songs of spirits, or thrint hamuges and the fletts of tintans. And the network of each reality connects to the extradimensional network, and thereby to everything else.

Every tree, stone, footstep, and every starship launch, is part of the unthinkably vast mind of the universe, heart of the universe, the sacred body of everything, in the largest sense.

“Ooh! Are there, like, reality-witches, who find notes in the network between the realities, and have dances and stuff there, and slowly gather extradimensional wisdom?”

“Of course, there are!”

“I want to be one of those.”

“Oh, you will.”

The mind, heart, interconnected web of the universe, the multiverse, thinks (and feels and knows) slowly, deliberately. For a single impulse to travel from one end to the other, if the web had ends, would take almost an eternity. But for the resonating tone, the mood, the energy fluxes, of the network to change, all over, from end to end, takes only an instant.

“Wouldn’t that violate the speed of light and all?”

“Different realities, different speed limits.”

“I don’t know, it seems like you could use that to cheat.”

“You absolutely can.”

It is a category mistake to think that because All Is One, I can make a free transcontinental phone call.

But it is universally true that the extradimensional web of interconnections holds ultimate wisdom.

You are a neuron of the multiversal Mind, you are a beat of the multiversal Heart. You resonate always in harmony with its thoughts, its knowings, its feelings. You can accept the harmony or try to reject it, and either way you are sending your signal from one reality to another, and your breathing is a message to another universe.

Fling Twenty-One

2022/11/07

NaNoWriMo 2022, Fling Twelve

Light passes through windows. This is a puzzle. This is a complicated story, a story that no one understands, in four words.

Beams of sunlight pass through the library windows, making patterns on the wall.

Sitting where I sit, among the shelves and the piles of books, I see beams of sunlight passing through the library windows, making patterns on the wall.

My evidence for the existence of beams of sunlight is (at least) in two parts: I see dust motes dancing (dancing? what is it to dance? what kinds of things can dance?) in the sunbeams, visible (to me) in the light, where in other parts of the (dusty) library air, there are no (to me) visible dust motes dancing (or otherwise) in the air, and one explanation (is it the best one? what is best?) is that there is a sunbeam, or at least a beam of light, passing through the air there. (How does a beam of light pass through the air, let alone through the glass of the windows?)

The second part of my evidence is the patterns on the wall. I know, or I remember (in memories that I have now, at this moment, the only moment, the only entity, that exists) that the wall is really (what could that possibly mean?) more or less uniform in color; vaguely white, and the same vague whiteness more or less all over. But what I see, or what I see at some level of awareness (what are levels of awareness? what is awareness? who is aware?) is a complex pattern of light and dark, triangles and rectangles and more complex figures laid out and overlapping, and I theorize (automatically, involuntarily, whether or not I intend to) that these brighter shapes and triangles are where the sunbeam (passing through empty space, and then the air, the window, the air again) strikes the wall, and the darker areas, the ground to the sunbeam’s figure, are simply the rest, the shadows, where the sunbeam does not strike, or does not strike directly.

(Or the dark places are where the frames of the window and the edges of shelves and chairs, things outside and things inside, cast their shadows, and the light places, the ground to the shadows’ figure, is the rest, where the shadows do not fall; figure is ground and ground is figure.)

Can we avoid all of this complexity, if we hold Mind as primary? I am. Or, no, as generations of philosophers have pointed out, feeling clever to have caught Descartes in an error, not “I am” but only something along the lines of “Thought is”. If there is a thought that “thought is”, that can hardly be in error (well, to first order). But if there is a thought “I think”, that could be an error, because there might be no “I”, or it might be something other than “I” that is thinking.

Second attempt, then! Can we avoid all of this complexity, if we start with “Thought is”? Or “Experience is”?

Experience is. There is this instant of experience. In this instant of experience, there is a two-dimensional (or roughly two-dimensional) surface. Whether or not there is an understanding of dimensions and how to count them, there is either way still a two-dimensional surface, here in this experience. In some places, the two-dimensional surface is bright. In some places, it is dark; or it is less bright.

Whether or not there is any understanding of brightness and darkness, what might lead to brightness and darkness, anything about suns or windows or how they work, there is still this brightness, in this single moment of experience, and there is still this darkness.

(Whether the darkness is actually bright, just not as bright as the brightness, whether the surface is really two-dimensional in any strong sense, or somewhere between two and three dimensions, or whether dimensions are ultimately not a useful or coherent concept here, there is still, in this singular moment of experience that is all that there is, this experience, this experience, which is what it is, whether or not the words that might be recorded here as a result of it (and whatever “result” might mean) are the best words to describe it.)

And whether it is described at all, whether description is even possible, does not matter; the main thing, dare I write “the certain thing”, is that this (emphasized) is (similarly emphasized).

So.

From this point of view, we may say, Mind is primal. Mind, or whatever we are attempting successfully or unsuccessfully to refer to when we use the word “Mind”, does exist, or at any rate has whatever property we are attempting to refer to when we say “does exist”. Except that “refer to” and “property” are both deeply problematic themselves here.

This is why the ancient Zen teachers (who exist, in this singular present moment of experience, only as memories of stories, memories that exist now) are said to have, when asked deep and complex questions through the medium of language, and impossibly concerning language, responded with primal experience, with blows or shouts or (more mildly) with references to washing one’s bowl, chopping wood, carrying water.

We can remember that there is this. So, what next?

Language (this language, many other languages, those languages that I am familiar with, that I know of) assumes time.

Without the concept of time, it is difficult to speak, to write, to hypothesize.

Let alone to communicate.

To communicate!

The impossible gulf: that you and I both exist (what does it mean to exist?) and that by communicating (whatever that might be) one or both of us is changed (otherwise why bother?). But change, like time (because time), is an illusion.

So!

Is it necessary (categorically, or conditionally) to participate in the illusion? Or pretend? To act (or to speak, as speech is after all action) as though time and change were real?

The sun comes through the windows and casts shadows on the wall. Is there someone at the door?

Fling Thirteen

2022/10/08

Simulation Cosmology at the Commune

Last night I visited the Commune in my dreams. I’ve been there before, to the extent that that statement means anything; it’s a great dim ramshackle place full of omnigendered people and spontaneous events and kink and music and general good feeling. I’d love to get there more regularly (and in fact this time one of the regulars teased me for always saying how at home I felt, and yet only visiting once in a while; I said that it wasn’t that simple [crying_face_emoji.bmp]).

As well as having ice cream, I took part in a spirited discussion about whether this reality is just a simulation running in some underlying reality, and whether that underlying reality is also a simulation, and then eventually the question of whether that series has to eventually end in some reality that isn’t a simulation, or if it can just go on forever. (The question of whether we might be in, not a simulation, but a dream, and the extent to which that’s a different thing anyway, didn’t arise that I recall.)

I remember trying to take a picture with my phone, of some book that was interesting and relevant to the discussion, and of course it was a dream so the phone camera would only take a picture a few seconds after I pushed the button, and it was frustrating. (Also there was some plot about someone releasing very small robots to find and blow up the dictator of a foreign country, recognizing them by their DNA, but that probably counts as “a different dream”, to the extent that that means anything.)

Then after waking up, I took a lovely walk outside in the sun and 50-something air, and it was lovely, and I thought thoughts and things. So now I am writing words!

Anything that is consistent with all of your (all of my, all of one’s) experience so far, is true, in the sense that it is true for someone out there in the possible worlds who is exactly identical to you in experience, memory, and so on. (Which is in a sense all that there is.) And so it would seem mean or something to say it wasn’t true.

So there are many incompatible things that are true; it is true that you live in a simulation running on a planet-size diamond computer created by intelligent plants, and it is also true that you live in a simulation running on the equivalent of a cellphone owned by a golden-thewed Olympian deity who does chartered accountancy in his spare time.

It is probably also true that you don’t live in a simulation at all, but in a free-floating reality not in any useful sense running on any underlying other reality, created by some extremely powerful being from the outside, if that can be made logically coherent. And also that you live in a non-simulation reality that just growed, so to speak, without external impetus.

(I notice that I keep writing “love” for “live”, which is probably an improvement in most ways.)

The “if that can be made logically coherent” thing is interesting; in my undergraduate thesis I had to distinguish between “possible worlds” and “conceivable worlds”, because I needed a word for a category of worlds that weren’t actually possible (because they contained Alternate Mathematical Facts, for instance) but were conceivable in that one could think vaguely of them; otherwise people would turn out to know all necessary truths, and clearly we don’t.

So now given this liberal notion of truth that we’re using here, are the negations of some necessary truths, also true? Is there a version of you (of me) that lives in a world in which the Hodge Conjecture is true, and another in which it is false? Not sure. Probably! :)

That is, given that in the relevant sense my current experience (this present moment, which is all that exists) doesn’t differentiate between Hodge and not-Hodge, there’s no reason to say either Hodge or not-Hodge, and therefore neither Hodge nor not-Hodge is any truer than the other.

Eh, what?

It’s an interesting question what would happen if I got and thoroughly understood a correct proof that, say, not-Hodge. Would there at that point no longer be any of me living in a universe in which Hodge? Given that I can (think that I) have thoroughly understood something that is false, there would still be me-instances in worlds in which Hodge, I’m pretty sure. Which kind of suggests that everything is true?

Given all that, it seems straightforwardly true for instance that you (we) live in a simulation running in a reality that is itself a simulation running in a … and so on forever, unlikely as that sounds. Is there some plausible principle that, if true, would require that the sequence end somewhere? It sort of feels like such a principle ought to exist, but I’m not sure exactly what it would be.

It seems that if there is anything in moral / ethical space (for instance) that follows from being in a simulation, it would then be difficult and/or impossible to act correctly, given that we are both in and not in a simulation. Does that suggest that there isn’t anything moral or ethical that follows from being in a simulation? (I suspect that in fact there isn’t anything, so this would just support that.)

It’s true that you can communicate with the underlying reality that this reality is running on, by speaking to a certain old man in Toronto. It’s also true that you can communicate with that underlying reality by closing your eyes and thinking certain words that have long been forgotten almost everywhere. You can even be embodied in a physical body in that underlying reality! You could make your way down the realities, closer and closer to the source (which is also an infinite way off!)!

If you were to arrange to be downloaded into an individual body in the underlying reality, would you want the copy of you in this reality to be removed at the same time? That would sort of seem like suicide; on the other hand leaving a you behind who will discover that they didn’t make it to the underlying reality, might be very cruel. Perhaps the left-behind you could be (has been) flashy-thing’d, per Men in Black.

Another thing that appears to be true: it’s very hard to use any of these AI things to create an image that accurately reflects or elicits or even is compatible with my dreams of visiting the Commune and having ice cream! Not sure what I’m doing wrong. Here is one that I like in some way, even though it doesn’t really reflect the dream. (The center divider is part of the image; it’s not two images.) Enjoy!

2022/09/03

Those Born in Paradise

My old Jehovah’s Witness friend, from all those Saturdays ago, stopped by this morning! He says he’s just started doing door-to-door work again since the Pandemic started. He had with him a young Asian man, as they do, always traveling in pairs to avoid temptation and all.

As well as catching up on life and all, and him showing me the latest jay doubleyou dot org interactive Bible teachings and stuff, we talked a little about religion and philosophy.

He talked about how Jehovah has a name (“Jehovah”) as well as various titles (“God”, “Father”, etc), just like people do. (I didn’t ask where the name came from, although I am curious.) He said that, as with humans, Jehovah has a name because Jehovah is a person. I asked what that meant, and it came down to the idea that Jehovah has “a personality”. I tried to ask whence this personality came, and whether Jehovah could have had a different personality, but that was apparently a bit too advanced.

They claimed that one of Jehovah’s personality traits is humility, and this … surprised me. Their evidence for this was two pieces of Bible verse, one of which has nothing whatever to do with humility, the other being Psalms 18:35, which the KJV renders as:

Thou hast also given me the shield of thy salvation: and thy right hand hath holden me up, and thy gentleness hath made me great.

but the JW’s favorite translation, the New World Translation, has as:

You give me your shield of salvation,
Your right hand supports me,
And your humility makes me great.

Given all of the contrary evidence, about being jealous and wrathful and “where were you when the foundations of the Earth were laid?”, I was not convinced of the humility thing, and we sort of dropped it.

(The Hebrew is apparently “עַנְוָה” (wheee, bidirectional text!), which is variously translated as either “gentleness” or “humility” or “meekness”, with suggestions of “mercy”; imho “gentleness” makes more sense here, as I don’t know by what mechanism God’s humility would lead to David’s greatness, whereas God being gentle and merciful (about David’s flaws) is a better candidate.)

Anyway :) what I really wanted to talk about was the thing I’ve alluded to before, the puzzle where, in the JW theory, once we (well, the good people!) are in the Paradise Earth, there is still free will, and there is still sin, at a presumably small but still non-zero rate, and as soon as the sinner sins in their heart (before they can hurt anyone else) they just cease to be.

(I wrote a microfiction on this theme here, and it’s also a plot element in the 2020 NaNoWriMo novel. Just by the way. :) )

“Those Born in Paradise”, made with MidJourney of course

My concern with this JW theory was that, given eternity and free will, everyone will sin eventually, and so the Paradise Earth (and even Heaven, assuming the 144,000 are also like this, I’m not sure) will slowly slowly ever so slowly empty out! Uh oh, right?

But in talking to my JW friend (who opined that at least people wouldn’t sin very often, even though as I pointed out Adam and Eve were in roughly the same circumstances and they sinned like two hours in), it turns out that there is still birth on Paradise Earth!

That had not occurred to me. He was quick to point out that there wouldn’t be enough birth to make the place overcrowded (perhaps that’s something that lesser doubters bring up?). I said that sure, I guess there’s just enough to make up for the rate of insta-zapped sinners! (I did not actually use the term “insta-zapped”.)

So that solves that puzzle. It does seem inevitable that eventually the only people will be people who were born in the Paradise Earth (or heaven?), and who therefore didn’t have to go through the whole “world dominated by Satan” phase, but only learn about it in History class or something.

Which seems kind of unfair to the rest of us! But there we are. As I say, some interesting stories to be written in that setting.

Neither my JW friend nor the younger person he was going door-to-door with seemed entirely comfortable with my theory, even though it’s the obvious consequence of their beliefs. I hope I didn’t disturb their faith at all, hee hee. (I like to think that there is some sort of warning next to my address in their list of people to visit, not to send anyone unsteady in their faith; it’s not very likely, but I like to think it anyway.)

2022/04/30

“The Ruliad”: Wolfram in Borges’ Library

I think people have mostly stopped taking Stephen Wolfram very seriously. He did some great work early in his career, at Caltech and the Institute for Advanced Study, and (with a certain amount of intellectual property mess) went on to create Mathematica, which was and is very cool.

Then in 1992 he disappeared into a garret or something for a decade, and came out with the massive A New Kind of Science, which got a lot of attention because it was Wolfram after all, but which turned out to be basically puffery. And a certain amount of taking credit for other people’s earlier work.

Being wealthy and famous, however, and one imagines rather surrounded by yes-folks, Wolfram continues in the New Kind of Science vein, writing down various things that sound cool, but don’t appear to mean much (as friend Steve said when bringing the current subject to my attention, “Just one, single, testable assertion. That’s all I ask”).

The latest one (or a latest one) appears to be “The Ruliad”. Wolfram writes:

I call it the ruliad. Think of it as the entangled limit of everything that is computationally possible: the result of following all possible computational rules in all possible ways.

It’s not clear to me what “entangled” could mean there, except that it’s really complicated if you try to draw it on a sheet of paper. But “the result of following all possible computational rules in all possible ways” is pretty clearly isomorphic to (i.e. the same thing as) the set of all possible strings. Which is to say, the set of all possible books, even the infinitely-long ones.

(We can include all the illustrated books by just interpreting the strings in some XML-ish language that includes SVG. And it’s probably also isomorphic to the complete graph on all possible strings; that is, take all of the strings, and draw a line from each one to all of the others. Or the complete graph on the integers. Very entangled! But still the same thing for most purposes.)

Now the set of all possible strings is a really amazing thing! It’s incomprehensibly huge, even if we limit it to finite strings, or even finite strings that would fit in a reasonably-sized bound volume.
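To make “the set of all possible strings” concrete: it is countable, and a few lines of code can enumerate it lazily, shortest strings first. (A minimal sketch; the two-letter alphabet is just an illustrative assumption — the real Library uses a somewhat larger one.)

```python
from itertools import count, islice, product

def all_strings(alphabet="ab"):
    """Lazily enumerate every finite string over the alphabet, shortest first."""
    yield ""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

# The enumeration reaches any given finite string eventually:
first_seven = list(islice(all_strings("ab"), 7))
# first_seven == ["", "a", "b", "aa", "ab", "ba", "bb"]
```

Every finite book appears somewhere in this sequence; the infinite ones require a slightly fancier ordering, but the set is no less definite for that.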

And if we do that latter thing, what we have is the contents of the Universal Library, from Borges’ story “The Library of Babel”. As that story notes, the Library contains

All — the detailed history of the future, the autobiographies of the archangels, the faithful catalog of the Library, thousands and thousands of false catalogs, the proof of the falsity of those false catalogs, a proof of the falsity of the true catalog, the gnostic gospel of Basilides, the commentary upon that gospel, the commentary on the commentary on that gospel, the true story of your death, the translation of every book into every language, the interpolations of every book into all books, the treatise Bede could have written (but did not) on the mythology of the Saxon people, the lost books of Tacitus.

Borges — The Library of Babel

It also contains this essay, and A New Kind of Science, and every essay Wolfram will ever write on “the Ruliad”, as well as every possible computer program in every language, every possible finite-automaton rule, and to quote Wolfram “the result of following all possible computational rules in all possible ways.” (We’ll have to allow infinite books for that one, but that’s a relatively simple extension, heh heh.)

So, it’s very cool to think about, but does it tell us anything about the world? (Spoiler: no.) Wolfram writes, more or less correctly:

it encapsulates not only all formal possibilities but also everything about our physical universe—and everything we experience can be thought of as sampling that part of the ruliad that corresponds to our particular way of perceiving and interpreting the universe.

and sure; for any fact about this particular physical universe (or, arguably, any other) and anything that we experience, the Library of Babel, the set of all strings, the complete graph on all strings, “the Ruliad”, contains a description of that fact or experience.

Good luck finding it, though. :)

This is the bit that Wolfram seems to have overlooked, depending on how you read various things that he writes. The set of all strings definitely contains accurate statements of the physical laws of our universe; but it also contains vastly more inaccurate ones. Physicists generally want to know which are which, and “the Ruliad” isn’t much help with that.

Even philosophers who don’t care that much about which universe we happen to be in, still want correct or at least plausible and coherent arguments about the properties of formal systems, or the structure of logic, or the relationship between truth and knowledge, and so on; the Universal Library / “Ruliad” does contain lots of those (all of them, in fact), but it provides no help in finding them, or in differentiating them from the obviously or subtly incorrect, implausible, and incoherent ones.

There is certainly math that one can do about the complete graph over the set of all strings, and various subgraphs of that graph. But that math will tell you very little about the propositions that those strings express. It’s not clear that Wolfram realizes the difference, or realizes just how much the utter generality of “the Ruliad” paradoxically simplifies the things one can say about it.

For instance, one of the few examples that Wolfram gives in the essay linked above, of something concrete that one might study concerning “the Ruliad” itself, is:

But what about cases when many paths converge to a point at which no further rules apply, or effectively “time stops”? This is the analog of a spacelike singularity—or a black hole—in the ruliad. And in terms of computation theory, it corresponds to something decidable: every computation one does will get to a result in finite time.

One can start asking questions like: What is the density of black holes in rulial space?

It somewhat baffles me that he can write this. Since “the Ruliad” represents the outputs of all possible programs, the paths of all possible transition rules, and so on, there can be no fixed points or “black holes” in it. For any point, there are an infinite number of programs / rules that map that point into some other, different point. The “density of black holes in rulial space” is, obviously and trivially, exactly zero.
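The no-fixed-points observation is easy to make concrete: whatever point (string) you pick, the trivial family of rules “append a symbol” already maps it somewhere else, so no point is fixed under all rules. (A toy sketch; the alphabet is an arbitrary assumption.)

```python
def append_rules(alphabet="abc"):
    """A few of the infinitely many rules that move any string elsewhere:
    one 'append character c' rule per character in the alphabet."""
    return [lambda s, c=c: s + c for c in alphabet]

# No string is a fixed point of every rule: each append rule changes it.
point = "whatever"
moved = [rule(point) for rule in append_rules()]
assert all(new != point for new in moved)
```

Since “the Ruliad” follows all possible rules, any candidate “black hole” is escaped by infinitely many rules like these.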

He also writes, for instance:

A very important claim about the ruliad is that it’s unique. Yes, it can be coordinatized and sampled in different ways. But ultimately there’s only one ruliad.

Well, sure, there is exactly one Universal Library, one set of all strings, one complete graph on the integers. This is, again, trivial. The next sentence is just baffling:

And we can trace the argument for this to the Principle of Computational Equivalence. In essence there’s only one ruliad because the Principle of Computational Equivalence says that almost all rules lead to computations that are equivalent. In other words, the Principle of Computational Equivalence tells us that there’s only one ultimate equivalence class for computations.

I think he probably means something by this, well maybe, but I don’t know what it would be. Obviously there’s just one “result of following all possible computational rules in all possible ways”, but it doesn’t take any Principle of Computational Equivalence to prove that. I guess maybe if you get to the set of all strings along a path that starts at one-dimensional cellular automata, that Principle makes it easier to see? But it’s certainly not necessary.

He also tries to apply terminology from “the Ruliad” to various other things, with results that generally turn out to be trivial truths when translated into ordinary language. We have, for instance:

Why can’t one human consciousness “get inside” another? It’s not just a matter of separation in physical space. It’s also that the different consciousnesses—in particular by virtue of their different histories—are inevitably at different locations in rulial space. In principle they could be brought together; but this would require not just motion in physical space, but also motion in rulial space.

What is a “location in rulial space”, and what does it mean for two things to be at different ones? In ordinary language, two things are at different points in “rulial space” if their relationships to other things are not the same; which is to say, they have different properties. (Which means that separation in physical space is in fact one kind of separation in “rulial space”, we note in passing.) So this paragraph says that one human consciousness can’t get inside another one, because they’re different in some way. And although you might somehow cause them to be completely identical, well, I guess that might be hard.

This does not seem like a major advance in either psychology or philosophy.

Then he gets into speculation about how we might be able to communicate between “different points in rulial space” by sending “rulial particles”, which he identifies with “concepts”. The amount of hand-waving going on here is impressive; Steve’s plea for a falsifiable claim is extremely relevant. In what way could this possibly turn out to be wrong?

(It can, on the other hand, easily turn out to be not very useful, and I think so far it’s doing a good job at that.)

He also proceeds, hands still waving at supersonic speed, to outline a Kantian theory that says that, although “the Ruliad” contains all possible laws of physics, we seem to live in a universe that obeys only one particular set of laws. This, he says, is because “for observers generally like us it’s a matter of abstract necessity that we must observe general laws of physics that are the ones we know”.

What “observers like us” means there is just as undefined as it was when Kant wrote the same thing only with longer German words. He goes on like this for some time, and eventually writes:

People have often imagined that, try as we might, we’d never be able to “get to the bottom of physics” and find a specific rule for our universe. And in a sense our inability to localize ourselves in rulial space supports this intuition. But what our Physics Project seems to rather dramatically suggest is that we can “get close enough” in rulial space to have vast predictive power about how our universe must work, or at least how observers like us must perceive it to work.

which is basically just gibberish, on the order of “all we have to do is find the true physics text in the Universal Library!”.

It’s hard to find anyone but Wolfram writing on “the Ruliad” (or at least I haven’t been able to), but the Wolfram essay points to an arxiv paper “Pregeometric Spaces from Wolfram Model Rewriting Systems as Homotopy Types” by two authors associated with Wolfram Research USA (one also associated with Pompeu Fabra University in Barcelona, and the other with the University of Cambridge in Cambridge, and one does wonder what those institutions think about this). That paper notably does not contain the string “Ruliad”. :)

I may attempt to read it, though.

2022/04/29

God is not a source of objective moral truth

I mean, right?

I’ve been listening to various youtubers, as mentioned forgetfully in at least two posts, and some of them spend considerable time responding to various Theist, and mostly Christian, Apologists and so on.

This is getting pretty old, to be honest, but one of the arguments that goes by now and then from the apologists is that atheists have no objective basis for moral statements; without God, the argument goes, atheists can’t say that torturing puppies or whatever is objectively bad. Implicit, and generally unexamined, is a corresponding claim that theists have a source of objective moral statements, that source being God.

But this latter claim is wrong.

What is an objective truth? That is a question that tomes can be, and have been, written about, but for now: in general an objective truth is a statement that, once we’re clear on the meanings of the words, is determinately true or false. A statement on which there is a fact of the matter. If Ben and I can agree on what an apple is, which bowl we’re talking about, what it means to be in the bowl, and so on, sufficient to the situation, then “there are three apples in the bowl” is objectively true, if it is. If Ben insists that there are six apples in the bowl, and we can discover that for some odd reason Ben uses “an apple” to refer to what we would think of as half an apple, we have no objective disagreement.

What is a moral truth? Again, tomes, but for now: a moral truth is (inter alia) one that provides a moral reason for action. A typical moral truth is “You should do X” for some value of X. In fact we can say that that (along with, say, “You should not do X”) is the only form of moral truth. No other fact or statement has moral bearing, unless it leads to a conclusion about what one should do.

(We will take as read the distinction between conditional and categorical imperatives, at least for now; we’re talking about the categorical imperative, or probably equally well about the “If you want to be a good person, you should X” conditional one.)

What would an objective moral truth look like, and where would it come from? We would have to be able to get to a fact of the matter about “You should do X” from things about which there are facts of the matter, modulo word meanings. The theist is almost certainly thinking that the argument is simple and looks like:

  • You should do what God wants,
  • God wants you to do X,
  • You should do X.

Since we’re talking about whether the theist’s argument works, we stipulate that God exists and wants you (me, us, etc.) to do X for some X. And if we should do what God wants, we should therefore do X.

But is it objectively true that we should do what God wants?

If I disagree, and say that I don’t think we should do what God wants, the theist can claim that we differ on the meanings of words, and that what they mean by “should do” is just “God wants you to do”. But that’s not very interesting; under those definitions it’s just a tautology, and “you should do X” turns out not to be a moral truth, since “should do X” may no longer be motivating.

To get further, the theist will have to claim that “God wants you to do X” implies “You should do X” in the moral sense of “should”; that it’s objectively motivating. And it’s not clear how that would work, how that claim is any stronger than any other. A utilitarian can equally say “X leads to the greatest good for the greatest number” is objectively motivating, a rule-utilitarian can say that “X follows the utility-maximizing feasible rules” is objectively motivating, and so on.

(“You should do X because God will punish you if you don’t” can be seen as objectively motivating, but not for moral reasons; that’s just wanting to avoid punishment, so not relevant here.)

Why would someone think that “You should do what God wants you to do” is any more objectively true than “You should do what maximizes utility” or “You should do what protects your family’s honor”? I don’t find myself with anything useful to say about that; because they grew up hearing it, or they’ve heard it in Church every Sunday or whatever, I suppose?

So that’s that. See title. :) Really we probably could have stopped at the first sentence.

2022/04/10

How about that Kalam argument?

While we’re talking about philosophical arguments for the existence of God, we should apparently consider the so-called Kalam argument.

In its simplest form it’s nice and short:

  1. Whatever begins to exist has a cause of its beginning.
  2. The universe began to exist.
  3. Therefore, the universe has a cause to its beginning.

This is, obviously, an argument for the existence of God only if God is defined as “A cause of the beginning of the universe” and nothing further, which doesn’t seem all that significant, but still. There are further associated arguments attempting to extend the proof more in the direction of a traditional (i.e. Christian) God, being “personal” and all, but let’s look at the simple version for now.

I think it’s relatively straightforward that the conclusion (3) follows from the premises (1) and (2), so that narrows it down. Now, are (1) and (2) true?

First we should figure out what we mean by “the universe”, because that matters a lot here. Three possible definitions occur immediately, in increasing size order:

(U1) All of the matter and energy that’s around, or has been around as far back in time as we can currently theorize with any plausibility. All of the output of the Big Bang, more or less.

(U2) Anything transitively causally connected in any way to anything in U1. Everything in the transitive closure of past and future light-cones of me sitting here typing this (which is, at least arguably, the same as everything in the transitive closure of past and future light-cones of you standing there reading this).

(U3) Anything in any of the disjoint transitively-causally-connected sets of things that are picked out in the same way that U2 is, starting from different seed points that aren’t transitively-causally-connected. The “multiverse”, if you will, consisting of all those things that aren’t logically (or otherwise) impossible.
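The U2 construction is just a transitive closure over the “causally connected” relation, and U3 is the union of all the disjoint islands that construction can produce. A toy sketch (the numbered events and the links between them are invented purely for illustration):

```python
def causal_closure(seed, neighbors):
    """Everything transitively causally connected to the seed:
    a breadth-first closure over a 'causally connected' relation."""
    seen = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for x in frontier:
            for y in neighbors(x):
                if y not in seen:
                    seen.add(y)
                    nxt.append(y)
        frontier = nxt
    return seen

# Toy example: two disjoint causal islands, {0, 1, 2} and {10, 11}.
links = {0: [1], 1: [0, 2], 2: [1], 10: [11], 11: [10]}
assert causal_closure(0, lambda x: links.get(x, [])) == {0, 1, 2}
assert causal_closure(10, lambda x: links.get(x, [])) == {10, 11}
```

U2 is the closure starting from (say) me typing this; the two disjoint islands above are a miniature picture of why U3 can contain things with no causal connection to us at all.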

It’s interesting to note here that “the universe” as used in the Kalam can be at most U1. This is because nothing outside of U2 can be causally connected to, can create or cause or otherwise have any effect at all on, anything in U2. Anything that claims to be a cause of U2 or U3 is either not actually a cause, or is part of U2 or U3 by virtue of being causally connected to it.

This works, I think, via (2) in the argument above; U1 might plausibly be said to have begun to exist, but it’s hard to see how U2 or U3 could.

Or, I dunno, is that true? We can certainly imagine that U2, that is, this universe right here, somewhat broadly construed but still undeniably this one, did just start up at some time T0. It could, I suppose, turn out to be a fact that at all times T >= T0 there are some facts to write down about this universe, but at times T < T0 there simply aren’t.

The reaction of Kalam proponents to that suggestion seems to be just incredulity, but in general I don’t see anything wrong with the idea; a universe simply coming into being doesn’t seem logically contradictory in any way. We can certainly write down equations and state transitions that have a notion of time, and that have well-defined states only at and after a particular time; it’s not hard.
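For instance, here is a toy state-transition “universe” whose states are well-defined at and after T0 and simply absent before it. (A sketch; the doubling rule is an arbitrary illustrative choice — any transition rule would do.)

```python
def universe_state(t, t0=0):
    """A toy 'universe': states exist at and after t0, and there is
    simply no fact of the matter about its state before t0.
    The state is just an integer that doubles each tick, starting at 1."""
    if t < t0:
        return None  # no state, no facts, nothing to write down
    return 2 ** (t - t0)

assert universe_state(-1) is None  # before T0: nothing
assert universe_state(0) == 1      # at T0: the universe begins
assert universe_state(3) == 8      # and evolves lawfully thereafter
```

Nothing here is contradictory or even odd; the system just has a first moment, with no appeal to anything outside it.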

So I guess, even if the Kalam must mean U1 by “universe” even in its first premise, (1), there’s no strong reason to think that (1) is true even then. This universe right here, this collection of matter and energy, could have just sprung into existence eight billion years ago or whatever, without any particular cause. Why not?

Premise (2) is less ambitious, and therefore more plausible. Did this particular batch of matter and energy, U1, begin to exist at some time? Could be. I mean, I can’t prove it or anything, and neither can anyone else, but I might be willing to stipulate it for the sake of argument.

(Even U2 might have, although the Kalam proponent probably has to disagree with that: since they want to have a backwards-eternal God creating U1, that means that that God counts as part of U2, which means that U2 is backwards-eternal, and never came into being. So the Kalam folks are still stuck with U1.)

U3 has the interesting property that it doesn’t have a common clock, even to the limited relativistic extent that U1 and U2 have common clocks. Since U3 contains disjoint sections that have no causal connections to each other, it’s not really meaningful to speak of the state of U3 at “a” time, so referring to it beginning to exist (i.e. at “a” time) turns out not to really mean anything. I think that’s neat. :)

If we’re willing to stipulate (1) and (2) as long as “the universe” means only U1, the conclusion isn’t very powerful; we find out only that this particular batch of matter / energy that existed shortly after the Big Bang (or equivalent) must have been caused by something. And fine, maybe it was, but if it was that something was just some earlier and likely quite ordinary piece of U2. Calling that “God” just because it happens to be so long ago that we can’t theorize about it very well seems very far removed from what “God” is usually supposed to mean.

I’ve read various things on the Kalam argument, including the Craig piece linked above, and the counterarguments both don’t seem to actually understand physics and cosmology very well, and are mostly of the “proof by incredulity” variety; Craig writes, for instance,

To claim that something can come into being from nothing is worse than magic. When a magician pulls a rabbit out of a hat, at least you’ve got the magician, not to mention the hat! But if you deny premise (1′), you’ve got to think that the whole universe just appeared at some point in the past for no reason whatsoever. But nobody sincerely believes that things, say, a horse or an Eskimo village, can just pop into being without a cause.

— William Lane Craig

“Worse than magic” is hardly a logical argument, it’s just ridicule. And to state as a raw fact that no one seriously believes the argument one is attacking is, again, content-free. (The bit about Eskimo villages is a silly evasion; what may have come from nothing is for instance an unimaginably hot and dense ball of energy, not a horse. But even for a horse, expressing incredulity that one might appear spontaneously is not a logical argument; more work is required!)

This reminds me of the rather popular fundamentalist Christian statement that everyone knows deep down that God exists, and atheists are simply in denial. This is of course false and silly.

This also reminds me, now that I think of it, of an excellent lecture that I saw the other week, “God is not a Good Theory“. Among other things, the speaker here makes a similar move to my positing a universe that simply springs into being and seeing no contradiction in it; he describes various simple universes and shows that they can be explained perfectly well with no reference to any external God. “All I need to do is invent a universe that God does not play a role in” (a bit before the 10 minute mark). He also talks about the issue of causes with respect to the universe, and briefly mentions the Kalam. Definitely worth a listen.

On the Kalam in general, then, I find it extremely non-compelling. It doesn’t even have a sort of verbal paradox in it to have fun with, the way the Ontological argument does; it’s just weak. So I do wonder why it’s so popular. Thoughts in the comments are most welcome.

2022/04/02

Why the Ontological Argument doesn’t work

Back in the Rocket Car posting, we (following ol’ Gaunilo) showed, via a kind of reductio ad absurdum, that the Ontological Argument for the Existence of God doesn’t work (unless I have a really cool rocket car in my basement, which does not appear to be true).

Reductio arguments of this kind can be a little unsatisfying, because they just show that a thing is false, by showing that it being true would imply other things being true that we aren’t prepared to say are true. But they don’t tell us how the thing is false; in this case, the lack of a Z2500 Rocket Car in my basement doesn’t tell us how the argument fails, only that it fails.

But the other day, somewhere, I saw hints of an old refutation of the Ontological Argument that showed where it went wrong. I only glimpsed a few words of it, while looking for something else, and then forgot where or what it was, but a while later my brain said, “Hey look, I bet this is what that argument was saying!”, so here is that subconscious reconstruction. If anyone knows who made this argument, or an argument like it, anciently, do let me know!

Conversationally, the Ontological Argument goes something like:

A: Let’s define ‘God’ as that entity which has all perfections.

B: Okay.

A: Now, existence is a perfection, therefore since God has all perfections, God has existence, ergo God exists.

B: Wow!

The present argument against the argument changes the conversation, by having B point out problems in the underlying frame:

A: Let’s define ‘God’ as that entity which has all perfections.

B: We should be careful here, since there might not be any such entity. Let’s say instead that ‘God’ is defined as that entity which, if it exists, has all perfections.

A: Why do we have to do that? I can define ‘Humpty’ as a square circle, and that definition holds even though there are no square circles.

B: Not really. If we define Humpty simply as a square circle, then if someone says “there are no square circles,” we can reply “sure there are; there’s Humpty!”, and that’s wrong. It’s better to say that, strictly speaking, Humpty is a thing that, if it exists, is both a square and a circle. If it doesn’t exist, then of course it’s neither a square nor a circle, so we can hardly define it that way.

A: Hm, Oh. Well, if we define ‘God’ as something which … I guess … has all perfections if it exists, and then note that existence is a perfection —

B: We can conclude that God exists, if it exists! Much like everything else, really. :)

A: Wait, no…

The underlying observation here is that, strictly speaking, when we define or imagine something, we are defining or imagining the properties that that thing would have if it existed. If it doesn’t exist, of course, it has no properties at all. So when we imagine a seven-storey duck, we are imagining what one would be like if it existed. We aren’t imagining what it’s really like, because it doesn’t really exist at all, so it isn’t like anything; it isn’t a duck, doesn’t have seven storeys, and so on.

Therefore when we define God as having all perfections, we are actually saying that for any property which is a perfection, God would have that property if God exists.

And then the conclusion of the Ontological Argument will be just that God exists, if God exists; and that isn’t very interesting.
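Schematically: writing g for the candidate entity and E for the existence predicate, the careful definition and its instantiation look like this (a sketch, glossing over the usual worries about treating existence as a predicate at all):

```latex
% Careful definition: for every perfection P, if g exists then g has P.
\forall P \in \mathrm{Perfections} :\; E(g) \rightarrow P(g)

% Stipulate that existence is a perfection:
E \in \mathrm{Perfections}

% Instantiating P := E yields only the tautology:
E(g) \rightarrow E(g)
```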

This isn’t an utterly formal (dis)proof, but I find it attractive.

2022/03/02

Wandering Dazed Through Everything

First of all, I’m sick. Three COVID tests over three days are all negative, so probably not COVID, but still. I’d rather not be sick. It started over the weekendish, and is gradually getting better.

Other than being sick, and therefore sleeping a lot, I’ve been doing not much more than generating more and more and more and more images on ol’ NightCafe. They have cleverly rate-limited the three-credit bonus for twittering (or, as it turns out, Instagramming) creations to one per hour, so I no longer have an infinite number of credits, but they are well worth a dime or two each.

Oh, I also reviewed another book. It was… well, you can read the review. :)

Otherwise I have been generating lots and lots and lots and lots more images, and wondering at them. I feel like I want a huge coffee-table book of them to page through, or a vast gallery of them arranged by a thoughtful curator. And on the other hand I also feel that I’m plateauing slightly in my fascination, in a way, and that they haven’t been … surprising me as much lately. We’ll see how that goes!

There are lots of gorgeous complex maximalist images from it in the Twitter (and my own Yeni Cavan is quite maximalist for all of that), but what I’ve been most struck by lately are the small and simpler things, in the spirit of Pencil on White the other day. So here are some of them, pretty much randomly. Some of them result from prompts that are only about style, not content, so the AI is free to use whatever content it thinks is likely. Some are from vaguely suggestive prompts, some abstract, some in French. :) I’ll see if I can get WordPress to lay them out more interestingly than one per row…

I observe that (1) this WordPress “gallery” control is kind of awkward and non-optimal, and (2) this particular “Illustration, pencil on paper” prompt tends to produce odd African faces sometimes; I wonder what that tells us about the AI and the training set.

Part of the reason, I think, that I want to wander among these gazing for many hours, is the feeling that there must be a whole story, probably an interesting one, or several interesting ones, behind each of the images. If that turns out not to be true, or one comes not to believe that it’s true, it might significantly reduce the fascination. Or can one just gaze and make up one’s own back stories for each and every image?

Those four are “Monochrome print of…” but you’ll have to click through to see the individual titles; the captions on the WordPress gallery control were overlaying too much of the images.

See that rather creepy result from “Colored pencil on paper” up there? Well, that’s the least creepy result I’ve gotten from that prompt all by itself. I don’t know what that means at all. Is there a whole bunch of creepy colored pencil on paper body-horror stuff in the training set? Or is it some strange local maximum that happened to form in the neural net? Mysteries!

The captions interfere with the images there a bit, at least in this view, but YOLO, eh? I feel really torn by these pictures, between being fascinated by the thought of the artist looking out over the valley from their shack on a cloudy afternoon, and then feeling betrayed because there was no artist, and then feeling that they come from an amalgamation of all the artists who created the AI’s training set in all their separate times and places, and finally that they are as fascinating as the accidental (or not!) patterns in the water threading between the rocks and barnacles as the tide comes and goes.

To finish up for tonight, we just show off that it knows some French, as generative AIs trained on as much as possible of what was lying around tend to be casually multi-lingual by accident.

(I don’t know why it’s made that last image so gigantic; apologies if it does that for you also and is disturbing.)

For some other time, I’ve also generated some sets like images from Leonard Cohen lyrics (there’s a crack in everything, that’s how the light gets in), from the World Tree (all sepia), book covers (did I already post some of those?), the wonders of Xyrlmn, cute Xenobots, and some other things. I feel like I should post all of them! And also that they can as easily be allowed to slip away relatively unrecorded.

In the meantime, we wander between the pictures, turn the pages, stroll the galleries, and let the patterns touch our minds.

2021/10/23

Can even an omniscient, omnipotent God have certain knowledge?

Or even an omniscient, omnipotent, omnibenevolent one, if that isn’t a contradiction (it seems to me to be, but I’m not certain).

First, we should establish what we mean by “certainty” or “certain knowledge”. An initial (but wrong) attempt would be that a subject S knows a proposition P “with certainty”, or “certainly”, or “has certain knowledge of P”, if S knows that P, and P cannot be false, in the sense that it is necessarily true, or true in all possible worlds, or in general that □P holds in some relatively ordinary modal logic.

This is wrong, because it implies that all knowledge of necessary truths is certain knowledge. But that’s not the case; for instance I know that Khinchin’s theorem (the one with Khinchin’s constant in it) K, “for almost all real numbers x, the coefficients of the continued fraction expansion of x have a finite geometric mean that is independent of the value of x”, is true. But my knowledge is not at all certain; my only evidence for it being true is basically the Wikipedia page and a tweet, and that is far from sufficient for certainty.

One could describe all sorts of situations in which the theorem might be false and still be in Wikipedia and on Twitter: the known proofs of the theorem might have a subtle flaw, the whole thing could be an internet prank, etc. Now if the theorem is actually true, it’s necessarily true, so the worlds described by those situations aren’t possible worlds (in the rather strong sense of mathematical or logical possibility), but they are “for all that I know possible”, and that’s the relevant property here.
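As an aside, the theorem is easy to poke at numerically. Here’s a minimal sketch (mine, not from any of those sources) that extracts partial quotients by exact integer arithmetic from the binary rational that Python’s float actually stores for π; the early coefficients agree with π’s true expansion, and their geometric mean already lands loosely in the vicinity of Khinchin’s constant K ≈ 2.685:

```python
from fractions import Fraction
import math

def continued_fraction(x, n):
    """First n continued-fraction coefficients of the rational x, by exact integer arithmetic."""
    coeffs = []
    p, q = x.numerator, x.denominator
    while q and len(coeffs) < n:
        a, r = divmod(p, q)
        coeffs.append(a)
        p, q = q, r
    return coeffs

# The float math.pi is exactly some binary rational; its early partial
# quotients match pi's true expansion [3; 7, 15, 1, 292, 1, 1, ...].
coeffs = continued_fraction(Fraction(math.pi), 13)
tail = coeffs[1:]  # Khinchin's statement concerns a_1, a_2, ..., not the integer part
gm = math.exp(sum(math.log(a) for a in tail) / len(tail))
print(coeffs[:5], round(gm, 3))  # a dozen terms give only a rough estimate of K
```

With only a dozen terms the estimate is rough (and π isn’t even proven to be a “typical” real in Khinchin’s sense), but it makes the statement concrete.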

In my undergraduate thesis I called this superset of possible worlds “conceivable worlds”, and that seems like a good-enough term for this little essay. The basic idea is that S knows P (for reasons R) iff S believes P, and P is true in all reasonably nearby conceivable worlds in which S believes P for R. (“Reasonably nearby” isn’t directly relevant here yet, but you get the idea.)
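As a rough schematic (again in my own notation, nothing standard), with B for belief and C_near for the reasonably nearby conceivable worlds:

```latex
K_S(P, R) \;\iff\; B_S(P, R) \,\wedge\,
  \forall w \in \mathcal{C}_{\mathrm{near}} \,\bigl( B_S^{w}(P, R) \rightarrow P^{w} \bigr)
% B_S^w(P, R): in world w, S believes P for reasons R;  P^w: P holds in w.
```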

Note that here I’m saying only that I can’t be certain that K is true. I can definitely know that it’s true, and I claim that I do know that; it’s an assumption in this discussion that knowledge doesn’t require certainty.

So, if it’s not enough for certainty that S knows P, and that P is true in all possible worlds, what does give certainty? Is it that S knows P, and P is true in all conceivable worlds? That seems plausible. Perhaps equivalently, the criterion could be that S knows P, and that S is not required, as a responsible epistemic agent, to entertain any suggestion that P might be false. S would be reasonable, to put it another way, to risk absolutely anything at all on the bet that P is true.

To say that we are never certain of anything (and I don’t think we are) is to say that it’s never responsible to completely ignore any possibility that any given belief is false. We can certainly ignore contrary evidence in some circumstances; if I get spam about exciting new evidence that the Earth is flat, I can go phht and ignore it, because there are so many false claims like that around, and comparatively little rides on the issue. But there are other circumstances where this would be irresponsible; if some being asks me to bet a dollar against the lives of every person in Maine, that the Earth is not flat, it would be irresponsible of me to accept. That is, perhaps equivalently, my Bayesian prior on the Earth not being flat is not quite 100%; if it were, it would be irrational not to take that bet and win the dollar.

For simpler necessary truths, like “two plus two is four”, it seems that one might claim certainty. But would it be responsible to bet a dollar against 1.3 million human lives, on the assumption that one is not somehow mistaken, or has been hypnotized into using the wrong names for numbers, or something so peculiar that one hasn’t thought of it? I think it’s pretty clear that the answer is No; weird stuff happens all the time, and staking lives on there being no sufficiently weird stuff in this case isn’t warranted. So, basically, we can never be certain.
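The bet arithmetic is worth making explicit. A small sketch, with entirely made-up numbers (the dollar value per life is just a placeholder to put both sides in the same units): even a one-in-a-billion prior on “something weird is going on” swamps a one-dollar upside.

```python
# Hypothetical numbers only: this is about the shape of the arithmetic,
# not about the actual value of a life or the actual prior.
p_weird = 1e-9                 # assumed prior that "2 + 2 = 4" is somehow wrong for me
gain = 1.0                     # win a dollar if nothing weird is going on
loss = 1_300_000 * 10_000_000  # 1.3 million lives at a placeholder $10M each

expected_value = (1 - p_weird) * gain - p_weird * loss
print(expected_value)  # deeply negative: taking the bet is irrational
```

Only with the prior at exactly zero does the expected value stay positive, which is just the “not quite 100%” point restated.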

(Another possible reading of “S knows P with certainty” or “S is certain that P” would be something like “S knows that P for reasons R, and in every possible / conceivable world in which S knows P for R, R is true, and furthermore S knows those preceding things”. I think it’s relatively clear that the “something weird might still be going on” argument applies in this case as well, and since something really weird might be going on for all we know, and mere knowledge is not certainty, we also never have certainty under this modified definition.)

Having more or less established what certainty is, and that we ordinary mortals don’t have it, we can now ask whether an omniscient, omnipotent (and optionally omnibenevolent) being G can be certain of anything. At first blush it seems that the answer must be Yes, because being omnipotent G can do anything, and “being certain” is a thing. But this is also like “creating a boulder so heavy that G cannot lift it”, so we ought to think a bit more about how that would work.

G is omniscient, which we can safely take to mean that for all P, G believes P if and only if P is true. I think we can also safely grant that for all P, G knows P if and only if P is true. That is, for all and only true propositions P, G believes P, and in every reasonably nearby conceivable world in which G believes P, P is true. (Note that I’m mostly thinking these thoughts as I go along, and have just noticed that I have nothing in particular to say about the reasons R for which G believes P. We’ll see if we need to think about that as we proceed.)

Since G knows P, and G is omniscient, G also knows that G knows P, and so on. But can G be certain of one or more propositions P? Is God’s Bayesian prior 100% for all, or any, true propositions?

Just what is G’s evidence for any given P? How does God’s knowledge work? (Ah, apparently R is coming up almost immediately; that’s good!) G knows “I am omniscient,” so that’s a start. It seems that, given that, G could go from “I believe P” to “P” more or less directly. That feels rather like cheating, but let’s let G have it for the moment.

If G’s evidence for P is typically “I believe P” and “I am omniscient”, can G get certainty from that? We are often somewhat willing to grant “incorrigibility” to beliefs about one’s own beliefs, and while normally I’d say that only gets one to the level of knowledge, let’s stipulate for the moment that G can be certain “I believe P” for any P for which that’s true.

But what about “I am omniscient”? How does G know that? What is G’s evidence? Can they truly be certain of it?

There is a set of possible beings Q, each of which believes “I am omniscient”, but is mistaken about that. Some of them are quasi-omniscient, and know everything except for one tiny unimportant detail D of their universe, of which they are unaware, and except for the fact that they are not omniscient, due to not knowing D. They believe two false propositions: not-D, and “I am omniscient”. They also believe an infinite number of other false propositions, including those of the form “not-D and 3 > 2” and so on, and perhaps some that are more interesting (depending among other things on just what D is).

Other members of Q are simply deluded, and believe “I am omniscient” even though their knowledge of their universe is in fact extremely spotty, even less than the average human’s, and they have just acquired the belief “I am omniscient” through trauma or inattention or that sort of thing.

Now, how does G know, how is G certain, that G is not in the set Q? This seems like a hard question! For any reasoning that might lead G to believe that G is not in Q, it seems that we can imagine a member of Q who could reason in the same way, at least in broad outline. G can do various experiments to establish that a vast number of G’s beliefs are in fact true, but so can a member of Q. The quasi-omniscient in Q can do experiments not involving D, and the deluded in Q can do experiments but then be mistaken about or misinterpret the results, or unconsciously choose experiments which involve only members of their comparatively small set of true beliefs. G can follow various chains of logic that imply that G is not in Q, but so can many members of Q. The latter chains of logic are invalid, but how can G be certain that the former aren’t?

One conventional sort of being G is one that not only knows everything there is to know about the universe, but also created it in the first place, and in some cases resides outside of it, sub specie aeternitatis, observing it infallibly from the outside. Such a G has a comparatively easy time being omniscient, but can even this G be certain that they are? Again, we can look at various members of Q who also believe that they created the universe, exist outside of it able to see and comprehend it all at once, etc. They have, at least in broad terms, the same evidence for “I am omniscient” that G does, but they are mistaken. The quasi-omniscient created D by accident, perhaps, or created it and then forgot about it, and when looking in from the outside fail to notice it because it is behind a bush or equivalent. The deluded in Q are simply mistaken in more ordinary ways, stuck in a particular sort of psychedelic state, and so on. Those in Q who are in-between, are mistaken for in-between sorts of reasons.

There may be many ways that G can determine “I am not in Q” with relatively high reliability. One can imagine various checks that G can do to confirm “I am not like that” for various subsets of Q, and we can easily grant I think that G can come to know “I am not in Q” and “I am omniscient.” But the question remains of how G can become certain of either of those. It seems that any method will fall victim to some sort of circularity; even if G attempts to become certain of a P by exercising omnipotence, we can imagine a member of Q (or an enhanced Q-prime) who believes “I am omnipotent and have just exercised my omnipotence to become certain of P”, but who is mistaken about that. And then we can ask how G will determine “I am not like that” in a way that confers not merely belief or knowledge, but certainty.

We can imagine a skeptic’s conversation with G going from a discussion of G’s omniscience, to questions about the certainty of G’s knowledge of G’s omniscience, to questions about how G knows that G is not in some particular subset of Q, and ultimately to how G knows that G is not in some subset of Q that the discussants simply haven’t thought of, and to how G can be certain of that. It would seem that every “since I am omniscient, I know that there is no part of Q that we haven’t covered” can be called out as circular, and that “I know that I have certain knowledge of P” can always be countered by observing that knowledge does not imply certainty.

Another consideration (pointed out by a colleague who asked if I wasn’t anthropomorphizing God) is that I am assuming here that G forms beliefs and qualifies for certainty in roughly the same way(s) that we humans do. But shouldn’t we expect G’s ways of knowing and being certain to be very different from our mortal ways? Why should these human-based considerations apply to G at all? The response, again, is that members of Q, and in particular deluded human members of Q, could make exactly that same claim: that they come to know things with certainty through special means that aren’t accessible or understandable to mere humans, and to which none of this logic applies. And, again, if a deluded human member of Q could make this claim without in fact having certainty of anything, then G’s making this claim has no useful traction either, unless G can separately demonstrate not being in Q, which is exactly the point at issue.

To put it roughly and briefly, perhaps, the question comes down to how even G can be certain that G is not just a deeply deluded mortal human, who (in addition to lots of ignorance and false beliefs) is very good at self-deception. Even if G truly is omniscient, can G be certain of that? If so, how? Is it perhaps fundamental to the nature of knowledge and evidence, that no knowledge is ever certain knowledge? Is certainty, more or less by definition, a property that no knowledge can ever in fact have?

And is there anything more to this question than the perhaps rather casual and vaguely-defined (and/or perhaps very important) idea that however confident some being might be in their evidence for a belief, there might always be something weird going on?

That was fun!