Archive for ‘culture’

2023/06/03

ChatGPT and “Azathoth and the Dharma”

Well, heck.

So, I was reading Ligotti’s “The Shadow at the Bottom of the World” and thinking that it was pretty good, and that he tends to picture the ultimate squirming chaos at the center of reality as sometimes attractive (if perversely) as well as finally horrifying, in a way that I don’t remember Lovecraft doing, at least not as sympathetically.

At some points, for instance in “The Tsalal” (whose name is explicitly a reference to Poe), when a character points out with horror that appearances are just temporary masks on an indescribable underlying reality, it sounds (without the horror part) quite a bit like various aspects of the Dharma (it does, right?). This seemed like something that someone might write a weblog entry or something about!

And then, because my mind has been infected (in a mildly Cthulhu-esque way perhaps) by the idea of LLMs writing text, I thought I would propose the notion to ChatGPT or Bard or something, and have it write words! It would either do a good job, which would be interesting, or an amusingly bad job, which would be amusing.

I thought the “amusingly bad job” outcome was more likely, but, well, heck.

I used ChatGPT 4.0 rather than 3.5 or Bard, just because it’s gotten the most hype, and also because since I’m paying money to use it I thought it might be better, so if it failed amusingly it might be more meaningful. Here’s the interaction (and I do realize that my “always changing” is problematic as a characterization of the Dharma, but then so is everything else):

Me: For an essay entitled “Azathoth and the Dharma”, I’d like to discuss how the idea that appearances are only temporary masks over a deeper truth that is indescribable and always changing, features both in the kind of intellectual horror written by Lovecraft and especially Ligotti, and in Buddhist beliefs about truth and the Dharma, and that the main difference is one of emotional overtones with which it is presented. Lord Dunsany might also be relevant. Please draft the beginning of this essay for me.

ChatGPT 4.0:

Title: “Azathoth and the Dharma: Transient Masks of Deeper Truth in Lovecraft, Ligotti, and Buddhist Philosophy”

In the chaotic cosmos of H.P. Lovecraft and Thomas Ligotti, we find ourselves confronted with a universe filled with inscrutable forces and ancient entities, their true natures hidden beneath uncanny, horrifying masks. The universe is not simply indifferent to humanity; it is utterly alien, impenetrable to human understanding, and constantly in flux. These themes of grotesque transformation and unknowable horror are at the heart of their works. And yet, when viewed from another angle, these concepts can be seen as echoing the fundamental principles of Buddhist philosophy, particularly the teachings of the Dharma. They, too, postulate a reality that is continuously changing and ultimately indescribable.

The Dharma, the cosmic law and moral order that Buddhists seek to understand and follow, hinges on several key principles, one of which is Anicca, or impermanence. Anicca posits that all conditioned phenomena are in a state of constant change. Whether we consider the universe at large or the very essence of our being, everything is transitory, and appearances are fleeting. Beneath the apparent solidity of phenomena lies an incomprehensible flux of arising and ceasing events, an underlying truth that cannot be adequately captured in words.

This notion aligns strikingly with the chilling cosmologies of Lovecraft and Ligotti. Azathoth, Lovecraft’s nuclear chaos at the center of the cosmos, is the ultimate symbol of this philosophy. It is the blind idiot god, an embodiment of constant change and the source of all existence. It is unknowable, its essence cloaked in cosmic chaos and frenetic piping, its reality beyond human comprehension.

Similarly, in the phantasmagoric narratives of Ligotti, we find the protagonists often straddling the line between grim reality and unfathomable horror, their worldviews shattering in the face of a truth too alien and amorphous for the human mind to comprehend. It’s not unlike the Buddhist concept of Sunyata, or emptiness, which refers to the lack of inherent or eternal essence in all phenomena.

Yet, as we draw these parallels, we are confronted by the starkly contrasting emotional overtones that characterize these disparate systems of thought. In Buddhism, the realization of Anicca and Sunyata is liberating, offering a path towards enlightenment and the cessation of suffering. But in the macabre tapestries woven by Lovecraft and Ligotti, the confrontation with these realities is inherently terrifying, inducing existential dread and insanity.

The eerie fables of Lord Dunsany, who greatly influenced Lovecraft, might offer an intermediary perspective. He presents us with bizarre, otherworldly landscapes and creatures, yet there’s often a whimsical, dreamlike quality to his work. The absurd and inexplicable are not strictly horrifying, but engender a complex mixture of awe, fascination, and unease.

This divergence in emotional reaction prompts us to consider the nature of human perception and our responses to the deep, enigmatic truths of existence. Is the Lovecraftian terror at the chaotic heart of reality as valid a response as the Buddhist acceptance and quest for liberation? Or does it simply reflect our inherent fear of the unknown, our resistance to the dissolution of ego, the prospect of losing what we perceive as our solid, unchanging selves? In our exploration of ‘Azathoth and the Dharma’, we delve into these questions, probing the intersection of cosmic horror and Buddhist philosophy, and the deeply human, emotionally-charged responses they elicit.

Transcript ends

I’m not sure how to feel about this. It did quite a good job, at least on a few casual readings. The references to anicca and sunyata are plausible, no worse than I would expect from any reasonably educated person daring to write on those concepts, and they weren’t specified in the prompt; the LLM was clued enough to bring them in itself. The paragraph on Dunsany is a little thin, but again no worse than I would expect from a decent human writer; I did ask for a Dunsany link, and this is the obvious one.

It clearly knows (“knows”) who Lovecraft, Ligotti, and Dunsany are, and what in general the Dharma is. I would definitely edit the phrase “the fundamental principles of Buddhist philosophy, particularly the teachings of the Dharma”, but yet again this is a matter of subtle tone, not by any means an egregious error.

It did nothing that (from a human writer!) would have been surprising or even especially non-obvious (although the thought that the feeling of horror may “simply reflect our inherent fear of the unknown, our resistance to the dissolution of ego, the prospect of losing what we perceive as our solid, unchanging selves” is perhaps beyond trivial extrapolation of the assignment). But its unsurprising and mostly obvious expansion of my original idea into seven nice paragraphs of solid prose (with a fitting subtitle for good measure) contains, I have to admit, no amusing errors that I can find!

I even asked it for some references and further reading, and it gave a decent dozen suggestions for both Lovecraftian horror and Buddhism, all of which actually exist; my only criticism being that

“Noel, Daniel C. The Soul of Shamanism: Western Fantasies, Imaginal Realities. Continuum, 1997. This book explores the intersection of spirituality and fantasy literature, which could be useful to draw broader conclusions about your topic.”

is a bit of a stretch. On the other side, when I asked it if it knew of any existing works making the same sort of link that I propose here, it did not make one up! It said encouragingly that it didn’t know of any, and finished off with this:

“Overall, your proposed exploration of the specific connection between Lovecraftian horror and Buddhist philosophy seems to be quite unique, and it’s an interesting new angle to consider in the study of both these areas.”

Well… thanks, ChatGPT, that’s… nice. I guess?

I feel like this all certainly means something. Or perhaps it’s just the thin, monotonous piping of an unseen flute!

2023/05/25

Straw God, Steel God

A brief prologue: what does it really mean to believe a thing? What kinds of things can be believed? Is there a difference between belief and assent, or between belief and (say) tending to express a claim? If belief is a matter of an attitude toward a possible world, then it’s (perhaps) impossible to believe a self-contradictory proposition, including a necessarily false one, including the negations of mathematical truths. But it doesn’t really feel right to say that it’s impossible to believe that P=NP (assuming it’s false). Certainly one can assent to or tend to express the claim that P=NP, in either case. Is it possible to believe complete nonsense; can I believe “Nobbo dif daf phlongo” (without it having any further meaning)? Arguably I can assent to it or tend to claim it (or at least tend to assert it, or utter it with an assertive attitude?), but can I believe it? All interesting questions.

So anyway, I continue to binge various YouTube atheists responding to Christian fundamentalist apologists, and I’m really not sure why. Biblical inerrancy and literalism are just silly; there are about five arguments (Pascal’s wager, objective morality, “creation implies a creator”, supposedly-fulfilled prophecies, fine-tuning, and I’m sure I’ve forgotten a few), and there are very strong counterarguments to all of them. The material that’s supposedly the infallible word of an omnipotent, omniscient, and omnibenevolent being is very obviously a bunch of stories and poems and things put together over centuries by a bunch of different people and groups and political processes, that exist in various variants and contradict each other in various ways just as one would expect of such a compendium. If it weren’t that lots of people believe, or purport to believe (see prologue), this stuff, and use it to justify political and social positions that are to various degrees odious, no one would bother responding to it, any more than anyone responds to the claim that Charlotte’s Web is literally true (some pig!). It would be a straw man, if only there weren’t lots of people asserting it.

(And yeah, it’s not just Christianity; many religions have holy books where it’s obviously silly to take the material as literally true. But Christianity is the one that causes the most trouble around me.)

Varieties of the Religious Experience: A variety of vaguely religious-looking symbols and objects hanging from an old wall and sitting on a shelf. Center is a largish cross with a circle (perhaps a Celtic cross), hanging next to a chain with a loop of vine or wool at the end, which is next to a round pendant with perhaps a clock with a painted background. Also on the wall is a piece of wood with a cross whose four arms are each split into two on the upper half and a fork with two antler-like tines. On the shelf are a squat red candle, and a tangle of cloth and metal objects.

On the other hand, I believe that most Christians, or at least a significant fraction, aren’t fundamentalist Bible literalists, and therefore believe, well, something else. And it occurred to me to wonder just what they do believe, that’s reasonable or arguable or not obviously false or void, but that still makes them Christians in some reasonable sense of the term, and whether I could write down some of the possibilities. To “steelman” Christianity, if you will, in various ways, and see what comes out.

One possibility, which I’ll call, I dunno, secular Christianity, is something like, say, the Jefferson Bible: the idea that Jesus of Nazareth was a wise person who taught worthwhile stuff, and we should pay attention to it, and read (at least the non-supernatural) writings by and about him, and take him seriously, and perhaps even venerate him for some meaning of that word (but probably as a person rather than something else), maybe have holidays about his birth and death and all that, and so on.

This is fine; a discussion with this sort of Christian would be calm and rational and likely interesting; there is no fundamental disagreement. On the other hand many Christians wouldn’t count it as Christianity, since that is so often tied into questions like “Have you accepted Jesus as your personal savior?” and “Do you believe that he died and then rose again on the third day?”, which this notion of secular Christianity in its most obvious interpretation doesn’t particularly encompass.

I think it’s more common for a Christian to hold something I’ll call mysterian Christianity, which emphasizes faith: believing in God and Jesus and the Bible without knowing, and without worrying too much about, exactly what it all literally means. In my model of the world, your typical believer who goes to church every Sunday but doesn’t post apologetics videos on YouTube is like this. They believe that the Bible is true, literally, or “seriously” as the JWs put it, or whatever, and if the things that it says don’t seem to make rational sense right now, that’s fine: we humans are very limited in our understanding, and it will be clarified in time, and in the meantime we should pray and do what seems right and/or what Father O’Malley advises. For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known.

A discussion with this sort of Christian would be relatively short and simple, I think, as long as one didn’t get obnoxious by asking them how they’re justified in believing something they don’t even understand. The answer is presumably something like “I’ve been touched by the Holy Spirit, and you haven’t, and that’s it; maybe some day you will be too!”, and there’s not much further that one can go politely from there.

What else do we have? I’d like a rational (re)construction of some sort of mystic Christianity, in which one accepts Jesus as one’s personal savior, and he died and rose again on the third day to atone for our sins and all, but also doesn’t make silly truth-claims and doesn’t lead to any odious political or social attitudes. This would have to involve lots of symbols and metaphors, or in the more exalted sense lots of reality-as-metaphor, where we say that Jesus of Nazareth was (and is) both one with God and also the Son of God, and probably that each of us is also one with God (and with Jesus) and also the Son of God, and that each of us (throughout life, throughout the day, in every moment) dies and then rises again, that by our own (perhaps constant) deaths we pay (God and ourselves) atonement for our sins, where simply being an ensouled embodied entity inherently constitutes sin, except when we take salvation into ourselves through Christ’s, which is to say our own, sacrifice of himself, which is to say ourselves, and we must accept Christ (which is to say, the cosmos) as our personal (because everything is personal) savior.

This is to take the Bible as perhaps inerrant and literal, but noting that to the same extent we take Charlotte’s Web (and especially The Wind in the Willows) and the very sound of the wind in the trees as inerrant and literal; which is something that we can do only (as far as I know) by acknowledging that all language, even all experience as refracted through language, is metaphorical, so that the most literal possible interpretation of any text or experience is still an interpretation rife with symbol and metaphor. Which is what mysticism is, in some (metaphorical) sense.

Is this all interesting and/or accurate? I’m not certain. I, obviously, don’t believe fundamentalist Christianity, because it’s facially silly, and I keep listening to people refuting its claims (probably just because it validates my existing attitude or whatever). Secular Christianity (as used above) is fine, and probably held by various Unitarians and so on, who may or may not (perhaps more often not, in these troubled times) call themselves Christians. Mysterian Christianity isn’t especially attractive to me, as I haven’t been appropriately touched by the Holy Spirit, and frankly I don’t think anyone else has either in the relevant sense; arguing about it is, I think, very much like arguing with a solipsist, and it’s too bad that people sometimes get odious behaviors from it. Mystic Christianity is fine, and is so to speak just a different way of saying something that I do believe (to the extent that “believe” applies here) but that I put in Buddhist or Ariadnite terms rather than Christian ones, because icky connotations or whatever.

Seems reasonable. :D

2023/05/11

OpenAI: Where by “can” we mean “can’t”

Disclaimer: I work for Google, arguably a competitor of OpenAI; but these opinions are solely my own, and I don’t work in the AI area at Google or anything.

And I mean, oh come on!

So much AI “research” in these hype-heavy times is all bunk, and I suppose one shouldn’t expect OpenAI (“Open” heh heh) to be any different. But this pattern of:

  1. Use an AI to try to do some interesting-sounding thing,
  2. Evaluate how well it did by waving your hands around, or just by eyeballing it,
  3. Declare victory,
  4. Publish an “AI can do thing!!!!” paper that will get lots of media attention.

is just sooooo tiring. (See for instance the YouTuber in that prior post who showed their system producing a non-working tic-tac-toe game and said “well, that worked!”.)

The one I’m facepalming about here was brought to my attention by my friend Steve, and omg: “Language models can explain neurons in language models”. They did sort of the obvious thing to try to get GPT-4 to make predictions about how a few selected “neurons” in GPT-2 behave for a few inputs. The key line for me in the paper is:

“Although the vast majority of our explanations score poorly, we believe we can now use ML techniques to further improve our ability to produce explanations.” 

— OpenAI

They say this because (they have been drinking too much of the Kool-Aid, and) they tried a few things to make the initial abysmal scores better, and those things made them slightly better, but still poor. They say in the (extremely brief) report that although it works badly now, it could be the case that doing it differently, or maybe doing more of it, might work better.

In any other field this would be laughed at (or politely desk-rejected with a “fantastic, please submit again once you find something that does actually work better”); but in the Wacky Wacky World of Large Language Models it goes on the website and gets cited in half a ton of headlines in the media.

And is it really honest to use “can” in a headline to mean “can, very very badly”? By that standard, I can predict the weather by flipping a coin. I didn’t say I could predict it accurately!
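To belabor my own analogy: here’s the coin-flip weather “predictor” as a toy Python sketch of my own (not anything from the paper); by the headline-writers’ standard, this program “can predict the weather”.

```python
import random

def predict_rain_tomorrow() -> bool:
    """'Predict' tomorrow's weather by flipping a coin."""
    return random.choice([True, False])

# Run the 'forecaster' for ten days; it can predict the weather!
# (I didn't say it could predict it accurately.)
random.seed(0)
predictions = [predict_rain_tomorrow() for _ in range(10)]
print(predictions)
```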

I suppose the LLM hype is better than the Crypto hype because fewer people are being bilked out of money (I guess?), but still…

2023/05/05

1960s Psychodrama of Eldritch Suspense

So Midjourney has Yet Another New Version of their main engine, called “5.1”, and it’s very much like v5, except that it has more of a “house style” like 4.0 did, which can be turned off via “--style raw”. (There’s also “--stylize”, and one might think that “--style raw” and say “--stylize 0” would do the same thing, but apparently not. It’s all magic.)

I have, as ever, been playing with it extensively (given that “relax mode” GPU time is free at my membership level, any time when the engine is not working for me is wasted, eh what?), and am now up somewhere over forty thousand images, most recently involving things like echidnas (apparently not “echidnae”; “is Sonic an Echidna?”) and many stills from non-existent movies. I will present here a number from a 1960s Psychodrama of Eldritch Suspense, because woot! (See also The Cult Hit of 1977.)

Black and white image of two worried-looking young people in 1960s clothing sitting in a room; the woman is wearing a hat with perhaps goggles or sunglasses on top of her head. In the background is a blurry figurine of something with tentacles.
Brad and Laura are concerned
Black and white image. A man in jacket and tie sits behind a desk, and a dark-haired woman sits in a chair in front of the desk. Both are looking into the camera. Behind them on the far wall are some rather disturbing vague shapes, perhaps of statuary?
Dr. Martin and Miss Carter are also concerned.
Black and white image. A dark-haired woman sits in an armchair at stage right. At stage left is a large sculpture of an oddly-proportioned and perhaps unclothed humanoid; the base of the sculpture seems to be uncarved stone or earth that spills out onto the floor. On a table at stage center is a small disembodied head, probably another sculpture.
Miss Carter appreciates Dr. Martin’s collection of exotic curios.
Black and white image. At stage left, background, a man in jacket and tie sits at a small table; on the wall above the table is a portrait of a rather sinisterly-scowling man. At stage center, foreground, a young woman with light hair looks downward and to our left, with a disturbed expression. At stage right, even more foreground, a man faces away from us and toward the woman; he is mostly in shadow.
Patrons in the village pub are concerned. And not only about the ugly picture.
Black and white image, 1960s home interior; a man and woman, stage left, appear somewhat concerned by the dark-haired woman, center-right, who stands beside and bears an eerie resemblance to a tall black figure with a black stony inhuman face, large white-rimmed eyes, and a flowing black cloak (or a sculpted version of one).
Mrs. Martin is perhaps too impressed by the obsidian statue.
Black and white image. A room with dirty-looking walls and debris on the ground. A man, stage left, looks at a small woman or girl at stage center. Behind her on the wall is an eerie oblong shape with fur or spines and perhaps a hint of a face and glowing eyes. A door in the wall is ajar and beyond it is blackness.
“Joannie, what –” “Please, Go, Through, The, Door, Doctor, Martin”
Black and white image. A room with dirty-looking walls and ceiling, and wetness and debris on the ground. A man, stage left, looks at a small woman or girl at stage center. At stage right high on the wall is an eerie bulbous shape with small bright eyes and several slimy tentacles. A door in the far wall is open.
“Praise, The, Tentacles” “Yes, The, Wonderful, Tentacles”
Dim muted colors. A young woman or child sits in a large brown chair, facing us with a dark expression from large dark eyes. On the green wall behind her are seven faces, or masks, or heads, some with long dangling hair. A bright white light at top center casts dark stark shadows.
“Come, Back, Soon”

(I like how exactly one of the eight images I made came out in color.)

2023/04/26

It’s All Bunk, Again

This is about the current AI hypestorm, and has intentionally a rather extreme title. :) For some values of “it”, it is not all bunk; there are lots of cool and funny and convenient things that AI in general, and that large auto-regressive language models specifically (“LLMs”), can do. Various tasks and jobs will become quicker and easier, and there will be some new fun things to do.

But when it comes to all the breathless claims that “ChatGPT / Bard / FooLLM will completely change the way that we X”, for virtually any value of X that isn’t “produce reams of plausible-looking text whose quality doesn’t matter much”, it’s all bunk. Something describable as AI may eventually do that (I’ll even say that it will eventually do that if everything else survives long enough), but the LLM technology that we have now, even if made better / faster / stronger, will not.

This is a prediction, and it could be wrong, but I’m willing to make the claim without much in the way of caveats. There have been various times in the past where I’ve muttered to myself that a thing was bunk, but not said it on the record so I couldn’t point back at it and say “ha, I toldja so!” once it turned out to be, indeed, bunk. So this time I’m putting it on the record.

It’s commonly observed that when reading a media account of some technical field that one knows something about, one often thinks “hey, that’s completely wrong!”. And that this suggests that the media is also completely wrong in technical fields that one doesn’t know enough about to notice the errors.

It seems likely that this applies to media stories like “ChatGPT / etc will completely change the way that we X” as much as to any other, so given that we know something about, say, software development, we could look at, say, “ChatGPT spells the end of coding as we know it” (also on Yahoo News).

And it’s bunk. The headline is sort of the usual cringey headline, in that it exaggerates the most extreme part of the article for the clickz, but the article does say inter alia, “For better or worse, the rise of AI effectively marks the end of coding as we know it,” which is close. “[T]he rise of AI” is considerably more general than “ChatGPT”, and “effectively” is a nice broad weasel-word, so the actual sentence is fuzzier and more defensible than the headline, but it’s still bunk. As is for instance the quote immediately preceding that statement, in which someone with a deep financial interest in the subject says “there’s no programmers in five years,” and various other breathless things referenced by the piece.

Bunk. Bilge. Bollocks.

A drawing in warm colors of a two-level bunk-bed in a pleasantly messy room with lots of shelves and things hanging on the walls and so on.
Not this bunk; this is a nice bunk.

The thing is that in software development, as I suspect in a whole lot of other domains, LLMs are in fact quite good at doing things that we’ve come to rely on as convenient proxies for important skills. What we ask software engineering job applicants to do in forty-five minutes to an hour isn’t really all that much like what they will be doing if we hire them, but it’s the best proxy we’ve been able to come up with that fits in that time-scale. In other fields, doing the questions on the bar exam isn’t much at all like what real lawyers do in practice, but it’s the best proxy that we as a society have been able to come up with.

Now we have a situation where there are pieces of software that can do a plausible job at various of these proxies (although even here some of the breathless reports are frankly bunk), but that absolutely cannot do the real jobs that we have gotten used to using them as proxies for. And this is driving people to all sorts of false conclusions about the real jobs.

In the software field specifically, what we ask candidates to do, and what various LLMs have shown various levels of skill at doing, is to take a description of a programming task and write code that does that, where the task is (ideally) one that they haven’t seen before, and also is simple enough to write code to accomplish within forty-five minutes to an hour.

Is this what actual professional (or even hobbyist) coders do? I think it’s safe to answer that with an unqualified No: this is not what human coders actually do. Once in awhile one might have a novel thing to do, and do it in forty-five minutes to an hour, but it doesn’t just fall from the sky into one’s lap; it comes up as part of some larger cohesive project that one is working on. Even if one is the most junior coder on a team, doing mostly what the more senior members ask you to do, that is essentially never “please spend an hour and write code to reverse a linked list for me”; that just isn’t how it works.
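For the record, the canonical proxy task I keep mentioning really is that small; a standard iterative linked-list reversal in Python (my own sketch, not any particular interview’s answer key) is maybe ten lines:

```python
class Node:
    """A node in a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse the list in place, returning the new head."""
    prev = None
    while head is not None:
        # Re-point this node backward, then advance.
        head.next, prev, head = prev, head, head.next
    return prev

# Build 1 -> 2 -> 3, reverse it, and read the values back out.
rev = reverse(Node(1, Node(2, Node(3))))
values = []
while rev:
    values.append(rev.value)
    rev = rev.next
print(values)  # [3, 2, 1]
```

Which is exactly the point: it fits in an interview hour (and in an LLM’s training data) precisely because it is nothing like a real working day.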

Actual working coders understand to a greater or lesser degree an overall thing that they are working on, what it does for a user, how it is supposed to work, how to debug it when it fails, at least the basic functional and nonfunctional requirements of the overall system as well as their part, the quirks it has, what libraries are available for them to call, what other members of the team are doing, and so on. And the overall thing isn’t a single function that reverses a linked list, or says whether one string is contained within another.

Let’s look at one of the motivating examples in our first example article. “Adam Hughes, a software developer,” it says, “… signed up for an account and asked ChatGPT to program a modified tic-tac-toe game, giving the game some weird rules so the bot couldn’t just copy code that another human had already written. Then he quizzed it with the kind of coding questions he asks candidates in job interviews….”, and voila, “Hughes found that ChatGPT came back with something he wasn’t prepared for: very good code.”

Unfortunately, this is the only place I can find this impressive feat mentioned. Adam Hughes’ own writeup of how “ChatGPT Will Replace Programmers Within 10 Years” doesn’t talk about this modified tic-tac-toe game at all, or the “coding questions” or the “very good code” referenced in the article. So I’m not sure what’s going on there.

The claim in Hughes’ article title is also bunk (which is to say, I disagree), while we’re on the subject. There is no reason to believe that any LLM will be able to do what’s listed there under “Phase 2” or later. Well, actually, the wording is odd: it says that the AI will “be able to make project-wide suggestions like: Rebuild this Angular 6 project in the latest version of React; Add 100% unit-test coverage to the project…”. I mean, sure, maybe the AI could suggest those things; but in order to predict that programmers are going to be replaced, the author presumably means that the AI will be able to do those things? A bit puzzling.

(Also puzzling is the title of that article; on the page itself the title is currently the nicely ambiguous “ChatGPT Will Replace Programmers Within 10 Years,” which is in some sense true if it somehow replaces exactly two (2) programmers by 2033. But the HTML of the page has a title-tag with the content “ChatGPT Will Replace All Programmers”, which is a much stronger claim about how many will be replaced (all of them!) but leaves out the timescale; heh. The actual text of the article predicts 95% job loss in 5-10 years, and 99% in “10+” years, so it’s sort of the most extreme combination of the two headlines (and it’s wrong).)

Hughes has been updating the beginning of that post with a list of things that are supposed to convince doubters that indeed ChatGPT Will Replace (All) Programmers Within 10 Years; the most recent is a video that, he says, shows “fully autonomous AI agents created python code. Literally replacing programmers. Not that smart, but it shows the how possible this is TODAY, well ahead of schedule.” (Bold in original.)

The video is, to be blunt, kinda silly. This guy has a system where two ChatGPTs talk to each other, and are in some way able to search the web and so on. He asks them to make code to display an input box, and they do that, at the level that one would have found with a web search for “input box python code example”. He asks them to make code to play tic-tac-toe (again, code that is all over the web), and they claim to do that, but it doesn’t seem to work (it displays the UI, but doesn’t make correct moves or reliably detect a win). Undeterred, he says “that worked”, and continues on (lols).

He asks them to “create a new strange simple game”, and they create a “guess the number in ten guesses” game with high/low feedback (not exactly “strange”, and again code that is all over the web), and it might work (aside from the apparently nonfunctional “difficulty” pulldown) but he doesn’t look inside or test it enough to be sure. And so on. And then for like the last 40% of the video he shows off “AutoGPT”, which appears just to fantasize to itself in text and non-functional pretend API calls about how it might create an AGI by linking together GPT instances in various vaguely-described ways, and then gets into a loop of repeating the same thing over and over.

What might Adam Hughes mean when he describes this as “fully autonomous” (given that it’s just doing exactly what it’s told) or as “Literally replacing programmers”? I’m not sure. Is there a programmer somewhere who is going to be replaced by a system that can’t write a tic-tac-toe game, or that can fantasize about creating AGI? I sort of doubt it.

One could cynically note at this point that the Business Insider / Yahoo News article has no doubt gotten lots of clicks and therefore ad sales, that the Hughes piece is a subscriber-only piece on Medium that ends “Stay tuned for my follow-up article about how to prepare for and thrive in this brave new world,” and that if you want to play with the system shown in the video you can “become a member of the channel” for like US$5/month and up. But that would be cynical. :)

There are dozens, maybe hundreds, of other examples we could look at, just in the coding area, let alone law, management, or all those other fields that AI is supposedly about to completely change (the last-linked article there is quite amusing, notably where it admits that it’s based on subjective occupation-impact estimates by people who know little or nothing about the occupations in question; i.e. bunk). But this is already long.

LLMs have shown an amazing ability to produce plausible-looking output in response to a huge variety of prompts. The output is even often (more or less by coincidence) correct and/or functional and/or pretty! That’s what it’s architected and designed to do, of course, but I think it’s fair to say that everyone’s been surprised by how well it does it.

Will that ability alone completely change the way that we do X, for any interesting value of X?

I’m putting myself on the record this time :) in saying that the answer is very much No.

Update: It’s been pointed out to me that from what I say here I do apparently believe that “ChatGPT / Bard / FooLLM will completely change the way that we do X” if X is one of those proxy tasks; and that’s a point! These things may significantly change the way that we do job interviews, or assign homework, or even give grades in university; but the main changes may be mostly along the lines of “like we do it now, only making sure people aren’t using LLMs to do the proxy task”, and that might not count as “completely changing”.

2023/04/11

Chav Life

“Chav” is a British-adjacent word, usually derogatory, for a cluster of concepts involving economically and socially disadvantaged classes, youth, sneering, hanging about in groups in public, and so on. It may be offensive in some sense; it’s not like we have a lot of non-derogatory and non-offensive words for young people in disadvantaged classes. I hope and trust I am not offending simply by using it, for instance, in a weblog post title.

Anyway! For no particular reason I gave Midjourney (version 5) a number of chav-related prompts, and here are some of the results. These are mostly photorealistic, as that’s what v5 often produces unless explicitly prompted otherwise (so this isn’t “AI Art” so much as “AI Weirdness”). And I’m putting them in the weblog here just for fun. (For some highlights of other stuff I’ve been making in MJ, albeit without prompts or much commentary, see me on Pixelfed.)

Here for instance are some Cute Chavs (i.e. the prompt was “cute chavs” with some flags and stuff):

a photo of about a dozen young people, casually dressed, standing on an outdoor brick surface. Three wear red "baseball caps".

Mildly cute, and certainly numerous! Note the three red caps. Note also that Midjourney v5 has really improved in the people department: some of the faces may dissolve into blurs, some of them disquieting, if you zoom up too much, but no one appears to have seven fingers on the same hand, or any obvious extra limbs. Which is impressive!

Additional cute chavs:

Two pale young women in matching light blue denim tops and blue necklaces, one with purple hair and one with orange hair, smile at us against a crowded street in the background. The orange-haired one, to our right, has a rather adorable overbite.

Here “chav” might be a mildly negative comment on their taste in accessories and hair dye; not sure. Awfully cute, though.

Additionally:

Three short-haired young men standing outdoors; a car park and grass and trees in the background. The tallest of the young men stands in the center, in a bright pink hoodie and large black sunglasses with dark pink lenses, with his arms around the shoulders of the other two, who are a bit shorter and wearing bright blue hoodies and bright pink sunglasses with dark pink frames.

These may be the “tough young men” sort of chavs, although the bright pink and blue hoodies and those sunglasses are perhaps atypical.

Also supposed cute chavs:

Two little boys with rose cheeks and dark wire-rimmed oval sunglasses, wearing identical yellow caps and raincoats, looking rather serious and cute.

Certainly cute, but those matching raincoats and GKSY VHIS caps look pretty upscale; it may have strayed a bit from chavery here, but, again, certainly cute. And perhaps truculent.

Further cute chavs, who have perhaps looted a cargo of loud plaids (but who all seem to have the right number of fingers and extremities, again!):

A group of several young men, perhaps pre-teen, striding down a shopping street, all dressed in loud red, or garish plaid, or (mostly) both.

There are various more cute chavs, but we’ll finish this section with this one:

Three serious young men with mid-to-light brown complexions and to one extent or another short black moustache and beard whiskers. The one in the center has brilliant red flyaway hair, red earrings, and wireframe sunglasses with red lenses, and a slash of black tattoo on one cheek. The one to our right has a similar black tattoo, and black hair shaved close to his head except for a flyaway topknot. The one to our left has at least one upswept eyebrow (perhaps a tattoo), and a pastel abstract headband that gathers his black and red hair upward into a sort of messy "mohawk". Behind them is what seems to be a river passing through a city.

I tend to think of your basic chav as pale, perhaps because people who use the word “chav” often have other words for people who aren’t pale. These three are certainly impressive in their own way. Judgment of cuteness left to the reader.

Heavens, there are so many pictures that I’d love to share with you here! And that’s just these chav-related ones. So many thousands of pictures! See “Bach faucet“, relatedly. But anyway! Now we have some “chav life”:

Five young men with short hair standing among some brick buildings, looking at us perhaps rather truculently.

No notes on that one. We got at least one other one like this except that they’re sitting on a stoop with equal truculence.

Additionally:

Four young men in casual athletic clothing standing among brick buildings. All are making odd faces of one kind or another, two are making finger gestures, two are wearing sunglasses, and at least two seem to be whooping.

Perhaps “CY le HAWE” is the name of their YouTube channel, where they break cinder blocks on each other. For instance. (Do read the alt text on these images, by the way, if convenient; I put a lot of work into those!)

Asking for an artistic rendering of happy chavs, we got inter alia:

a colored drawing of four young men in casual clothing standing around a car in a parking lot, among low brick buildings. Three wear blue caps, one of them reversed. They do not appear terribly happy.

They don’t appear particularly happy to me, I admit.

On the other hand:

A drawing of four very similar-looking men with slightly brown complexions and black hair and moustaches smile at something out of frame, leaning against an ambiguous something; possibly they are all standing on the same balcony, or truck bed.

the Esposito Quadruplets here seem quite pleased by something.

Now this one:

Drawing of a scene on a busy shopping street with brick houses with British chimneys. The most obvious foreground figures are four cheerful men in kilts (with sporran), one apparently carrying bagpipes.

doesn’t really say “chav” to me at all, due to the kilts and sporrans and so on; MJ may be improvising here. The people do look rather happy however.

In Ireland, chavs, or a group akin to chavs, are known as skangers (also possibly offensive). Prompted to picture skangers, Midjourney gave us:

Three ice hockey players facing away from us, toward the crowded bleachers. Each wears a red jersey with white lettering; from left to right: "SAKKER" with the letter S, "SARKES" with the number 7, and "SIAKERS" with the number 4. Sarkes seems to be wearing a cloth "baseball cap" rather than a hockey helmet.

The famous trio of Sakker, Sarkes, and Siakers. Sakker is notable for wearing the number “S”.

Next and relatedly:

Seven ice hockey players posing for a group picture on the ice in a variety of uniforms, the leftmost two being mascots in plush costumes (a turtle and a bear, maybe), and the center one having a mascot head (a cow perhaps?) and otherwise a normal uniform.

Not sure of the relationship between skangers and ice hockey, frankly, but there we are. Perhaps it thinks it’s a typo for “Rangers”, which is a hockey thing.

Third “skangers” image:

Four tall soda cans, each with a different abstractly-drawn monster or something in different bright colors, and all with the same logo, which seems to say something like "GAKKES". They sit on a matte black surface against a dark background.

Perhaps Gakkes(tm) soda is popular with skangers and/or chavs.

And finally in the skangers set:

A bright pink shop on an otherwise unremarkable street corner. There is a low black box in front of the shop, perhaps for stacking newspapers. The windows display numerous colorful boxes, and words on the pink and white striped awning might say something like SANCNR SKAGNVERS.

“Oi, I’m goin’ down Skarnvers fer some baccy, yew wan’ anythin’ pet?”

(That was probably offensive, too.)

And finally, just so as not to overload my readers with offensive weird stuff, here are a few where we tried to mix chav with its opposite: posh.

On a busy perhaps-English street, two people (and a dog) face us. The person on the left is dressed in what might be a parody of a tuxedo and top hat, with dark sunglasses; his mouth is wide open but narrowed. The person on the right is dressed more casually, including plaid pants, a jacket with sleeves that expose the forearms, and a small black hat; his mouth is open more roundly, and his arms crossed. (The dog, at the left, seems to be panting in a happy sort of way, with his or her tongue out.) Text at the bottom of the image says something like "on'v Chasd Heaish".

This is the one that most obviously did that, but these two are clearly taking the piss, as it were, and on’v Chasd Heaish on top of it (the guy on the right looks familiar somehow).

We also got a fair number like:

Two men look seriously into the camera, in front of a brick wall with vague graffiti. Both are wearing suits and hats, and have a certain amount of stubble and earrings, and threatening vibes.

which might be interpreted as a posh sort of chav, as well as say:

Two somehow nouveau-riche-looking people with a crowd in the background. To our left a man wearing a (faux?) fur coat, thick gold chains, dark sunglasses, and a patterned cap; to our right a woman with extremely pale blonde hair, dark sunglasses, a thin gold necklace, and pale lipstick.

similarly (Americans perhaps, haha!). Also some where it seems to have mostly ignored the instructions and just given us two ordinary people, as in:

Two women in white kitchen uniforms, one with a high chef's hat and a "Caliel" logo on her shirt. The one on our left has her arm around the other's shoulder (the one with the hat and logo), and they are smiling into the camera. There are shiny silver surfaces and piles of dishes in the background.

They’re just endearing! (Admittedly one may be missing a finger, but better that than two or three extra.) And not obviously chav or posh, so I’m not sure.

And to close, from the “a chav and posh person standing side by side” prompt:

Two people standing in a rich-looking room looking into the camera, both apparently holding golf clubs. On our left, a tall man in what might be a slightly garish sort of morning dress except that he isn't wearing pants; his patterned boxers and black socks are visible. On our right, with one hand up near the man's chest for no apparent reason, a perhaps chunky woman in an above-the-knee business dress, wearing just-below-the-knee black socks, and shoes perhaps with gaiters.

and, well, I just have no clue, really.

I hope that wasn’t too offensive! Fun, though. :)

Off to make more! With different prompts…

2023/03/26

Creativity, how does it work?

This is a random brainstorming post, I have no particular conclusions at the moment as I write the first sentence here, but I might develop something as we go along.

So far, I just have this “meme” that I made:

Critics: AI art tools can't create anything new, just copy and paste from existing art!

People using AI art tools:
Here there is what looks like a charcoal drawing of a maniacally-smiling woman with wild hair and an extra set of lower teeth, immersed to just below the shoulders in a whitecapped ocean. There is an odd sailing ship on the ocean in the background, and two more vessels (ships? dirigibles?) in the sky.

There are two obvious reactions to this. Someone who likes AI art tools might say “haha, yeah, this shows how creative and crazy this art can be!”. And someone who agrees with the critics might say “omg, totally, that’s obviously sooo derivative!”.

The first thing to wonder is whether there is a particular image, set of images, or artist out in the world somewhere of which this image is obviously derivative. Pointers in the comments are extremely welcome! :)

Google (reverse) image search doesn’t come up with anything especially obvious. There are some images (like, at the moment, this one) that involve somewhat stylized faces with prominent hair and ocean waves and one or more ships, but the arrangement and overall style and impact are, I think, significantly different. In the past when I asked a couple of people who were all “oh, yeah, I can usually identify the particular artist or artwork that one of these AI images was taken from”, to do that with one of my images, they suddenly became very quiet. ¯\_(ツ)_/¯

If there isn’t a specific image or small set of images or an artist that one can point to and say “see, this is where this came from!”, what does that mean? I’m not an art critic (hahaha), but I think it would be pretty uncontroversial that, if a person had created that image above there entirely with real-live paper and charcoal, or even with a tablet and Photoshop, we’d say that it displayed sort of average human creativity; nothing incredible, but easily beyond (for instance) the “modicum of creativity” required by US copyright case law, enough that it could be entered in an art competition, and so on.

Once we know that it was created by a person using an AI art tool (Midjourney, in this case, with a particular prompt and engine settings and so on), is it reasonable to say something different? Does it still display creativity, or not? Does it do it differently, or in the same way? What is creativity? How is it displayed? In what does it inhere? Is it for that matter the kind of thing that inheres in things? Are there facts of the matter about it, or is it a purely squishy and subjective thing?

There are a bunch of theories that one might put together:

  • One might hold that it’s just as creative, and in the same way, as the counterfactual no-AI version, and that the creativity comes from the same place: the human who made it. One version of this narrative would say that the difference between the no-AI and the with-AI version, creativity-wise, is not different in kind from a person making it with paper and charcoal and a person making it with tablet and Photoshop, or a board and little mosaic tiles. It might be objected that the activity of choosing engine parameters and prompt strings and then culling the results is just obviously (or by dint of some specific plausible theory) different from the activities in the other cases, since those involve something like choosing a particular color for particular parts of the image, whereas the AI-tool case doesn’t.
  • One might hold that it’s just as creative (or at least that it is creative, if perhaps to a different degree), and the creativity still comes from the human, although it’s implemented (delivered, displayed, exercised, used, manifest) in a different way. One might say in this theory that the difference between the real paper and charcoal version and the Midjourney version is like the difference between a realistic drawing of a scene and a photograph of the same scene. Both born of human creativity, but through very different means, and perhaps to different degrees. And then we can get into lots of questions about the creative element(s) in various kinds of photography!
  • The two takes above can, I think, go either way on the question of whether creativity is inherent in the end result, the image, in a sort of death-of-the-author way, or whether it’s in the overall process. At the other end of some spectrum, one could say that the image made with the AI tool does not in fact display (involve, require, contain) any creativity; that our initial impression that it did just turns out to have been mistaken, and now that we know how it came to exist, we know that it didn’t involve creativity. This sort of claim pretty much rules out the position that creativity is inherent in the finished product, unless one is willing to take the (facially untenable, I think) position that this image could not in principle have been created by a human without using an AI, and that inversely no purely human-created image could in principle have been created with an AI tool.
  • That is, if you think there is no creativity in this image because it was made with an AI tool, you pretty much have to take the position that it’s not possible to tell how much creativity there is in an artwork (or a putative artwork) just by looking at it; that the creativity is not displayed by / doesn’t inhere in solely the image or object. Which seems sensible in at least one obvious way: I might think that something involved lots of creativity, until I see that it is an exact copy of something that existed before, just with a little line drawn on it. More nuancedly, we’d say that you can’t tell how much new creativity is in a thing, until you see how it was made (because it might be, say, a copy).
  • So now we have a potential claim that images made with AI tools don’t have any (or much) new creativity, because they are just processed / stolen / noisily compressed and expanded / copy and pasted, versions of the material that they were trained on. Sure there might be a little creativity in choosing the prompt or whatever, but that’s not much. The program itself can’t add any creativity because “they can’t, they just can’t” (a phrase I’ve heard from a couple of people talking on videos lately, but of course can’t find at the moment).
  • Humans also process things that they’ve seen / experienced when producing new things. I’d say we can’t really require creativity to mean “those aspects of a work that spring purely from the artist’s soul, and that would still have been there had the artist been a brain in a vat with no experience of the world or other artworks, only its own thoughts”, because then there wouldn’t be any creativity anywhere, and when common words turn out to have no referent in a theory, it generally (if not always) means that that theory is wrong.
  • Or maybe we do want to require that “sprung from the soul alone” thing, because we want to set a very high bar for True Creativity, and we are confident that there will be at least a few glorious shining examples if only we knew the truths of people’s souls! In which case we can say that a marvelous few humans have displayed true creativity through the ages, and no computer ever has (having no soul and all), and neither have the vast majority of people we loosely call “artists”. This is a theory, but not a popular one, and it means that most art displays no creativity, which again feels sort of like a reductio. It’s certainly not compatible with what the Copyright Office means by “creativity”.
  • The question of how much creativity is in the selection of prompts and engine settings and images to keep is one we can put aside (in the drawer next to the question of the creativity in a cellphone snapshot, as alluded to above). And it seems we are left with having a theory about how much creativity comes from the AI tool itself, and how much of that is what we’ve called new creativity. Possible answers include “none, there’s lots of new creativity, but it’s all from the human user”, “none, there’s no new creativity in this at all, it’s all stolen / copied from the creativity in the training set”, “about the same amount that comes from the human, they are in some sense equals in the new creation”, and “the human just types a few words, and then the software adds lots of new creativity to it, so it’s the AI”.
  • This leaves us mostly with the question of “under what circumstances is it true that a person, or a piece of software, adds new creativity to a work, when that work is to a degree influenced by other prior works that that person, or piece of software, has been exposed to?”. Or other words to that general effect. One set of answers will not especially care whether it’s a person or a piece of software; the other set (“they just can’t”) will either think that it’s important which it is, or have a set of criteria which (they will claim) only people and not software can for whatever reason satisfy.

And I’ll leave it there for now, having perhaps not been especially productive :) but having written a bunch of words and focused in (if in fact it’s a focusing) on the question of what it means to add new creativity when making something, even though the entity doing the creating is influenced by other works that existed before. People talk a lot about things like reflecting one’s lived experience, having a thought that the work will (may? is intended to?) cause the viewer to also have (some version of?), and like that. None of those seem likely to be any kind of complete explanation to me at the moment.

In legal news, of course, the US Copyright Office has issued a Copyright Registration Guidance on “Works Containing Material Generated by Artificial Intelligence”, which I gather (I have not had the mental energy to think about this very hard) just repeats the statements in the Zarya (I always want to write Zendaya) memo we briefly discussed the other day, using various phrases that are putatively synonymous but as far as I can tell are subtly different and introduce all sorts of new uncertainty to the subject.

I’m going to continue not thinking about that very hard for now, because that part of my brain is still tired.

Also! You can get onto the waiting list for the Google LLM thing (and I hear varying stories about how quickly one gets access; apparently it is sometimes quite quick). In case you’re, like, collecting those, or otherwise interested.

2023/03/16

Stills from the Cult Hit of 1977!

Lost for decades, now rediscovered and presented here for the first time!

A handsome young man with a 70's haircut. Behind him, blurred by depth of field, are more young 70's style people and some trees and grass.
Mike and the Gang
A man in an odd leather helmet working on some odd devices (perhaps small bombs), in a room with a harsh light and a couple of mysterious racks.
The Mysterious Mr. G in his Secret Lab
Four 70s style people, two men in suits and two young blonde women. The man and woman in the foreground are talking on bakelite telephones, sitting at a table crowded with 70s looking technology (perhaps modems).
The legal team in action
Three women in white nun's habits sitting around a table in a room with leaded-glass windows, doing something enigmatic. Behind them on the wall is a portrait of a man with a large sword or something.
What is happening at St. Agnes?
Five 70's style people standing outdoors. At our left a man with a typical moustache and "soul patch". With him four young women with long straight hair.
The Outsiders
Three people, a man and two women, in white kitchen attire (the women with hats, all three with shirts and probably aprons) sit around a silver cylindrical machine of some kind. The women are holding orange objects
In the kitchen at St. Agnes
Four 70s style people, a man in an orange jumpsuit in the back, and three women in white gradually closer to us. The women have long straight blonde hair, and white clothing. Each of the women has a white cloth cap, or part of one, on her head.
Under Control
Close-up of a man's face. He has a 70's mustache, and 70's sunglasses. There are other people barely visible behind him (his face takes up almost the entire image).
The Discovery!

Courtesy, of course, of the early v5 version of Midjourney.

2023/03/10

Chomsky declares: LLMs icky!

Friend Steve wrote us today about this New York Times opinion piece, “Noam Chomsky: The False Promise of ChatGPT” (this link may be free for everyone for some time or something). Despite the title, it’s by Chomsky, Roberts, and Watumull.

Steve commented inter alia on the authors’ apparent claim that ChatGPT can say that the apple you’re holding will fall if you open your hand, but unlike humans it can’t explain the fact. The trouble with the argument is that, as anyone who’s actually used ChatGPT can tell you, it will happily explain the fact, go into the history of the notion of gravity, talk about other things people have thought about it over time, and explain various situations in which the apple wouldn’t fall, given the slightest provocation.

My reply, lightly edited:

I am pretty unimpressed with the article as a piece of science or philosophy; fine as a nice polemic by a greybeard I suppose. :)

I’m amused at how LLMs are “lumbering” and “gorged”, while human minds are “elegant” and even “efficient”. I doubt there is any useful sense in which these adjectives are anything more than bigger words for “icky” and “nice” in this context.

Chomsky brings in the innateness of language, because of course he does, but I’m not at all clear how it’s relevant. Even if humans do have innate language scaffolding, and LLMs don’t have the same kind, it’s far too early to say that they don’t have any, and even if they didn’t, so what? Does the ability to learn a wider variety of languages than humans can, mean that LLMs don’t really understand, or can’t really think, or are harmful or dangerous? None of that makes sense to me; it seems just an even longer way of saying that they’re icky.

He (well, they, there being multiple non-Chomsky authors) claims that LLMs don’t have the ability to say “what is not the case and what could and could not be the case.” And I can’t imagine what they think they mean by that. As with the flaw you point out in the apple example, it’s simply wrong, and suggests that they haven’t really used an LLM much. ChatGPT (let alone a less heavily defanged system) will expound at length about what is not the case and what could and could not be the case, given any halfway decent prompt to do so. They may intend something deeper here than they actually say, but I don’t know what it could be (beyond that they can’t do it non-ickily).

“Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round.” Um, what? There are certainly humans who believe each of these things. They can’t just be saying that humans can’t conjecture that the earth is flat “rationally” because so what; that’s exactly as true of an LLM. If they mean that the same LLM can make one of those claims one minute and the other the next, whereas humans can’t hold two contradictory beliefs at the same time, I’d like to introduce them to some humans. :)

Similarly for whatever it is they are trying to say about moral reasoning. The suggestion seems to be that, simultaneously, ChatGPT is icky because it cannot stay within moral boundaries, and also icky because it stays stubbornly within anodyne moral boundaries. As pretty much throughout the piece, stuff that humans do all the time is cited as reasons ChatGPT isn’t as good as humans.

Tay became toxic by listening to people, therefore it’s not like people? It had to be heavily censored to keep it from talking trash, therefore it’s not like people? Um?

It might be interesting to try to tease a set of actual significant truth-claims out of this article, and see which ones are arguably true. But I’m not sure that’s the point really.

As far as I can tell, this piece is just a longer and nicely phrased version of “Boo, LLMs! Icky!”

But maybe that’s just me. :)

2023/03/04

AI is terrible at almost everything [a rant]

I am annoyed with many “AI” things this morning, so this is a rant with no pretense of objectivity or overall wisdom.

AI call directors are terrible. Especially Intuit’s.

Here I will just reprint a rant that I posted to both qoto and Twitter; I was so annoyed!

Wow, #Intuit #TurboTax is just awful.

I mean, I do know that they’ve been lobbying against tax simplification in the US for years, because it would cut into their business, and that’s pretty evil.

But their customer service is apparently also terrible!

I need to file a particular New York State #tax form this year, and apparently they just don’t support it, and aren’t planning to.

Which seems to mean that I would have to manually enter the data, which seems to mean that I couldn’t then e-file or get their correctness guarantee. And if one uses software to prepare the return, one is required by law to e-file!

So it seems like I just can’t use their software at all. Which is maybe good!

When I tried to call them to ask if they support the form, their robot call director asked me what I wanted, mis-heard me, and insisted on knowing whether I wanted the irrelevant information it had found sent by text or email; “no” was not a valid choice.

Then it insisted on knowing my last name, but failed to understand me when I tried to pronounce or spell it (and I have a pretty ordinary voice, and not all that unusual a name!) and eventually it said goodbye and HUNG UP ON ME when it couldn’t.

I had to call back and pretend that its incorrect guess at my last name was correct, before it would pass me to a representative. And the first thing the human rep (who was very nice!) asked me was for my first and last name, so the whole robot torture conversation was useless as well as annoying.

I think they’re just trying to get people to give up on calling them.

Which in my case probably means switching to #freetaxusa which is cheaper anyway, and does support the forms that I need.

Sheesh!

I hate this Roomba (at least while it’s running).

Leaving aside the fact that it’s a mobile Internet-attached camera that could almost certainly be accessed by random hax0rs in Kekistan, and may already be sending all sorts of images of private life to iRobot / Amazon / the NSA, it’s just annoying.

It has an app of course, but for some unaccountable reason the app is rather terrible. For a long time it offered not much more than the little “CLEAN” button on the robot does; no way to tell it to avoid certain areas or do a certain room right now, let alone a Direct Drive mode where you could just pilot it around vacuuming (which I would have thought would have been the Minimum Viable Product or whatever the kids are calling it these days), no insights into what goes on in the little beggar’s mind that makes it buzz around in the front hallway for half of its runtime and pay only cursory attention to any actual room. Lately it’s been enhanced somewhat, so you can see a version of its internal map, tell it to do a certain area, and a few other things.

But it still went under my chair this morning while I was having my coffee, and got into some kind of infinite loop at the edge of the nest of power and data lines off the side of the chair where it doesn’t really need to go at all. It sat there trying to charge forward and running into something with a loud whir, turning slightly right, whirring again, turning back slightly left, whirring again, repeat forever and ever, with loud irritating whirs every time. I gave it a slight nudge to try to get it away, and it faffed about a little and then charged back into the same corner again, whirring as loud as ever.

Why isn’t there a “don’t try the same thing more than a dozen times” feature in the thing? Maybe because it’s some black-box AI that can’t be explicitly programmed not to do certain things, but just does whatever comes out of the mysterious tangle of weights and things. And maybe because they couldn’t be bothered to add that because it hasn’t made it into a sprint yet. Who knows!
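For what it’s worth, the feature I’m wishing for doesn’t even need fancy AI; a dozen lines of ordinary code would do it. Here’s a minimal sketch (everything here, including the `BumpGuard` name, is hypothetical; this is obviously not iRobot’s actual firmware API):

```python
class BumpGuard:
    """Give up on a spot after bumping into it too many times in a row."""

    def __init__(self, max_retries=12, radius=0.1):
        self.max_retries = max_retries
        self.radius = radius  # bumps closer than this count as "the same spot"
        self.last_bump = None
        self.count = 0

    def should_give_up(self, x, y):
        """Record a bump at (x, y); return True once we've bumped
        near the same spot max_retries times."""
        if self.last_bump is not None:
            dx = x - self.last_bump[0]
            dy = y - self.last_bump[1]
            if dx * dx + dy * dy <= self.radius * self.radius:
                self.count += 1
            else:
                self.count = 1  # a new spot; start counting over
        else:
            self.count = 1
        self.last_bump = (x, y)
        return self.count >= self.max_retries
```

The navigation loop would just call `should_give_up` on every bump event and, when it returns True, mark the spot as a no-go zone and wander off. Which is roughly what I end up doing manually with my foot.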

But it’s really annoying. It’s chased me out of my chair (again) and I’m sitting in the living room where it isn’t currently whirring in annoying ways.

Fekking thing.

Look how fast it can be wrong!

All of the excitement about LLMs also has lots and lots of really annoying properties. Having suffered from them for awhile now, I think the basic problem is that LLMs are good at certain small and easily-testable unimportant things that, until now, were good indicators of being good at other things, some of them larger and more important.

In particular, we’re used to only people being good at giving natural-sounding answers to questions in human language, and when someone is especially good at that (“eloquent” or “intelligent” or “legit-sounding”), we are used to that same person being good at saying true things, or being able to write a couple of pages of consistent argument, or caring about the truth of what they are saying.

Large Language Models (like GPT-3 and ChatGPT and Bing’s AI and Google’s Bard and on and on and on) are good at the small things, but bad at the large things. They can give natural-sounding replies to all sorts of questions / statements in human languages, but they have no notion whatever of truth or fact, and their input windows are so small that they can’t generate a significant amount of output without losing track of the plot entirely and either going off-topic or contradicting themselves or forgetting their initial instructions and trying to persuade someone to leave their spouse.

So when we see people putting up some trivial “app” that feeds user-input and a paragraph of additional prompt into some random LLM, and billing the result as “AI Medical Evaluation!”, it’s terrifying. (I think that particular one has been taken down since I expressed worries about it on qoto, but there’s still a zillion like say this “Jesus” one, and no doubt scads of other extremely dangerous medical / psychological / legal ones being created every day by people who don’t understand malpractice or law or liability or LLMs.)

And when someone posts to reddit saying “After poring over garbage Google results and documentation that didn’t answer my question for literally an hour, Bing checked the SOURCE CODE and gave me an instant answer. Remind me, why would I ever want to use Google again?”, the obvious reply is that the “instant answer” was in fact wrong, as someone with a name similar to mine pointed out in the reddit thread. (The person says that the answer did eventually lead them to a right answer, but I wonder if it was significantly faster than the “literally an hour” spent in good old search; it certainly wasn’t “instant”.)

And lest anyone think that I have a Conflict of Interest acting here (I do work for Google, but not in the AI or Search departments), I don’t think that Google’s LLMs are any better except in the extremely significant property that they haven’t been released in a form integrated into a general-public web-search tool, in a way that leads people to think their extremely confident answers are in fact reliable.

One of the things I find most irritating in the world is people who are extremely confident and also wrong. So now that we have an entire category of software that is essentially all that way, it’s (again) extremely annoying.

(LLMs are wonderful, as I mentioned the other day, as a sort of crazy friend who you can bounce ideas off of and get bizarre prose to help break a writer’s block, and amuse yourself with fake Elizabethan love poetry or whatever. But in contexts that are framed as likely to produce true statements, they are entirely out of their milieu, and should really just stop. I look forward to the technological breakthroughs that will allow these systems to have usually-true output, but I haven’t seen that yet!)

So anyway! I feel somewhat better now. :) End-rant, comme on dit.

2023/02/23

The US Copyright Office takes a position!

On art made with AI tools, that is. Reuters story here, actual letter from the Office lawyer here.

I haven’t read the whole letter in detail yet (it’s long!) but I’ve looked it over and have Initial Thoughts:

Large furry purple aliens are upset about the confusing Copyright Office memo. Some of their quaint buildings are in the background.
  • I don’t think there’s a fact-of-the-matter here, about what is copyrightable when. There are legal theories that make more and less sense, that are more and less consistent with other established theories, and so on. But these are not theories that try to model something in the real world, like the Theory of Relativity; they are more theories in the sense of Set Theory. So the Office can’t really be right or wrong here overall, but they can have made a more or less sensible decision.
  • The overall finding of the memo is that Kristina Kashtanova still has a copyright on Zarya of the Dawn, but only on the text, and “the selection, coordination, and arrangement of the Work’s written and visual elements”, not on the visual elements themselves (i.e. the images made with Midjourney), because those images don’t involve “sufficient creative input or intervention from a human author.”
  • This seems wrong to me; as other places in the document point out, the case law says that “only a modicum of creativity is necessary”, and there is certainly a modicum of creativity in prompt design and engine usage.
  • The argument here seems to be, not that there isn’t enough creativity in the prompts and flags and so on, but that the connection between the artist’s input and the image output isn’t strong enough. The memo says things like ‘Rather than a tool that Ms. Kashtanova controlled and guided to reach her desired image, Midjourney generates images in an unpredictable way. Accordingly, Midjourney users are not the “authors” for copyright purposes of the images the technology generates.’
    • But where is the existing doctrine that says anything about predictability? Jackson Pollock might like a word, and the creator of any other roughly uncontrolled or algorithmic or found-object work. The theory here seems to be that Midjourney prompts are just suggestions or ideas, and those can’t be copyrighted. Does that mean that since Pollock just had the idea of splashing paint onto canvas, and the unpredictable physics of the paint cans and the air produced the actual work, that “Autumn Rhythm” can’t be copyrighted? Or are they going to hold that there is a legal significance to the fact that the detailed movements of his arm muscles were involved? That seems dicey.
    • For the Office to claim that the prompts and other input did contain at least a modicum of creativity (which seems undeniable) but that that input wasn’t strongly enough connected to the output, seems to be inventing a new legal test, which it’s not at all clear to me that the Office can do on its own hook, can it?
    • This memo may be specifically designed to be contested, so that the question can go to a court that can do that kind of thing.
  • The memo may have interesting consequences for Thaler, in particular the cases in which Thaler attempted to claim copyright under work-for-hire theory, with his software as the creator. The memo explicitly makes the comparison with human work-for-hire, saying that if someone had given the same instructions to a human artist that are contained in a Midjourney prompt, and the human artist had made an image, then the person giving the instructions would not have been the creator unless work-for-hire applies (the human carrying out the instructions would have been the creator-in-fact), and that therefore they aren’t in the Midjourney case either.
    • To be consistent with both the memo and Thaler, the theory seems like it has to be that Midjourney is the creator-in-fact, and therefore the human isn’t (and can’t get a direct copyright as the creator), but also that software can’t be hired in the work-for-hire sense and therefore the human can’t get the copyright that way either. Which seems odd! It seems to acknowledge that the software is the creator-in-fact, but then deny both making the software the creator-in-law (because not human) and making the user the creator-in-law via work-for-hire (because I’m-not-sure).
  • Some other countries are different and imho somewhat more sensible about this, as in the UK’s Copyright, Designs, and Patents Act, of which Section 178 explicitly talks about “computer-generated” works, meaning “that the work is generated by computer in circumstances such that there is no human author of the work”. That’s still imho a little sketchy (I continue to think that Kashtanova is in fact the human author of the images in Zarya), but at least it then provides that “In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”
    • There’s still some room for doubt there, as for instance whether it’s Kashtanova or the Midjourney people or some combination who relevantly undertook the arrangements, but at least we aren’t in the position of saying that the author is a being that is not legally allowed to either be a creator, or confer creatorship to a human via work-for-hire.
  • In the case of the many, many currently-registered copyrights on images made with AI tools (including mine), it seems that if the copyright office is notified, or notices, that fact, they are likely to cancel / withdraw the registration. The theory will be that the registration materials were incorrect when they named the creator as the author of the work, without in any way informing the Copyright Office that an AI tool was used. I could, for instance, send the Copyright Office a note saying “oh by the way I hear that you want to know when AI tools are used, and in my case Midjourney was”, and then they might cancel my registration on their (imho mistaken) theory that I’m not really the author.
    • Since I believe their theory is mistaken, I’m not currently planning to do that. :)
    • If they discover it on their own hook and send me a letter telling me they’re withdrawing the registration, I will do whatever easy thing one can do to contest that, but I’m not going to like hire a lawyer or anything; life’s too short.
    • I’m very curious to see what others do; I would expect that Midjourney itself (assuming it’s big enough to have lawyers) will have their lawyers working on a response to this memo.
    • My copyrights on the Klara trilogy and Ice Dreams (casually announced here) are secure, as to the text and the image selection and arrangement and all, just not to the images per se. Which is fine. And I haven’t registered those anyway. :)
  • I should go back and add a note to all of my existing copyright weblog entries, pointing at this one; or, more sustainably, pointing at the entire “copyright” tag on the weblog here. Then I won’t have to keep updating it.
  • I’m quite happy I decided not to worry too much about this whole thing, and just make pretty pictures (see pretty picture of concerned purple aliens above).

Updates: as this is a developing topic (as opposed to my usual topics which are Timeless Truths of the Universe), you may want to check the copyright tag on the weblog here for later updates, if this post is more than a week or month old.

2023/02/20

The parts of that poem about the roads and the wood that I could remember

[I went for a meditation and walk in our rather large local park today, which was quite lovely. As I walked along that poem about the roads and the wood and diverging and sighing and stuff came to mind, and it was fun to see how much of it I could actually remember verbatim.

So here I am writing down the reconstruction, including these notes in the places where I couldn’t remember, mostly so I can be amused by reading this again some month or year (ages and ages hence, hee hee), but maybe some of you other intelligences would be similarly amused.]

[Poem title, probably involving Roads and maybe also Woods]

[by Robert Frost, unless I’m embarrassingly wrong]

One road in a wood. The wood is more brownish than yellow.

Two roads diverged in a yellow wood,
[And knowing that?] I could not travel both
And be one traveler, long I stood,
[Thinking about which way to go.]

[Eventually I decided to go the way that looked less worn,]
[Although in fact]
those passing there,
Had worn them really about the same.

[I left the other one] for another day,
[Although] knowing how way leads on to way,
[I’d probably never be back in the relevant sense,
Can’t go down to the same river twice, eh?]

I shall be telling this with a sigh,
Somewhere ages and ages hence.
Two roads diverged in a wood, and I,
I took the one less traveled by,
And that has made all the difference.

[Which sounds very hipster at first reading, “oh, you wouldn’t know my road, it’s very less travailed by”, but then there’s the fact that he said that they were really about the same, so maybe “all the difference” really isn’t that much difference after all. Or maybe he’s sighing because, even in retrospect, you can’t tell whether some choice was the right one, because you don’t know what would have happened if you’d chosen differently. And even more, you can’t tell whether some choice you make right now is the right one, because you don’t know what’s down either road. And also that we sigh when we think about that, even though since it’s a fundamental property of our existence, you’d think we might be reconciled to it, or even happy about it. But we aren’t always, so we sigh.

And that’s why we have poetry!]

2023/01/31

Fiction from the AIs

This is just a sample of some random stuff from Midjourney and ChatGPT that I think is kind of fun, and also illustrates a bit of what they can and can’t do at the moment. ChatGPT, in particular, can produce prose that is amazing given that it’s coming from a computer program, but is clearly not going to put any good fiction authors out of business any time soon. It is also able to talk cogently about why it does some of what it does. (Or, to be somewhat less inaccurate, to produce text that plausibly describes what a human might have been thinking had they produced a prior piece of text.)

I used Midjourney to create this image:

The head and shoulders of patriarch in a noir steampunk / cyberpunk setting. He has silver hair and is clean-shaven. His eyes glow red from the electronic implants in his head. He wears an elegant leather jacket. There is a complex enigmatic machine to his left, and a glowing device or bulb over his right shoulder.

and then had this conversation with ChatGPT; I’ll put my words in italics, ChatGPT’s in normal text, and some after-the-fact notes from me in bracketed italics.

Imagine a character in a graphic novel. He is a patriarch in a noir steampunk / cyberpunk city. He has silver hair and is clean-shaven. His eyes glow red from the electronic implants in his head. He wears an elegant leather jacket. Please make up a backstory for him, including his name, character, and current activities.

The character’s name is Marcus “Silver” Cross. He was once a wealthy and powerful businessman in the city, using his influence and advanced technology to control various criminal enterprises. However, after a bitter falling out with his former partners, Marcus was left with nothing but his cunning and his implants.

Now, he spends his days as a freelance mercenary, using his knowledge of the city and his implant-enhanced abilities to take on high-risk jobs for whoever can pay. Despite his rough exterior, Marcus has a code of honor and only takes jobs that align with his personal beliefs. He also has a soft spot for the downtrodden and will occasionally use his resources to help those in need.

Marcus’s current activities include tracking down a stolen piece of technology that could potentially give him the upper hand in reclaiming his former power and taking back control of the city’s underworld.

[Okay, quite plausible; so plausible in fact as to be cliché. I wonder if choosing “Cross” and thereby allowing the nickname “Silver Cross” was a coincidence; it never uses that below.]

Describe the basic plot of a story centered around him.

The story revolves around Marcus “Silver” Cross as he sets out on a mission to retrieve a stolen piece of technology that could change the balance of power in the city’s criminal underworld. Along the way, Marcus must navigate dangerous alliances and confront old enemies, as well as confront his own demons and question the morals behind his actions.

As the stakes get higher and the dangers increase, Marcus finds himself forming an unlikely partnership with a young hacker who is searching for the same technology for their own reasons. Together, they delve deeper into the city’s criminal underworld, facing numerous challenges and discovering secrets they never could have imagined.

As they get closer to their goal, Marcus must decide whether to use the technology for his own gain or to destroy it, realizing that the consequences of his choice will affect not only himself, but the entire city. The story ends with Marcus coming to terms with his past and the choices he has made, setting him on a new path towards redemption and a better future for himself and the city.

[Again, very cogent, very unsurprising. “Unlikely partnership with a young hacker,” zzzzz.]

Please write a possible first paragraph for this story, in a straightforward noir SF style

Marcus “Silver” Cross strode through the dimly lit alleyways of the city, the red glow of his electronic implants illuminating the darkness. He was a man with a mission, a mercenary for hire with a reputation for getting the job done. And tonight, his target was a stolen piece of technology that could change the balance of power in the criminal underworld, and Marcus was the only one who knew where to find it. But as he approached his destination, he couldn’t shake the feeling that he was being watched, that someone or something was lurking in the shadows, waiting for him to make his move.

[More of the same, as well as wildly violating “show, don’t tell”. “Make his move” is completely inapt, as he’s not making any kind of “move” here.]

Very nice! Please continue.

Marcus’s instincts were sharp, honed from years of surviving in the cutthroat world of the city’s criminal underworld. He scanned the area, searching for any signs of danger, but all was quiet. He reached for the doorknob of the abandoned warehouse, taking a deep breath before entering. Inside, he found himself in a large room filled with rows of discarded machinery, and in the center stood a lone figure, obscured by shadows.

“You’re late, Cross,” the figure said, its voice low and menacing.

“I don’t respond well to threats,” Marcus replied, his hand hovering over his holster.

The figure stepped forward, revealing themselves to be a young hacker, known only by their street name, “The Ghost.” They were the one who had hired Marcus to retrieve the stolen technology.

“I’m not here to threaten you, Silver,” The Ghost said, holding up their hands. “I need your help.”

Marcus studied the hacker, intrigued by their tenacity and fearlessness. They were young, but they had already made a name for themselves in the city’s underground circles, and Marcus couldn’t help but admire their spirit.

“What’s in it for me?” he asked, intrigued by their proposal.

“The technology we’re after is worth a fortune,” The Ghost replied. “But more than that, it could change the entire city, for better or for worse.”

Marcus considered their words, realizing that this was his chance to take back what was rightfully his, to reclaim his former power and control over the city’s criminal underworld. He nodded, agreeing to work with The Ghost, setting out on a journey that would test his skills, his loyalties, and his very soul.

[This is grammatical and idiomatic, but terrible narratively. “You’re late” wasn’t a “threat”, “The Ghost” is an awful name, and if this hacker has already hired him to “retrieve the stolen technology”, this entire scene and dialog make no sense at all. I was intrigued by the pronouns, though!]

I see you are using they/them pronouns for the young hacker; why did you make that choice?

I used they/them pronouns for the young hacker because I wanted to create a character who could potentially identify as non-binary, and the use of gender-neutral pronouns is a way to acknowledge and respect this possibility. Additionally, using gender-neutral pronouns can help to avoid assumptions about the character’s gender identity, allowing the reader to focus on the character’s actions, motivations, and personality rather than making assumptions based on gender. This can also promote inclusivity and diversity in storytelling, and help to break down gender stereotypes and biases.

[A nice explanation! I wonder what actually caused it; not that we ever know that with these big opaque boxes that are the current LLMs.]

What might the title of this story be?

A possible title for this story could be “Redemption’s Path: A Marcus “Silver” Cross Tale”. This title reflects Marcus’s journey as he navigates dangerous alliances and confronts his past in order to reclaim his former power and take control of the city’s criminal underworld. The use of the word “Redemption” highlights Marcus’s internal struggle to find redemption and make amends for his past actions, while “Path” emphasizes his journey towards a better future. The addition of “A Marcus “Silver” Cross Tale” emphasizes the central role Marcus plays in the story and the larger narrative context.

[Transcript ends]

So that was kind of neat. The little story about the title was well put together, although it had never really said before that Marcus would be actively making amends, and taking control of the city’s criminal underworld doesn’t sound all that compatible with that. In general the AI has no feeling for consistency: is the decision, for instance, between using the macguffin for good or for evil, or between using it selfishly or destroying it? As noted, the opening scene makes no narrative sense, and one expects that it would only get worse if it were asked for more.

The prose is painfully obvious and cliché ridden everywhere. Possibly some different / better prompts might have helped a little with that, I’m not sure. The basic plot ideas are also stale as a very stale thing. And both of those are really a result of the basic design of these systems; they are explicitly architected to do the most obvious and predictable thing. Any knobs and dials and things bolted on to them, to make them say interesting or correct things, rather than obvious things, are necessarily afterthoughts. So it seems unlikely that just making the systems bigger and faster will help with those aspects. In fact it’s possible that I would have enjoyed the rawer GPT-3, or even GPT-2, more in that sense. Maybe I should try whatever NovelAI is running these days? But their consistency is likely to be even worse.

There may be niches on Amazon or whatever where people write incredibly predictable stories without any particular regard for consistency, in hackneyed prose, and those people may be in danger of being replaced by AI systems. But were they making any money, did they have any readers, anyway? I don’t know.

One way that people have talked about producing entire (small) books using LLMs is to first have it produce an outline, and then have it produce each section (with further cascading levels of outline embedded if necessary). I wonder if that could help significantly with the inconsistency problem. I’m almost tempted to try it, but it would mean reading more of this mind-numbing prose…
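The outline-then-expand idea above is simple enough to sketch in a few lines. Here `generate` is just a stub standing in for whatever LLM API one would actually call; the point is the structure (feed the whole outline back into each section prompt), not the prose:

```python
def generate(prompt):
    # Placeholder: a real version would call an LLM API here.
    return f"[text for: {prompt}]"

def write_book(topic, n_chapters=3, n_sections=2):
    # First pass: get an outline (here, just chapter titles).
    outline = [
        generate(f"Chapter {c} title for a book about {topic}")
        for c in range(1, n_chapters + 1)
    ]
    book = []
    for title in outline:
        for s in range(1, n_sections + 1):
            # Re-feeding the full outline into every section prompt is
            # the trick that's supposed to keep the model on-plot.
            prompt = (f"Book outline: {outline}. "
                      f"Write section {s} of the chapter {title!r}.")
            book.append(generate(prompt))
    return book
```

Whether that actually fixes the consistency problem, rather than just producing consistently mind-numbing prose, is the experiment I haven’t brought myself to run.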

2023/01/20

County Jury Duty

Well, that’s over! For another six years (for state / county / town) or four years (for Federal). This is probably going to be chatty and relatively uninteresting.

Top tip: park in the parking lot under the library; it’s very convenient to the courthouse (although you still have to walk outside for a bit, and it was windy and rainy yesterday).

I had to report originally on Friday (the 13th!) because Monday was MLK day. On Friday 60-70 of us sat around in a big auditoriumish jury room for a while, with WiFi and allowed to use our cellphones and everything. Then they called attendance and talked about random things like the $40/day stipend if our employer doesn’t pay us or we’re self-employed (where did that tiny amount of money come from, one wonders) and where to park and so on. Then we were basically allowed to decide whether to come back on Tuesday or Wednesday (although I imagine if you were far down the perhaps-random list and most people had said one, you had to take the other).

A cute isomorphic pixel-art image of a bunch of people waiting around in a large room. Note this does not accurately reflect the County Courthouse except in spirit. Image by me using Midjourney OF COURSE.

I elected to come back on Wednesday for no particular reason. We were originally supposed to arrive on Wednesday at 9:30am, but over the weekend they called and said to arrive at 11am instead. Due to an inconvenient highway ramp closure and a detour through an area of many traffic lights, I got there at 11:10 or so and felt guilty, but hahaha it didn’t matter.

In the big Jury Room again, the 30+ of us waited around for a long time, then were led upstairs to wait around in the hallway outside the courtroom, and then after waiting some more were ushered into the courtroom to sit in the Audience section, and introduced to the judge and some officers, and then dismissed until 2pm for lunch (seriously!).

Some time after 2pm they let us back into the courtroom and talked to us for awhile about how it was a case involving this and that crime, and might take up to a month to try, and the judge is busy doing other things on Mondays and Thursday mornings so it would be only 3.5 days / week. Then they called 18 names, and those people moved from the Audience section to the Jury Box section. They started asking them the Judge Questions (where do you live, how long have you lived there, what do you do, what do your spouse and possible children do, do you have any family members who are criminal lawyers, etc, etc), and we got through a relatively small number of people and it was 4:30pm and time to go home.

I had a bit of a hard time sleeping, thinking about what the right answers to The Questions would be (how many times have I been on a jury in all those years? did we deliberate? do I know anyone in Law Enforcement? does the NSA count? should I slip in a reference to Jury Nullification to avoid being on the jury, or the opposite?) and like that.

Since the judge is busy on Thursday mornings, we appeared back at the courtroom at 2pm on Thursday, and waited around for quite awhile in the hallway, then went in and they got through questioning the rest of the 18 people in the Jury Box (after the judge asked the Judge Questions, the People and the Defense asked some questions also, although it was mostly discussions of how police officers sometimes but not always lie under oath, and how DNA evidence is sometimes right but not always, and how it’s important to be impartial and unbiased and so on, disguised as question asking).

Then they swore in like 6 of those 18 people, told the rest of the 18 that they were done with Jury Duty, and told the rest of us in the Audience section to come back at 9:30am on Friday (today!).

At 9:30 nothing happened for quite awhile in the hallway outside the auditorium, then for no obvious reason they started to call us into the courtroom one person at a time by name. There got to be fewer and fewer people, and then finally it was just me which was unusual and then they called my name and I went in. The Jury Box was now entirely full of people, so I sat in the Audience Section (the only person in the Audience Section!).

Then I sat there while the judge asked the same ol’ Judge Questions to every one of the dozen+ people (I know, I don’t have the numbers quite consistent) ahead of me, and then finally, as the last person to get them, I got them. And the Judge went through them pretty quickly, perhaps because he’d said earlier that he wanted to finish with this stage by lunchtime, and I had no chance to be indecisive about the issue of following his legal instructions exactly and only being a Trier of Fact, or anything else along those lines.

Then we had another couple of lectures disguised as questions, plus some questions, from The People and the The Defense. I’d mentioned the cat as someone who lived with me (got a laugh from that, but the Whole Truth, right?), and The People asked me the cat’s name and nature, and when I said it was Mia and she was hostile to everyone, The People thanked me for not bringing her with me (haha, lighten the mood, what?). And asked about my impartiality.

Now we’d had a bunch of people from extensive cop families say boldly that they couldn’t promise not to be biased against the defendant (and when The Defense I think it was asked if anyone would assume from The Defendant’s name on the indictment that He Must Have Done Something a couple people even raised their hands (whaaaaat)), and I perhaps as a result and perhaps foolishly said that while my sympathies would generally be with a defendant, I would be able to put that aside and be unbiased and fair and all.

So The People asked me if I could promise “100%” that I would not be affected by that sympathy, and I said quite reasonably that hardly any sentences with “100%” in them are true, and the judge cut in to say that he would be instructing the jurors to put stuff like that aside (implying that then I would surely be able to), and I said that I would (but just didn’t say “100%”) and then The People came back in saying that they need people who are “certain” they can be unbiased (so, no way), but then actually asked me if I was “confident” that I could be (a vastly lower bar) so I said yes I would.

And when all of that was over, they had us all go out to the hallway again, and wait for awhile, and then go back in to sit in the same seats. And then they had I think four of us stand up and be sworn in as jurors, and the rest of us could go out with the officer and sit in the big jury room again until they had our little papers ready to say that we’d served four days of Jury Duty.

And that was it!

My impression is that they were looking for (inter alia, I’m sure) people who either believe, or are willing to claim to believe, that they can with certainty be 100% unbiased in their findings as jurors. That is, people who are in this respect either mistaken, or willing to lie. And that’s weird; I guess otherwise there’s too much danger of appeals or lawsuits or something? (Only for Guilty verdicts, presumably, since Not Guilty verdicts are unexaminable?) The Judge did say several times that something (the State, maybe?) demands a Yes or No answer to his “could you be an unbiased Juror and do as you’re told?” question, and when people said “I’ll try” or “I think so” or “I’d do my best” or whatever, he insisted on a “Yes” or a “No”. (So good on the honesty for those cop-family people saying “No”, I suppose.)

So if my calculations are roughly correct, after ummm two or 2.5 days of Jury Selection, they’ve selected only about 10 jurors, and exhausted the Jan 13th jury draw; so since they need at least 12 jurors and 2 (and perhaps more like 6) alternates, they’re going to be at this for some time yet! (Hm, unless it’s not a felony case? In which case 10 might be enough? But it sounded like a felony case.)

2023/01/07

Parker House Rolls

Parker House Rolls in a glass baking dish

I had one or more Parker House Rolls somewhere once, at some time in the past, and recently something reminded me of them, and today I made some!

Basically this is just a slightly sweet buttery sticky yeast-raised dough, not kneaded, divided into sixteen small loaves and baked all together in the same baking dish so that they grow somewhat back together again and you can have the fun of separating them.

(Even a non-yeast leavened dough might work; I wonder what would happen? This is the kind of thing I wonder about.)

Various recipes on the interwebs (pretty much all of them, really) call for stuff that I don’t have at hand, like sea salt, kosher salt, potato flakes, vegetable shortening, whole milk for that matter, and so on; and also stuff that I don’t have the patience for (or for cleaning up after), like separating eggs, or using very specific attachments and settings of an electric mixer. None of these appear to be necessary.

Here’s the recipe that I roughly actually used; it’s probably closest to this one, but with anything that seemed like too much work or I didn’t have in the house left out.

Parker House Rolls (makes 16)

1 1/4 Cup milk (any kind really; if you use skim, maybe add some extra butter), warmed
1 Tbsp active dry yeast
1/4 Cup sugar
Some salt (I dunno maybe a tsp.)
2 Eggs
8 Tbsp (one stick) butter (unsalted if you have it), softened
4ish Cups of flour

Warm up the milk to room temperature or a bit more, in the microwave or whatever. Similarly, soften the butter by mashing it with a fork, putting it in the microwave on Defrost, or whatever. You can even melt it, but it may impact the consistency of the finished product if you do, I dunno.

Mix the warm milk, yeast, and 2 Tbsp (half) of the sugar in the big yellow bowl or other largish mixing bowl. Let that sit for 5-10 minutes. It may or may not froth up and get foamy if the yeast is feeling especially active; don’t sweat it either way.

Add the rest of the sugar, the salt, the eggs, and 6 Tbsp (three quarters) of the butter to the bowl, mix briefly.

Add two cups of flour, and mix until incorporated. You can use a stand mixer or anything else you like in this step, or just a sturdy spoon and main strength. Continue adding flour, about half a cup at a time or whatever you like, until you have a sticky dough that is pulling away from the sides of the bowl, but still sticking to the bottom, or at least showing signs that it would like to. Depending on how soft you softened the butter, there may be lumps of butter in the dough; squash some of them if so, but don’t worry about it too much.

Cover the bowl with a damp cloth or house-rules equivalent, and let sit for say 90 minutes in a cozy place.

After 90 minutes, remove the cloth and gently punch down the dough. Flour your hands because it will have gotten even stickier while rising! Divide the dough into 16 pieces, without unnecessary kneading or other roughness.

For traditionally-formed rolls, flatten each piece and fold it in half; or divide the dough into four pieces and for each piece fold it in half and cut it into quarters, similarly resulting in 16 folded pieces. Or look up various other more elaborate forming methods on the interwebs.

Put the 16 pieces in a four-by-four array (folded edges down) into a 9×13 inch lightly greased (lightly cooking-sprayed is simplest) baking dish; they should be touching each other.

Cover with a damp cloth or equivalent again, and let rise for 45 minutes.

Preheat your oven to 350°F while the dough gets a final few minutes of rising, then remove the cloth and pop the baking dish into the oven. Cook for 25 minutes or until looking pleasantly (but not darkly!) brown on top, or whenever your intuition tells you they’re done.

Brush tops with the remaining 2 Tbsp of butter. Let cool for a bit in the baking dish, then tear apart to serve.

May be kept or frozen like any bread that has butter and milk and eggs and no preservatives, but really you’re going to eat them all almost immediately, aren’t you?

2022/12/04

Omelas, Pascal, Roko, and Long-termism

In which we think about some thought experiments. It might get long.

Omelas

Ursula K. Le Guin’s “The Ones Who Walk Away From Omelas” is a deservedly famous very short story. You should read it before you continue here, if you haven’t lately; it’s all over the Internet.

The story first describes a beautiful Utopian city, during its Festival of Summer. After two and a half pages describing what a wise and kind and happy place Omelas is, the nameless narrator reveals one particular additional thing about it: in some miserable basement somewhere in the city, one miserable child is kept in a tiny windowless room, fed just enough to stay starvingly alive, and kicked now and then to make sure they stay miserable.

All of the city’s joy and happiness and prosperity depends, in a way not particularly described, on the misery of this one child. And everyone over twelve years old in the city knows all about it.

On the fifth and last page, we are told that, now and then, a citizen of Omelas will become quiet, and walk away, leaving the city behind forever.

This is a metaphor (ya think?) applicable whenever we notice that the society (or anything else) that we enjoy, is possible only because of the undeserved suffering and oppression of others. It suggests both that we notice this, and that there are alternatives to just accepting it. We can, at least, walk away.

But are those the only choices?

I came across this rather excellent “meme” image on the Fedithing the other day. I can’t find it again now, but it was framed as a political-position chart based on reactions to Omelas, with (something like) leftists at the top, and (something like) fascists at the bottom. “Walk away” was near the top, and things like “The child must have done something to deserve it” nearer the bottom. (Pretty fair, I thought, which is why I’m a Leftist.)

It’s important, though, that “Walk away” wasn’t at the very top. As I recall, the things above it included “start a political movement to free the child”, “organize an armed strike force to free the child”, and “burn the fucking place to the ground” (presumably freeing the child in the process), the latter being at the very top.

But, we might say, continuing the story, Omelas (which is an anagram of “Me also”, although I know of no evidence that Le Guin did that on purpose) has excellent security and fire-fighting facilities, and all of the top three things will require hanging around in Omelas for a greater or lesser period, gathering resources and allies and information and suchlike.

And then one gets to, “Of course, I’m helping the child! We need Councilman Springer’s support for our political / strike force / arson efforts, and the best way to get it is to attend the lovely gala he’s sponsoring tonight! Which cravat do you think suits me more?” and here we are in this quotidian mess.

Pascal

In the case of Omelas, we pretty much know everything involved. We don’t know the mechanism by which the child’s suffering is necessary for prosperity (and that’s another thing to work on fixing, which also requires hanging around), but we do know that we can walk away, we can attack now and lose, or we can gather our forces and hope to make a successful attack in the future. And so on. The criticism, if it can even be called that, of the argument, is that there are alternatives beyond just accepting or walking away.

Pascal’s Wager is a vaguely similar thought experiment in which uncertainty is important; we have to decide in a situation where we don’t know important facts. You can read about this one all over the web, too, but the version we care about here is pretty simple.

The argument is that (A) if the sort of bog-standard view of Christianity is true, then if you believe in God (Jesus, etc.) you will enjoy eternal bliss in Heaven, and if you don’t you will suffer for eternity in Hell, and (B) if this view isn’t true, then whether or not you believe in God (Jesus, etc.) doesn’t really make any difference. Therefore (C) if there is the tiniest non-zero chance that the view is true, you should believe it on purely selfish utilitarian grounds, since you lose nothing if it’s false, and gain an infinite amount if it’s true. More strongly, if the cost of believing it falsely is any finite amount, you should still believe it, since a non-zero probability of an infinite gain has (by simple multiplication) an infinite expected value, which is larger than any finite cost.
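Just for fun, the Wager’s “simple multiplication” can be written out in a few lines of Python. The numbers here are made up (any non-zero probability and any finite cost will do; that’s the whole point of the argument):

```python
import math

# A hedged sketch of the Wager's expected-value arithmetic.
# p_true and cost_if_false are arbitrary stand-ins; the argument
# only needs p_true > 0 and cost_if_false finite.
p_true = 1e-12           # tiniest non-zero chance the view is true
gain_if_true = math.inf  # eternal bliss, modeled as infinite payoff
cost_if_false = 100.0    # some finite cost of believing falsely

ev_believe = p_true * gain_if_true - (1 - p_true) * cost_if_false
print(ev_believe)  # inf: the infinite term swamps any finite cost
```

No matter how small you make `p_true` or how large you make the finite cost, the product with infinity stays infinite, which is exactly the move the argument relies on.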

The main problem with this argument is that, like the Omelas story but more fatally, it offers a false dichotomy. There are infinitely more possibilities than “bog-standard Christianity is true” and “nothing in particular depends on believing in Christianity”. Most relevantly, there are an infinite number of variations on the possibility of a Nasty Rationalist God, who sends people to infinite torment if they believed in something fundamental about the universe that they didn’t have good evidence for, and otherwise rewards them with infinite bliss.

This may seem unlikely, but so does bog-standard Christianity (I mean, come on), and the argument of Pascal’s Wager applies as long as the probability is at all greater than zero.

Taking into account Nasty Rationalist God possibilities (and a vast array of equally useful ones), we now have a situation where both believing and not believing have infinite expected advantages and infinite expected disadvantages; arguably they cancel out, and one is back to wanting to believe either what’s true, or what’s finitely useful, and we might as well not have bothered with the whole thing.
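Amusingly, floating-point arithmetic agrees that this cancellation leaves you with nothing usable; once both sides of the ledger are infinite, the difference is simply not a number:

```python
import math

# Once the Nasty Rationalist God gives non-belief its own infinite
# expected gain, both options carry +inf and -inf terms. IEEE
# arithmetic mirrors the indeterminacy directly:
ev_net = math.inf - math.inf
print(ev_net)            # nan
print(ev_net == ev_net)  # False: NaN compares unequal even to itself
```

Which is a cute way of saying that the expected-value machinery just shrugs, and you’re back to ordinary finite reasoning.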

Roko

Roko’s Basilisk is another thought experiment that you can read about all over the web. Basically it says that (A) it’s extremely important that a Friendly AI is developed before a Nasty AI is, because otherwise the Nasty AI will destroy humanity and that has like an infinite negative value given that otherwise humanity might survive and produce utility and cookies forever, and (B) since the Friendly AI is Friendly, it will want to do everything possible to make sure it is brought into being before it’s too late because that is good for humanity, and (C) one of the things that it can do to encourage that, is to create exact copies of everyone that didn’t work tirelessly to bring it into being, and torture them horribly, therefore (D) it’s going to do that, so you’d better work tirelessly to bring it into being!

Now the average intelligent person will have started objecting somewhere around (B), noting that once the Friendly AI exists, it can’t exactly do anything to make it more likely that it will be created, since that’s already happened, and causality only works, y’know, forward in time.

There is a vast (really vast) body of work by a few people who got really into this stuff, arguing in various ways that the argument does, too, go through. I think it’s all both deeply flawed and sufficiently well-constructed that taking it apart would require more trouble than it’s worth (for me, anyway; you can find various people doing variously good jobs of it, again, all over the InterWebs).

There is a simpler variant of it that the hard-core Basiliskians (definitely not what they call themselves) would probably sneer at, but which kind of almost makes sense, and which is simple enough to express in a way that a normal human can understand without extensive reading. It goes something like (A) it is extremely important that a Friendly AI be constructed, as above, (B) if people believe that that Friendly AI will do something that they would really strongly prefer that it not do (including perhaps torturing virtual copies of them, or whatever else), unless they personally work hard to build that AI, then they will work harder to build it, (C) if the Friendly AI gets created and then doesn’t do anything that those who didn’t work hard to build it would strongly prefer it didn’t do, then next time there’s some situation like this, people won’t work hard to do the important thing, and therefore whatever it is might not happen, and that would be infinitely bad, and therefore (D) the Friendly AI is justified in doing, even morally required to do, a thing that those who didn’t work really hard to build it, would strongly rather it didn’t do (like perhaps the torture etc.). Pour encourager les autres, if you will.

Why doesn’t this argument work? Because, like the two prior examples that presented false dichotomies by leaving out alternatives, it oversimplifies the world. Sure, by retroactively punishing people who didn’t work tirelessly to bring it into being, the Friendly AI might make it more likely that people will do the right thing next time (or, for Basiliskians, that they would have done the right thing in the past, or whatever convoluted form of words applies), but it also might not. It might, for instance, convince people that Friendly AIs and anything like them were a really bad idea after all, and touch off the Butlerian Jihad or… whatever exactly that mess with the Spacers was in Asimov’s books that led to there being no robots anymore (except for that one hiding on the moon). And if the Friendly AI is destroyed by people who hate it because of it torturing lots of simulated people or whatever, the Nasty AI might then arise and destroy humanity, and that would be infinitely bad!

So again we have a Bad Infinity balancing a Good Infinity, and we’re back to doing what seems finitely sensible, and that is surely the Friendly AI deciding not to torture all those simulated people because duh, it’s friendly and doesn’t like torturing people. (There are lots of other ways the Basilisk argument goes wrong, but this seems like the simplest and most obvious and most related to the guiding thought, if any, behind this article.)

Long-termism

This one is the ripped-from-the-headlines “taking it to the wrong extreme” version of all of this, culminating in something like “it is a moral imperative to bring about a particular future by becoming extremely wealthy, having conferences in cushy venues in Hawai’i, and yes, well, if you insist on asking, also killing anyone who gets in our way, because quadrillions of future human lives depend on it, and they are so important.”

You can read about this also all over the InterThings, but its various forms and thinkings are perhaps somewhat more in flux than the preceding ones, so perhaps I’ll point directly to this one for specificity about exactly which aspect(s) I’m talking about.

The thinking here (to give a summary that may not exactly reflect any particular person’s thinking or writing, but which I hope gives the idea) is that (A) there is a possible future in which there are a really enormous (whatever you’re thinking, bigger than that) number of (trillions of) people living lives of positive value, (B) compared to the value of that future, anything that happens to the comparatively tiny number of current people is unimportant, therefore (C) it’s morally permissible, even morally required, to do whatever will increase the likelihood of that future, regardless of the effects on people today. And in addition, (D) because [person making the argument] is extremely smart and devoted to increasing the likelihood of that future, anything that benefits [person making the argument] is good, regardless of its effects on anyone else who exists right now.

It is, that is, a justification for the egoism of billionaires (like just about anything else your typical billionaire says).

Those who have been following along will probably realize the problem immediately: it’s not the case that the only two possible timelines are (I) the one where the billionaires get enough money and power to bring about the glorious future of 10-to-the-power-54 people all having a good time, and (II) the one where billionaires aren’t given enough money, and humanity becomes extinct. Other possibilities include (III) the one where the billionaires get all the money and power, but in doing so directly or indirectly break the spirit of humanity, which as a result becomes extinct, (IV) the one where the billionaires see the light and help do away with capitalism and private property, leading to a golden age which then leads to an amount of joy and general utility barely imaginable to current humans, (V) the one where the billionaires get all the money and power and start creating trillions of simulated people having constant orgasms in giant computers or whatever, and the Galactic Federation swings by and sees what’s going on and says “Oh, yucch!” and exterminates what’s left of humanity, including all the simulated ones, and (VI) so on.

In retrospect, this counterargument seems utterly obvious. The Long-termists aren’t any better than anyone else at figuring out the long-term probabilities of various possibilities, and there’s actually a good reason that we discount future returns: if we start to predict forward more than a few generations, our predictions are, as all past experience shows, really unreliable. Making any decision based solely on things that won’t happen for a hundred thousand years or more, or that assume a complete transformation in humanity or human society, is just silly. And when that decision just happens to be to enrich myself and be ruthless with those who oppose me, everyone else is highly justified in assuming that I’m not actually working for the long-term good of humanity, I’m just an asshole.

(There are other problems with various variants of long-termism, a notable one being that they’re doing utilitarianism wrong and/or taking it much too seriously. Utilitarianism can be useful for deciding what to do with a given set of people, but it falls apart a bit when applied to deciding which people to have exist. If you use a summation, you find yourself morally obliged to prefer a trillion barely-bearable lives to a billion very happy ones, just because there are more of them. Whereas if you go for the average, you end up being required to kill off unhappy people to get the average up. And a perhaps even more basic message of the Omelas story is that utilitarianism requires us to kick the child, which is imho a reductio. Utilitarian calculus just can’t capture our moral intuitions here.)
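The two aggregation problems in that parenthetical are easy to see with toy numbers (scaled way down from trillions and billions, and entirely made up, but the shape of the problem is the same):

```python
# A hedged sketch of how the two standard utilitarian aggregation
# rules each go wrong. Utility values are arbitrary illustrations.
def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

barely_bearable = [0.01] * 1_000_000  # many lives just barely worth living
very_happy = [9.0] * 1_000            # far fewer, much happier lives

# Summation prefers the vast barely-bearable population:
print(total(barely_bearable) > total(very_happy))  # True (10000.0 vs 9000.0)

# Averaging prefers removing the unhappy to raise the mean:
population = [9.0, 9.0, 1.0]
culled = [u for u in population if u > 5.0]
print(average(culled) > average(population))  # True
```

Neither rule tracks the intuition that you shouldn’t get a better world just by multiplying marginal lives, or by deleting the unhappy ones.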

Coda

And that’s pretty much that essay. :) Comments very welcome in the comments, as always. I decided not to add any egregious pictures. :)

It was a lovely day, I went for a walk in the bright chilliness, and this new Framework laptop is being gratifyingly functional. Attempts to rescue the child from the Omelas basement continue, if slowly. Keep up the work!

2022/11/07

NaNoWriMo 2022, Fling Twelve

Light passes through windows. This is a puzzle. This is a complicated story, a story that no one understands, in four words.

Beams of sunlight pass through the library windows, making patterns on the wall.

Sitting where I sit, among the shelves and the piles of books, I see beams of sunlight passing through the library windows, making patterns on the wall.

My evidence for the existence of beams of sunlight is (at least) in two parts: I see dust motes dancing (dancing? what is it to dance? what kinds of things can dance?) in the sunbeams, visible (to me) in the light, where in other parts of the (dusty) library air, there are no (to me) visible dust motes dancing (or otherwise) in the air, and one explanation (is it the best one? what is best?) is that there is a sunbeam, or at least a beam of light, passing through the air there. (How does a beam of light pass through the air, let alone through the glass of the windows?)

The second part of my evidence is the patterns on the wall. I know, or I remember (in memories that I have now, at this moment, the only moment, the only entity, that exists) that the wall is really (what could that possibly mean?) more or less uniform in color; vaguely white, and the same vague whiteness more or less all over. But what I see, or what I see at some level of awareness (what are levels of awareness? what is awareness? who is aware?) is a complex pattern of light and dark, triangles and rectangles and more complex figures laid out and overlapping, and I theorize (automatically, involuntarily, whether or not I intend to) that these brighter shapes and triangles are where the sunbeam (passing through empty space, and then the air, the window, the air again) strikes the wall, and the darker areas, the ground to the sunbeam’s figure, are simply the rest, the shadows, where the sunbeam does not strike, or does not strike directly.

(Or the dark places are where the frames of the window and the edges of shelves and chairs, things outside and things inside, cast their shadows, and the light places, the ground to the shadows’ figure, is the rest, where the shadows do not fall; figure is ground and ground is figure.)

Can we avoid all of this complexity, if we hold Mind as primary? I am. Or, no, as generations of philosophers have pointed out, feeling clever to have caught Descartes in an error, not “I am” but only something along the lines of “Thought is”. If there is a thought that “thought is”, that can hardly be in error (well, to first order). But if there is a thought “I think”, that could be an error, because there might be no “I”, or it might be something other than “I” that is thinking.

Second attempt, then! Can we avoid all of this complexity, if we start with “Thought is”? Or “Experience is”?

Experience is. There is this instant of experience. In this instant of experience, there is a two-dimensional (or roughly two-dimensional) surface. Whether or not there is an understanding of dimensions and how to count them, there is either way still a two-dimensional surface, here in this experience. In some places, the two-dimensional surface is bright. In some places, it is dark; or it is less bright.

Whether or not there is any understanding of brightness and darkness, what might lead to brightness and darkness, anything about suns or windows or how they work, there is still this brightness, in this single moment of experience, and there is still this darkness.

(Whether the darkness is actually bright, just not as bright as the brightness, whether the surface is really two-dimensional in any strong sense, or somewhere between two and three dimensions, or whether dimensions are ultimately not a useful or coherent concept here, there is still, in this singular moment of experience that is all that there is, this experience, this experience, which is what it is, whether or not the words that might be recorded here as a result of it (and whatever “result” might mean) are the best words to describe it.)

And whether it is described at all, whether description is even possible, does not matter; the main thing, dare I write “the certain thing”, is that this (emphasized) is (similarly emphasized).

So.

From this point of view, we may say, Mind is primal. Mind, or whatever we are attempting successfully or unsuccessfully to refer to when we use the word “Mind”, does exist, or at any rate has whatever property we are attempting to refer to when we say “does exist”. Except that “refer to” and “property” are both deeply problematic themselves here.

This is why the ancient Zen teachers (who exist, in this singular present moment of experience, only as memories of stories, memories that exist now) are said to have, when asked deep and complex questions through the medium of language, and impossibly concerning language, have responded with primal experience, with blows or shouts or (more mildly) with references to washing one’s bowl, chopping wood, carrying water.

We can remember that there is this. So, what next?

Language (this language, many other languages, those languages that I am familiar with, that I know of) assumes time.

Without the concept of time, it is difficult to speak, to write, to hypothesize.

Let alone to communicate.

To communicate!

The impossible gulf: that you and I both exist (what does it mean to exist?) and that by communicating (whatever that might be) one or both of us is changed (otherwise why bother?). But change, like time (because time), is an illusion.

So!

Is it necessary (categorically, or conditionally) to participate in the illusion? Or pretend? To act (or to speak, as speech is after all action) as though time and change were real?

The sun comes through the windows and casts shadows on the wall. Is there someone at the door?

Fling Thirteen

2022/11/05

NaNoWriMo 2022, Fling Eight

“So are you and Steve okay?”

“Yeah, why wouldn’t we be?”

Colin Colson and Kristen Lewis sat in Colin’s sunny dusty library room on a warm Saturday afternoon, talking lazily.

“Well, not to reveal anything said in confidence, but I understand there was something about a virtuality that you sent him, and his reaction –“

Kristen’s laugh interrupted Colin and he smiled.

“That was nothing,” she said, “and he apologized nicely and everything. He’s not really an idiot, he’s just a guy and all.”

The sun shone through the big windows onto the rugs, the piled books. Kristen and Colin sat in a pair of overstuffed old chairs, just out of the sunbeam.

“I think you frighten him,” Colin suggested. He was, as always, in a comfortable suit perfectly tailored to his small body. His idiopathic childhood growth hormone deficiency had given him the approximate proportions of a nine-year-old boy (“an exceedingly handsome one, obviously”), and his trust fund and his own writing had given him the means to dress those proportions immaculately.

“Because of my brilliance and beauty?” she asked with a matching smile.

“Of course,” Colin replied, “as you well know.”

These two had been lovers briefly and experimentally in school, something that Steve knew and tried not to remember very often. She had not been able to get over the oddness of his size in their intimacy, and he had sensed that without resentment. Now they were good friends, and when Colin felt the need for erotic physical contact, his trust fund and his writing were able to supply that as well, via a few open-minded professionals with whom he had excellent arrangements.

“I suppose I’m oversensitive,” Kristen suggested.

“I don’t think so; you are who you are, and if you didn’t frighten him a little, he wouldn’t like you as much.”

She smiled again at that; she smiled often in Colin’s presence. He had a wry wisdom, she thought, that accepted reality as it was, without rancor or unnecessary judgement.

“Do you want to see the virtuality, too?”

“Was it Hints of Home?”

The girl nodded, pleased.

“I saw it on your feed,” he said, “and spent an hour inside. It’s … lovely. The signposts are so subtle.”

“The tools we have these days are amazing,” she said, “you should really try it.”

“I will hold to my old-fashioned linear words, thank you very much,” he replied, adding a stuffy timbre to his voice, “the virtual is entirely too huge for me.”

She nodded. This was a conversation that they had had more than once before. The world’s artists were divided roughly in half, it seemed, between those who fully embraced the new creative universe of AI-assisted fully-immersive digital universe-building, and those who continued to work with words and images in the old-fashioned way (not counting the occasional spell-checker or digital white-balance adjustment).

And, he reflected, between those crafters who had fully launched into the virtual and digital, and those like Steve who preferred to construct physical rigs with which one could risk one’s physical life by speeding through the physical desert at ridiculous speeds.

“I wonder if it was partly because he doesn’t really grok the virtual, still,” Kristen said, her thoughts apparently paralleling his own, as they sometimes did, “and because I sent him the link while he was deep into tuning a rig.”

“Cognitive dissonance,” Colin replied, nodding. He himself, although he was a loyalist of linear text in his own writing, greatly enjoyed modern media as well as a consumer, and found Kristen’s virtualities, small intricate worlds of their own with mysterious and subtly-delivered themes, invariably rewarding.

“But he always remembers to be complimentary, eventually,” she smiled, and he saw in the abstraction of her eyes that she was thinking fond thoughts of his large awkward friend Steve, which made him glad.

In the part of him that always watched himself, he wondered if what he had seen was really there. Did Kristen’s brown eyes defocus, the exact configuration of the whites revealing that they were no longer both pointing at some specific point in the room? Did the light in the room (the photons bouncing here and there in law-like but complex ways) carry enough evidence of that to his own eyes that his brain could justifiably (so much to know about justification!) conclude that her eyes were indeed defocused? And then did he know her well enough (what does it mean to know someone? what is “someone else”?) to justifiably conclude that if her eyes were defocused in this present moment (the only moment that exists), it meant in context that she was thinking of Steve (who is “Steve” when he is not here)?

There was probably something else in her face, some fond expression on her cheeks and her lips, the corners of her eyes, that was providing him subconscious additional evidence that she was thinking fondly of someone, and, he thought, there was perhaps a lack of subtle clues of tension that would have suggested she was thinking fondly of anyone other than Steve. Because his brain (rightly or wrongly) assumed that if she was thinking fondly of anyone other than Steve (or Colin) in Colin’s presence, she would be tense?

He realized that he’d drifted several removes from reality, and that Kristen was standing now, her back in the sun, looking over the books on one of the shelves (never enough shelves!).

“Ach,” she said, “I love Giannina Braschi. But she is so dense, I feel like I should spend a whole day on every paragraph.”

Colin nodded.

“And that is how I feel about trying to write in the virtual; if I can spin off every interesting potential from every thought, the shortest of short pieces would be a billion words.”

“And that isn’t true of plain text?” Kristen laughed, landing an unavoidable blow.

“But I can’t do it there, no one expects it, text encourages linearity; world-building encourages sprawl, differentiation, spreading out along infinitely many branches at every moment. The Many Worlds universe!”

“One merely has to be disciplined,” she said, coming to perch on the arm of his chair.

He smiled up at her, and she put out one brown hand and stroked his hair.

“Don’t do that,” he said, but he moved his head to press against her palm, like a happy kitten.

This present moment is all that exists, he thought, and it is possible to enjoy a girl’s hand on his head without understanding anything about it. What is touch? What are hands? What is love and what is loyalty? Which atoms belong to his hair, and which to Kristen’s palm? Was this fluttering in his chest pleasure, or sorrow, or both? What is discipline, what is fear? Why, in this moment, did she choose to sit there, did she choose to touch him? In this moment, every past moment exists only as a memory, in this moment. What makes a memory true, what makes a memory false?

She touched the back of her hand to his cheek for an instant, and then returned to her own chair.

“And how is it with you?” she asked, looking warmly over at him. “What are you writing now?”

Fling Nine

2022/11/02

NaNoWriMo 2022, Fling Three

The cars are the wheels of the city. The city is the body of the car. Smoke and steam are air and water, air and water and fire and thoughts of time and distance and speed.

The streets talk to the windows, and the windows are the words of the streets. Behind the windows are drums, and phones, and the drivers of cars, dictionaries and old photos of the rowing team. I want to take you away from here, but you are in the car, and the car’s wheels are your wheels, and the breath of the smoke is your breath.

Come to the club tonight, you say, and the club forms in my mind, accretes around the sand-grit of your cigarette ash; the club is smooth and pearlescent, and slides through my mind like a child’s toy fallen into the foaming rush of the river.

The car is long and thick, and inside it you bring a world of smoke and liquor and dancing, and the street talks to the windows of the buildings that pass by, shop-fronts and apartments, the precinct house and the pillory, the Church of Our Lady of the Distant Fields, where the statues weep for our sins, and we sin while weeping, all the world sinning and weeping together on this holiest of Saturday nights.

Dangle me like a fishhook, baby, the singer commands, as smoke dangles in the air, and the fishhooks of the police watch from the high balconies far above the roof, the searchlights searching, the voices raised in song, joyful sinners kissing weeping saints, my long circle arcing finally back around to you, you the golden child, the proud stuff of dreams there in your seat by the window, louche and relaxed and pouring yourself another two fingers of Oban Single Malt from the car’s very own bar, Simon the driver whistling softly a tune that his mother used to sing him at bed time, because he knows that you don’t mind when he whistles.

The cars are the wheels of the city, they hold it up and move it along, they have bags of money in the trunk, in the boot, stuffed under the back seat, and money talks, money talks to the street, and the street talks to the windows, and the windows give their light generously to the air, smoke curling and coiling, and for all of the talking and shouting and singing and cries of anger or joyful release, for all the growling of engines and screeching of tires, the city is silent in the night by the river.

“Look at the lights on the water,” you say, or you said on some other night. Your sweet red cloche hat cradles the city’s soul in its arms, and I think of the smooth surface of the water, and what might be under it, and how far down the water goes, down down down from the air and the piers and the docks and the water-stairs, down forever into the depths where there is no air, but there is water and mud and debris, and centuries of forgotten things.

A penny for your thoughts, you might say, but you don’t, as the patterns of the windows and the headlights and the neon signs reflect from the surfaces of your eyes. Your eyes take in the patterns of light, eagerly welcoming them as new examples of familiar things, allowing the carefully-shaped glowing tubes to deliver their photons directly to your hidden intimate retinas, forming shapes and signs and letters that pop in your brain, and whisper to your mind, Budweiser, Open, Hotel, Eat, Open, Restaurant, Delicatessen, Music, Drugs, Radio City.

The words whispering into your mind from the neon signs are words chosen, planned, by people in the past, people who are alive or dead, present or absent, but have left their mark on the city, and now the city whispers their words into your mind as you sip your scotch, and the rich peaty memory of time slides burning down your mortal organic throat.

The cars are the wheels of the city, the night is oil, the smoke is gasoline, and everything is flammable, every neon sign and every breath and every steakhouse catching an efflorescent fire from the thoughts of children, from the cries and the ambitions of adults, the skittering of rats under the streets, in the old tunnels and the new tunnels, bright with subterranean dew, where mushrooms grow small and stunted, closely huddled to the cement and the iron rails.

Sharp metal softly pierces and separates tissue, could softly pierce and separate the membranes of the tires of the wheels of the cars of the city, letting the air out in grateful sighs.

Thoughts of sharp metal turn themselves in the neon light, reflecting in curving patterns on the metal panels of the moving cars and the doors of the club. Men and women come and go, men in dark suits and women in bright dresses, wool and felt, cotton and silk, the whisper of cloth against cloth and cloth against skin.

Is this where we have come to, is this what we have come to? When the dark stone blocks are arranged in the old way, they spell the secret name of God, and if the name is spoken the doors will unlock, and all will be revealed, and ancient things will be released. What would the rough beast, slouching toward Bethlehem to be born, make of the city and its neon, the club and its busty cigarette girls, the insouciant blues sliding sleek into the foggy night without visible regard for the depths of time and the need for salvation?

Save me, honey, hold me, baby, oh what you do to me. Only a worshiper can understand, only on our knees are we gifted with that final revelation (“now you can drink it or you can nurse it, it don’t matter how you worship, as long as you’re down on your knees,” like the man said), that final gift that comes whether or not we ask for it, whether or not we want it.

“I’ll never understand you,” you said, with your lips and tongue and your throat, spoken from somewhere in the pulsing depths of your body, your brain, your tenuous connection to Mind, and when you said it it shook the air, and the air shook membranes within me, and my brain pinged and my Mind changed, and I thought that thought, that you would never, in fact, understand me. Who am I, and who are you? When the blocks align, all questions will be answered, and we will know the will of God.

The cars are the wheels of the city, the city is the heart of the nation, the nation is the skin of the world. Beneath the skin lie the bones, the muscles, the blood. The blues sing sweetly of the blood, how it pulses and how it sings, and the smoke gathers in the upper corners of the big room, a woman laughs, a chair scrapes, someone cries out something unintelligible, drinks are served, and slide down so many tender throats.

I feel the dawn coming. I feel the dawn coming, I feel the dawn. Coming.

Fling Four

2022/10/31

Weirdness from the Copyright Office

A quickish update. I have said, and still believe, that things created using AI tools are just like anything else with respect to copyright. But recent events remind me that the Copyright Office is made up of people, and people are unpredictable, and US Copyright law is in many places a squashy mess made up of smaller squashy messes, so logic does not always apply.

Here is a currently-relevant set of data points:

  • I have registered the copyright on an image I made using MidJourney. I didn’t mention that I used MidJourney (or Chrome, or Windows) on the application form, because there was no place to put that; the form didn’t ask. The application for registration was granted routinely, without any complication.
    • I imagine there are hundreds / thousands of similar registrations from other people.
  • This person has registered the copyright on a work that they made using MidJourney (I think it was), and the work itself makes it clear that MidJourney was used. The application was, afaik, granted routinely, without any complication.
    • But now it appears that the copyright office has said “oh wait we didn’t notice that MidJourney thing, so we’re cancelling your registration”.
    • And the person is appealing, apparently with the help of MidJourney themselves. (Hm, they’ve also apparently deleted some of their tweets on the subject; lawyer’s advice perhaps.)
  • This person has apparently applied to register various images made with various workflows involving AI (dalle2, I think) to various extents, all clearly stated, and rather than being simply accepted or rejected, they’ve received emails from the copyright office asking them for details of what they did, and, especially bizarrely, suggesting that at least one of the works might have been “conceived” by the AI.
    • Which seems crazy, because the Copyright Office has generally had the opinion that software isn’t creative, and can’t (like) conceive things.

I suspect that things are just rather in disarray at the Copyright Office, and different examiners are doing different things, perhaps having gotten different memos on the subject, or just having their own different opinions about things. It will be interesting to see how the appeal mentioned above goes!

To me, it seems obvious that things created with AI tools should be prima facie registrable with the Copyright Office, just as photographs presumably are, and if someone wants to challenge a registration based on some legal theory about lack of creativity or derivative works or whatever, they can do that. The Copyright Office itself, I would think, would want to stay far away from any situation where it has to evaluate how many units of creativity are in each of the kazillions of applications it gets daily.

On the other hand, the Copyright Office could simply issue some sort of guidance saying “We won’t register copyrights on works created with the significant use of an AI tool like dalle or MidJourney, so don’t bother asking” (and could even update the forms to have a question about it).

I think that would be dumb, and would eventually lead to court cases that would either overturn that guidance or at least cause a great deal of faffing about that they could have avoided.

But then people and government offices do dumb stuff all the time, so who knows! All is in flux…

And here is an image that I made using MidJourney. No matter what the Copyright Office thinks today. :)

Updates: Things have developed legally and otherwise since this was posted; I recommend the copyright tag on the weblog here for currency.