Archive for December, 2022

2022/12/26

December 26th, 2022

We made just 106 dumplings this year, plus another eight filled with Extra Sharp Cheddar Cheese (that was the little boy’s idea; they’re pretty good!). This is a smaller number than usual (drill back into prior years here). The small number was probably mostly because single units of ground meat from FreshDirect tend to weigh just a pound, whereas single units from the grocery in prior years were more like 1.25 to 1.4 pounds. (Although, come to think of it, just where did we get the ground meat last year? Not sure.) And also because grownups tend to put more meat in each dumpling, perhaps. But in any case, we are now all pleasantly full, and the little daughter and her BF are safely back in the urbanity.

What has occurred? I feel like things have occurred, to an extent. I am more on Mastodon now than on Twitter, and if you want to keep up with the images I’ve been making in Midjourney and so on, you’ll want my Pixelfed feed. I listed various of these pointers back the other week (and wow, having every chapter of the novel as a weblog post makes it hard to scroll through the weblog). When Elon “facepalm” Musk briefly prohibited linking from Twitter to Mastodon, I actually set up a LinkTree page with my links.

Someone must have said “they can still link to Mastodon via Linktree” in his hearing, because he then briefly prohibited linking to LinkTree. That caused me to set up my own Links page over on the neglected (and in fact apparently pretty much empty) theogeny.com; I should put back all the stuff that used to be there sometime!

Note how ossum that Links page is! When you move the cursor over it, the link under the mouse (if any) changes color (although I drew the line at having it bouncily change size the way Linktree does). You can look at the page source, and see the lovely hand-coded CSS and HTML. :) It even validates! (The W3C seems to have had a change of mind about validation badges, which makes me a little sad, so there’s no little “valid HTML 5!” badge on the page that links to the verification of the claim, but hey.)

That reminded me of the One-Dimensional Cellular Automaton that I made in hand-coded CSS and HTML and JavaScript the other year; it vanished for a long time, even from my personal backups of davidchess.com, and I’d almost given up on finding it until I thought of the Internet Archive’s Wayback Machine, and discovered that it had snapshotted that page exactly once, in February of 2012.

So after a bit of fiddling around, I can once again present the One-Dimensional Cellular Automaton for your amusement. The page source there is also quite readable, I tell myself.
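(The algorithm behind a page like that is tiny, by the way. The original is hand-coded JavaScript; purely as an illustration of the idea, here is the same sort of elementary two-state, three-neighbor automaton sketched in Python, with the rule number and grid width chosen arbitrarily:)

```python
# An elementary one-dimensional cellular automaton: each cell's next state
# depends only on itself and its two neighbors. The rule number's eight
# bits give the next state for each of the eight possible neighborhoods.

def step(cells, rule=110):
    """Advance one generation; the edges wrap around."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The whole trick is that the rule number is a lookup table: neighborhood `(left, center, right)` read as a three-bit number selects one bit of the rule.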

Note that many other things on davidchess.com are currently / still broken, although in the process of bringing that page back, I also brought the main page back, so you can see the extremely retro rest of the site (working and otherwise), including the entries in this (well, sort of “this”) weblog between 1999 and 2011.

Oh yeah, we had Christmas! That was nice. :) I got lots of chocolate, and the little (not little anymore) boy gave me a digital image of Spennix (my WoW main) dressed like the pioneer in the Satisfactory game, with a perfect “Spennixfactory” logo. And wife and daughter both got me books: “The Hotel Bosphorus” (a murder mystery set in Istanbul, my current Bucket List destination, and involving a bookshop, so what could be better?) from M, and “Klara and the Sun” (which I’ve been meaning to get, but never had) from the little daughter. (She thought that maybe I already had it and that’s why Klara is called “Klara” in the Klara stories, but it was as far as I know a complete coincidence.)

I’m working away at Part Three of Klara, after she leaves the clockwork world, but it’s slow going. I have an actual plot in mind that I want to illustrate, and I’m using a different graphical style which necessitates a different Midjourney workflow that I haven’t quite optimized yet. But it’ll get done! Probably! :)

We close with a Seasonal Image for the Solstice…

A disc with abstract shapes of fir trees, decorations, planets, and whatnot around the edge. In the center a round shape with small spiked protrusions, perhaps the sun, sits atop what may be a tree trunk that projects upward from what may be the ground and some roots at the bottom of the image. Branches stick out of the perhaps-sun, and some stars and planets and a few more enigmatic shapes inhabit the spaces between the branches.

Here’s to the coming of the longer days! Or the cooler ones, to those on the flipside… :)

2022/12/21

Best Buy queueing theory

Single-queue multiple-server is often a pretty optimal way to set up a system; there’s a single potentially large / unending bunch of jobs / customers waiting, and some comparatively small number of servers / staff to take care of them. When a server is free, some job is chosen and the server starts running / serving that job.

When the chosen job / customer is always the one that’s been waiting longest, that’s a FIFO (first-in first-out) queue, known to consumers in the Eastern US as a “line”. It’s easy to implement, sometimes pretty optimal under certain assumptions, and has a sort of “fair” feeling about it.

On the other hand, I have the feeling that when the customer set is highly bimodal, the whole setup might not be entirely optimal in some cases.

For instance, if some of your customers are just buying a 1Gb Ethernet switch (see below) and some Reese’s Peanut Butter Cups using a credit card, and it will take 45-60 seconds, and another set of customers are picking up something that’s being held for them somewhere in the back, and they aren’t quite sure what it is, and want the staff person to explain how to use it, and then want to purchase it using Latvian stock market futures that are actually in their brother-in-law’s name, and that will take 20-90 minutes, then some of those first set of customers are going to end up waiting (in some sense) an unnecessarily long time, waiting for education to complete or a brother-in-law’s marriage certificate to be found in an overcoat pocket.

One could assign a particular server to small jobs, or to small jobs if there are any such waiting, or always let a short job go before a long job if there are any waiting, or unless there’s a large job that’s been waiting more than a certain time, or…

All of these can be implemented in software systems, but most of them are too complicated or unfair-feeling for a Best Buy full of strangers. Allocating one server / staff member / desk to “customer service” (anything involving training, or stock market futures, for instance) and the rest to ordinary purchases is about as complex as it’s practical to implement. They weren’t doing even that at my Best Buy this morning, but then there were only three staff people on registers, and taking one-third of them away from the short-transaction customers might have been bad. Or just no one wanted to bother figuring it out.
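A toy discrete-event simulation makes the comparison concrete (every number here is invented for illustration): we can measure the mean wait of the quick customers under a shared three-register line versus a dedicated express register.

```python
import heapq
import random

def fifo_waits(jobs, servers):
    """Serve (arrival, duration) jobs in arrival order with `servers`
    identical servers; return a list of (duration, wait) pairs."""
    free = [0.0] * servers            # heap: times at which servers free up
    out = []
    for arrival, duration in sorted(jobs):
        t = heapq.heappop(free)       # earliest-available server
        start = max(arrival, t)
        out.append((duration, start - arrival))
        heapq.heappush(free, start + duration)
    return out

def mean_wait(pairs, duration=None):
    """Mean wait, optionally restricted to jobs of one duration."""
    waits = [w for d, w in pairs if duration is None or d == duration]
    return sum(waits) / len(waits)

random.seed(1)
# A bimodal morning: mostly one-minute purchases, some thirty-minute sagas,
# arriving at random over an eight-hour (480-minute) day.
jobs = [(random.uniform(0, 480), 1 if random.random() < 0.8 else 30)
        for _ in range(200)]
short = [j for j in jobs if j[1] == 1]
long_ = [j for j in jobs if j[1] == 30]

# Policy A: one shared line feeding three registers.
print("shared line, short-job wait:", round(mean_wait(fifo_waits(jobs, 3), 1), 1))
# Policy B: one express register for short jobs, two registers for the rest.
print("express lane, short-job wait:", round(mean_wait(fifo_waits(short, 1)), 1))
print("express lane, long-job wait :", round(mean_wait(fifo_waits(long_, 2)), 1))
```

Fiddling with the mix of customers and the number of registers shows the tradeoff: the express lane shields the quick customers from the sagas, at the cost of longer waits for the saga customers whenever their dedicated capacity is too thin.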

Speaking of 1Gb Ethernet switches, I mean, WTF? I know I’m old, but I still think of these as costing thousands (tens of thousands?) of USD, requiring careful tuning, and taking up a significant part of a room (okay, a small room, or at least a rack slot). Now granted that was maybe for one with at least 24 ports and a management interface, but I mean! I can buy one for the price of two large pizzas, and it fits in the palm of my hand? Really? Where are the flying cars then??

A picture of a Netgear 1Gb Ethernet Switch.

That is a picture of a 1Gb Ethernet Switch. Possibly the one that I bought, I forget exactly. Might have been Linksys. Or something.

2022/12/17

This here Framework laptop

Hardware geekery! Extremely softcore hardware geekery, because I am very much not a hardware person. (I was once, about when a 10MB hard drive was an exciting new thing. That was a while back!)

A couple of years ago, I bought a Lenovo laptop, a Legion Y740 I think it was. This was after being rather disappointed by a Dell Alienware something-something laptop previously. After a couple of years (or really rather less than that) I was rather disappointed by the Lenovo Legion Y740 also:

  • A couple of the keys just broke, and turned out to be hard to obtain replacements for (because after a couple of years they were of course old and only obtainable from antiquarian key sellers, and because figuring out exactly what key one needs is more challenging than it ought to be, because not all Legion Y740s apparently use the same up-arrow key), and also hard to replace once one did have the (probably) right key. At least once I managed to break the replacement key while trying to replace the broken key. So I spent lots of time poking at the nib under the key, and that got old (especially for the up-arrow key).
  • It forgot how to talk to an Ethernet cable, in rather complicated ways that I couldn’t figure out: the cable provably worked fine with other devices, and in every connection method to this computer (direct Ethernet, Ethernet-to-USB-A, and Ethernet-to-USB-C), it worked very badly, occasionally working for a bit but then randomly dropping. Hardware, software? Who can tell. “Reinstalling” the “Windows network drivers” made no difference.
  • It began being very confused about its battery. After using it for some time on power, with it announcing the battery charge at 100%, there was a good chance that within a few seconds of being unplugged it would shut down in an emergency fashion (not that Windows seems to know of any other kind), and on being plugged in again elsewhere would claim that the battery was at 0%. Bad battery? Bad power driver? Bad something else? No idea. Also it would sometimes randomly shut down even while plugged in. Battery? Overheating? No idea.
  • And some other stuff I forget; at least one of the USB ports felt very loose and I was never confident that a thing plugged into it would continue working.

And then it started being unable to see the hard drive and/or just randomly failing to boot. So that was bad. (For what it’s worth, both the HDD and the SSD work fine when read via USB from an external enclosure, so probably it was something complicated inside.)

So as not to be entirely limited to my “cellular phone”, I bought a tiny little Samsung Chromebook of some kind, for the price of roughly a dinner delivered by DoorDash, and that was actually surprisingly acceptable. No playing WoW or Satisfactory or Second Life / OpenSim or anything like that, but pretty much everything else worked great, lots of Chrome windows, multiple displays, Zoom, etc. It did slow down when too loaded, but it was able to do more at once than I initially expected it to.

I did want to be able to play games and be virtual again eventually, though, so I was looking in a disconsolate way at various beefier laptops that would probably disappoint me before long, when I (along with lots of other people) came across Cory Doctorow’s “The Framework is the most exciting laptop I’ve ever used: Sustainable, upgradeable, repairable and powerful”, and I (along with lots of other people) thought “Hmmm!”.

I went to the website, used the configurator / designer thing to design a plausible-sounding one, noted that it was noticeably not all that expensive, and probably without thinking about it as hard as I should have, I ordered one. Initially there wasn’t much information about when it might arrive (the ETA was, as I recall, “November” or something like that), since Framework are a comparatively small outfit who have to do things like batching up a bunch of orders and obtaining the hardware and getting the manufacturing done only once they have enough, and like that. But I figured I could get by on the tiny Chromebook for a bit longer.

As it turned out, I got a notice that it was being put together, and then shipped, at the early side of the ETA, and it arrived days before it was predicted to; so that was all nice. The Unboxing Experience was great; it turned out (I’d forgotten this!) that I had ordered the “DIY” version, which meant I had to spend maybe 10 minutes, tops, plugging in the SSD and RAM. (Apparently some DIY instances also have to install the WiFi module, which looks a little more complex, but mine already had it.)

And it works great!

The video is not some fancy AMD or NVIDIA, but just some modern Intel video chipset, which Real Gamers look down upon, but it runs WoW and Satisfactory and the Firestorm viewer and stuff just fine, and that’s all I need. (Firestorm does crash sometimes, and that might be because of the chipset, or something completely different.) The hot-swappable ports are terrific! I do realize that it’s really just like four fast USB-C connections on the motherboard and then a variety of something-to-USB-C adapters that slip in and out of them easily, but the effect is that if you want your computer to have four USB-C connections one minute, and a USB-C, a USB-A, an Ethernet RJ45, and an HDMI or DisplayPort or whatever the next minute, that’s totally doable. (I generally have three USB-C and an RJ45, and don’t swap them to speak of, but the point is that I could.)

Which reminds me to be a little baffled about how whatever-to-USB-C adapters work, and how they can be so small and simple and all. I mean, isn’t there more than just some crossing of wires involved in mapping USB-C signals to Ethernet RJ45? That particular adapter does stick out of the computer more than the others (which are all USB-C to USB-C, so yeah that I understand being simple), and has some circuitry visible inside its rather cool transparent body. But still, the fact that there are relatively simple and relatively cheap wires that can connect USB-C to just about anything impresses me. I guess many of them have little tiny computers inside? And that that’s what the “U” stands for and all? Okay.

It’s quiet (no moving parts to speak of, no HDD, SSDs being so cheap these days, and that must be a very quiet fan in there), it’s slim and light (feels about like the tiny Samsung in terms of heft), it gets hot but not too hot, and it looks nice. Simple and clean and efficient visual design. And it really is designed to be opened up and have parts replaced if they break. (If a key breaks, apparently the theory is that you should replace the keyboard, and that’s a bit wasteful, but at least it’s possible!) And unlike the Samsung, it has a backlit keyboard! (Oh, and an audio jack, too! That’s just chef’s-kiss-dot-bmp.)

The only things I dislike about the Framework at all are (I don’t even remember what I was going to write here), I guess, that I’m running Windows on it? :) Windows 11, in fact, which it came with, and which is annoying in various Windows ways, but livable-with, and WoW and Satisfactory don’t as far as I know run on ChromeOS.

(Possibly there’s some setup that would run Linux, or Linux-under-ChromeOs, and then either Windows under that and then the games, or Linux versions of the games, or something, but I’m not into that level of system fiddling these decades.)

Oh, and the other negative thing is that the WiFi signal is terrible over here where I sit when we’re all sitting around in the big bedroom at night being together and doing our internet things. But that is not the laptop’s fault, and I should just move the access point to a better place or get a repeater / booster / mesh, or just run another wire up from the basement and plug into that. It works well enough.

So I’m happy! I have a small and light and quiet but sufficiently muscular machine which does the things I want and has a good vibe (a great vibe for moving off Twitter and onto the Fediverse at the same time, but that’s for another post). It’s possible that it will wear out quickly, but I’m kind of hopeful. More than I would be with yet another generic supposedly-fancy corporate laptop, anyway.

2022/12/16

Some light infringement?

I think I have said on a few occasions that, for instance, a class-action copyright lawsuit against Copilot might not bear directly on AI art tools like Midjourney, to the extent that Copilot apparently does tend to copy from its training set verbatim (and unattributed) whereas (I thought at the time) Midjourney doesn’t.

Well, it turns out that Midjourney does, maybe, to an extent. For maybe a few works?

The one that’s gotten the most attention is the 1984 photograph of Sharbat Gula by Steve McCurry, popularly known as “Afghan Girl”. The strings “afghan girl” and (haha) “afgan girl” are prohibited in Midjourney prompts at the moment. (“The phrase afghan girl is banned. Circumventing this filter to violate our rules may result in your access being revoked.”) And this is apparently because that phrase all by itself elicits what are arguably just slight variations of the original.

There’s a Twitter post that claims to show this, but I’m not certain enough it’s real to link to it. Also it’s on Twitter. But I can say that entering similar non-banned phrases like “young Afghan woman” also produces images that are at least quite similar to the photo of Gula, more similar than I would have expected. Given the size of the Midjourney training set, that image in association with those words must occur a lot of times!

(Update: it seems likely that the most widely-circulated image purporting to show Midjourney spontaneously generating close copies of the Gula “Afghan Girl” picture, is not actually that: it was made by giving the AI a copy of the original photo (!) and the prompt “afghan girl, digital art”. That the AI can make a copy of a work, given a copy of the work, is no surprise! Evidence, on a link probably usable only if you’re logged into Midjourney, is here. Given the further examples below, this doesn’t entirely undercut the point, but it’s interesting.)

The other example that I know of is “Starry Night”, which brings up variations of the van Gogh piece. This one’s out of copyright :) so I have no qualms about posting what I got:

Four variations on van Gogh's "Starry Night" ("De sterrennacht"), all with the swirly sky, tree to the left, buildings with lighted windows in the background, hills in the distance, crescent moon upper-right, blobby stars, etc.

Pretty obviously derivative in the usual sense. Derivative Work in the legal sense? I have no idea, and copyright law is sufficiently squishy and subjective that there is probably not a correct answer until and unless explicitly litigated, or the legal landscape otherwise changes significantly.

Are there other short phrases that will home in on a particular famous image? “Mona Lisa” (also out of copyright) certainly seems to:

Four variants of the Mona Lisa, all markedly worse than the original, but all very recognizable.

Interesting and/or hideous variations, but still instantly recognizable.

What else might we try? “Migrant Mother” produces images that I think are clearly not derivative works:

Four rather striking monochrome images of a woman and child, in various poses and garments, with variously creepy-looking hands.

Striking perhaps, ruined by the bizarre hands perhaps, in the same general category as the photo by Lange, but clearly of different people, in different positions, and so on. It’s not “plagiarizing” here, at any rate.

What if we tried harder? Let’s explicitly prompt with something like “Migrant Mother photo, Dorothea Lange, 1936”. Whoa, yipes! Is this out of copyright? Well, if not it’s probably Fair Use in this posting anyway, so here:

Four slight variations of the famous Migrant Mother photo, showing a worried-looking woman with a child hiding its face on either side of her.

Definitely derivative, and possibly Derivative. How about “Moon and Half Dome, Ansel Adams, 1960”? Well:

Four pictures showing an oddly-distorted version of Half Dome, a very large moon, and some evergreens. One also has a reflecting body of water in the foreground, another is framed by a stone archway.

This is definitely not the picture that that search will get you in Google Images; if nothing else, the moon is way too large, and the top of Half Dome is a bizarre penguin-bill sort of shape. I’m guessing that this is because there are lots of other Ansel Adams pictures in the training set associated with words like “moon” and “half dome”, and mushing them all together quasi-semantically gives this set. The origin of the penguin-bill I dunno.

Maybe “Beatles Abbey Road cover, crossing the street”?

Crosswalk, front guy in white, roundish car to the left, check. Derivative in various senses, for sure. More specific prompting could presumably increase the exactness.

So I think we’ve established, to the extent of the tiny number of experiments I have the energy to do, that Midjourney (and, I would wager, other AI art tools, mutatis mutandis; I could get a Starry Night easily out of NightCafe, but not a Migrant Mother) can in fact produce images, the production of which arguably violates one or more of the rights of the copyright holder. It is most likely to do it if you explicitly try to do it (giving the most famous name of the image along with the artist and ideally the year and anything else that might help), but can also do it by accident (innocently typing “afghan girl”).

This doesn’t mean that these tools usually or typically do this; the fact that you can get an image out of the tool in a way that plausibly infringes copyright doesn’t mean that other images made with it are also infringing. To use the usual comparison, you can easily violate copyright using Photoshop, but that doesn’t suggest that there aren’t non-infringing uses of Photoshop, nor does it provide evidence that any particular image from Photoshop is infringing.

The easiest way to think about the blocking of “afg{h}an girl” from Midjourney prompts is that they have made a tool, realized that it could be used to violate copyright, and taken action to make it more difficult to use it that way in some cases.

This all bears on the question of whether images made with AI tools violate copyrights; the question of whether making the AI tools in the first place involves an infringing use is a somewhat different question, and we might talk about it some other time, although I’m still feeling kind of burnt out on the legal issues. But I did want to update on this one particular thing.

2022/12/13

Hemingway, by Midjourney

I now have like 190 images in the folder that Windows uses to pick desktop backgrounds from; building on the twenty-odd that I posted here the other day. They are fun! But I’m not going to post any more right now; right now, I’m going to post some images comparing the various Midjourney engines (which they have generously kept all of online). I’m going to use the prompt “Hemingway’s Paris in the rain”, because why not! We can do other prompts some other time.

For most of these (all but “test” and “testp” I think), it produced four images, and I chose one to make bigger. Otherwise (except as noted) these are all just one-shots on that prompt. I’m going to paste them in more or less full-size, and let WordPress do what it will. Clicking on an image might or might not bring up a larger version or something, who knows.

Here is the quite historical v1:

A rather vague but definitely rainy image of Hemingway's Paris in the rain. There is a tall black tower to the left that may be inspired by the Eiffel Tower, but resembles it only vaguely.

Here, similarly, is v2:

Another vague and rainy, perhaps slightly less streaky, image of Hemingway's Paris in the rain. A possible bit of Eiffel Tower inspired tower shows over the buildings to the right.

I rather like both of these; they are impressionistic, which I like, and I suspect it’s mostly because that’s the best they can do in rendering things.

Here is “hd”, which may be the same thing as v1 or v2 I’m not sure; this particular image is more strongly monochrome and sort of vintage-looking photo-wise:

A somewhat blurry and rainy image of an old city square with some people in it, some with umbrellas. Could be Hemingway's Paris; no towers evident.

Now v3, which is pretty much when I started using Midjourney; it’s interesting how impressionistic this is, given that we know v3 can also do rather more realistic stuff (all of this, for instance, was v3):

A rather impressionistic drawing, perhaps in charcoal, with a somewhat Eiffelish tower to the left. Definitely rain, likely Paris.

Between v3 and v4, we had this engine, lyrically named “test” (I used the additional “creative” flag, because why wouldn’t one?); one is getting a bit more photographic here:

A slightly less vague still image of Paris in the rain, black and white, umbrellas, and so on.

and here is the “testp” variant of “test”; the “p” is supposed to be for “photographic”; I used the “creative” flag here also. It’s not notably more photographic than “test” in this case; maybe it’s the rain:

Another rainy city street, monochrome, a few cars, shiny impressionistic pavement, townhouses.

Now brace yourself a bit :) because here is the first version of v4 (technically in terms of the current switches it’s “v 4” and “style 4a”):

A soft-edge realistic painting of a Paris street in the rain, in muted but glowing colors. A few people walking in the distance are vague but convincing shapes. The Eiffel Tower is visible in the distance.

Yeah, that’s quite a difference. We have colors, we have lanterns casting light, we have very definite chairs and awnings and things. But now, the current v4 (“style 4b” which is I think currently the v4 default):

A rather realistic painting of vintage Paris in the rain; a couple of old-style cars on the street, their headlights and the lights of the shops reflecting in the wet pavement. Shopfronts and awnings, people in identifiable clothing. There are words on a couple of the shopfronts, but they are unintelligible: something like PHASESILN for instance.

Yeah, that’s gotten rather realistic, hasn’t it? It’s even trying to spell out the signs on shopfronts, even if it hasn’t really mastered language. But those cars are extremely car-like and detailed compared to anything earlier.

Can this currently-fanciest engine give us something a bit more like the atmosphere of the older ones, if we want that? Basically yes, if we ask for it. Here is the latest v4 again, with “impressionistic” added to the prompt:

Yet another wet rainy city street scene, again in full convincing muted color, but more impressionistic than the last. Again we have people (and hats) and umbrellas and shopfronts, but no attempt at individual letters on signs.

I rather like that! And “monochrome” would make it monochrome, and so on.

It’s perhaps interesting that the more recent engines were less insistent that pictures of Paris include the Eiffel Tower. Possibly just the random number generator, given how tiny our sample is here, but possibly significant in some way.

So there we are, nine probably rather enormous pictures of Hemingway’s Paris in the rain, as conceived by various stages of development of the Midjourney AI, and with only very minimal human fiddling around (picking the prompt and the one to feature from each set of four, having the idea to compare the versions in the first place, and like that) by me.

Comments welcome as always, or just enjoy the bits. :)

2022/12/04

Omelas, Pascal, Roko, and Long-termism

In which we think about some thought experiments. It might get long.

Omelas

Ursula K. Le Guin’s “The Ones Who Walk Away From Omelas” is a deservedly famous very short story. You should read it before you continue here, if you haven’t lately; it’s all over the Internet.

The story first describes a beautiful Utopian city, during its Festival of Summer. After two and a half pages describing what a wise and kind and happy place Omelas is, the nameless narrator reveals one particular additional thing about it: in some miserable basement somewhere in the city, one miserable child is kept in a tiny windowless room, fed just enough to stay starvingly alive, and kicked now and then to make sure they stay miserable.

All of the city’s joy and happiness and prosperity depends, in a way not particularly described, on the misery of this one child. And everyone over twelve years old in the city knows all about it.

On the fifth and last page, we are told that, now and then, a citizen of Omelas will become quiet, and walk away, leaving the city behind forever.

This is a metaphor (ya think?) applicable whenever we notice that the society (or anything else) that we enjoy, is possible only because of the undeserved suffering and oppression of others. It suggests both that we notice this, and that there are alternatives to just accepting it. We can, at least, walk away.

But are those the only choices?

I came across this rather excellent “meme” image on the Fedithing the other day. I can’t find it again now, but it was framed as a political-position chart based on reactions to Omelas, with (something like) leftists at the top, and (something like) fascists at the bottom. “Walk away” was near the top, and things like “The child must have done something to deserve it” nearer the bottom. (Pretty fair, I thought, which is why I’m a Leftist.)

It’s important, though, that “Walk away” wasn’t at the very top. As I recall, the things above it included “start a political movement to free the child”, “organize an armed strike force to free the child”, and “burn the fucking place to the ground” (presumably freeing the child in the process), that latter being at the very top.

But, we might say, continuing the story, Omelas (which is an anagram of “Me also”, although I know of no evidence that Le Guin did that on purpose) has excellent security and fire-fighting facilities, and all of the top three things will require hanging around in Omelas for a greater or lesser period, gathering resources and allies and information and suchlike.

And then one gets to, “Of course, I’m helping the child! We need Councilman Springer’s support for our political / strike force / arson efforts, and the best way to get it is to attend the lovely gala he’s sponsoring tonight! Which cravat do you think suits me more?” and here we are in this quotidian mess.

Pascal

In the case of Omelas, we pretty much know everything involved. We don’t know the mechanism by which the child’s suffering is necessary for prosperity (and that’s another thing to work on fixing, which also requires hanging around), but we do know that we can walk away, we can attack now and lose, or we can gather our forces and hope to make a successful attack in the future. And so on. The criticism, if it can even be called that, of the argument, is that there are alternatives beyond just accepting or walking away.

Pascal’s Wager is a vaguely similar thought experiment in which uncertainty is important; we have to decide in a situation where we don’t know important facts. You can read about this one all over the web, too, but the version we care about here is pretty simple.

The argument is that (A) if the sort of bog-standard view of Christianity is true, then if you believe in God (Jesus, etc.) you will enjoy eternal bliss in Heaven, and if you don’t you will suffer for eternity in Hell, and (B) if this view isn’t true, then whether or not you believe in God (Jesus, etc.) doesn’t really make any difference. Therefore (C) if there is the tiniest non-zero chance that the view is true, you should believe it on purely selfish utilitarian grounds, since you lose nothing if it’s false, and gain an infinite amount if it’s true. More strongly, if the cost of believing it falsely is any finite amount, you should still believe it, since a non-zero probability of an infinite gain has (by simple multiplication) an infinite expected value, which is larger than any finite cost.
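Spelled out, the arithmetic of that last step (just the wager’s own bookkeeping, with p the probability that the bog-standard view is true and c any finite cost of believing falsely):

```latex
E[\text{believe}] = p \cdot (+\infty) + (1 - p) \cdot (-c) = +\infty
\qquad
E[\text{don't believe}] = p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty
```

So for any p > 0, however tiny, believing dominates, granted the two possibilities on offer.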

The main problem with this argument is that, like the Omelas story but more fatally, it offers a false dichotomy. There are infinitely more possibilities than “bog-standard Christianity is true” and “nothing in particular depends on believing in Christianity”. Most relevantly, there are an infinite number of variations on the possibility of a Nasty Rationalist God, who sends people to infinite torment if they believed in something fundamental about the universe that they didn’t have good evidence for, and otherwise rewards them with infinite bliss.

This may seem unlikely, but so does bog-standard Christianity (I mean, come on), and the argument of Pascal’s Wager applies as long as the probability is at all greater than zero.

Taking into account Nasty Rationalist God possibilities (and a vast array of equally useful ones), we now have a situation where both believing and not believing have infinite expected advantages and infinite expected disadvantages, and arguably they cancel out and one is back wanting to believe either what’s true, or what’s finitely useful, and we might as well not have bothered with the whole thing.
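Continuing the toy sketch (again with invented placeholder probabilities): once any Nasty Rationalist God hypothesis has non-zero probability, the expected value of believing picks up both an infinitely good term and an infinitely bad one, and the sum is simply undefined. IEEE floating point arithmetic, pleasingly, agrees:

```python
import math

p_standard = 1e-9   # hypothetical: bog-standard Christianity is true
p_nasty = 1e-9      # hypothetical: the Nasty Rationalist God is real

# Believing now carries an infinite gain term and an infinite loss term;
# inf + (-inf) is not a number at all.
ev_believe = p_standard * float("inf") + p_nasty * float("-inf")

print(ev_believe)              # nan
print(math.isnan(ev_believe))  # True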

Roko

Roko’s Basilisk is another thought experiment that you can read about all over the web. Basically it says that (A) it’s extremely important that a Friendly AI be developed before a Nasty AI is, because otherwise the Nasty AI will destroy humanity, and that has like an infinite negative value given that otherwise humanity might survive and produce utility and cookies forever; (B) since the Friendly AI is Friendly, it will want to do everything possible to make sure it is brought into being before it’s too late, because that is good for humanity; and (C) one of the things it can do to encourage that is to create exact copies of everyone who didn’t work tirelessly to bring it into being, and torture them horribly; therefore (D) it’s going to do that, so you’d better work tirelessly to bring it into being!

Now the average intelligent person will have started objecting somewhere around (B), noting that once the Friendly AI exists, it can’t exactly do anything to make it more likely that it will be created, since that’s already happened, and causality only works, y’know, forward in time.

There is a vast (really vast) body of work by a few people who got really into this stuff, arguing in various ways that the argument does, too, go through. I think it’s all both deeply flawed and sufficiently well-constructed that taking it apart would require more trouble than it’s worth (for me, anyway; you can find various people doing variously good jobs of it, again, all over the InterWebs).

There is a simpler variant of it that the hard-core Basiliskians (definitely not what they call themselves) would probably sneer at, but which kind of almost makes sense, and which is simple enough to express in a way that a normal human can understand without extensive reading. It goes something like: (A) it is extremely important that a Friendly AI be constructed, as above; (B) if people believe that that Friendly AI will do something that they would really strongly prefer that it not do (including perhaps torturing virtual copies of them, or whatever else) unless they personally work hard to build that AI, then they will work harder to build it; (C) if the Friendly AI gets created and then doesn’t do anything that those who didn’t work hard to build it would strongly prefer it didn’t do, then next time there’s some situation like this, people won’t work hard to do the important thing, and therefore whatever it is might not happen, and that would be infinitely bad; and therefore (D) the Friendly AI is justified in doing, even morally required to do, a thing that those who didn’t work really hard to build it would strongly rather it didn’t do (like perhaps the torture etc.). Pour encourager les autres, if you will.

Why doesn’t this argument work? Because, like the two prior examples that presented false dichotomies by leaving out alternatives, it oversimplifies the world. Sure, by retroactively punishing people who didn’t work tirelessly to bring it into being, the Friendly AI might make it more likely that people will do the right thing next time (or, for Basiliskians, that they would have done the right thing in the past, or whatever convoluted form of words applies), but it also might not. It might, for instance, convince people that Friendly AIs and anything like them were a really bad idea after all, and touch off the Butlerian Jihad or… whatever exactly that mess with the Spacers was in Asimov’s books that led to there being no robots anymore (except for that one hiding on the moon). And if the Friendly AI is destroyed by people who hate it because of it torturing lots of simulated people or whatever, the Nasty AI might then arise and destroy humanity, and that would be infinitely bad!

So again we have a Bad Infinity balancing a Good Infinity, and we’re back to doing what seems finitely sensible, and that is surely the Friendly AI deciding not to torture all those simulated people because duh, it’s friendly and doesn’t like torturing people. (There are lots of other ways the Basilisk argument goes wrong, but this seems like the simplest and most obvious and most related to the guiding thought, if any, behind this article here.)

Long-termism

This one is the ripped-from-the-headlines “taking it to the wrong extreme” version of all of this, culminating in something like “it is a moral imperative to bring about a particular future by becoming extremely wealthy, having conferences in cushy venues in Hawai’i, and yes, well, if you insist on asking, also killing anyone who gets in our way, because quadrillions of future human lives depend on it, and they are so important.”

You can read about this also all over the InterThings, but its various forms and thinkings are perhaps somewhat more in flux than the preceding ones, so perhaps I’ll point directly to this one for specificity about exactly which aspect(s) I’m talking about.

The thinking here (to give a summary that may not exactly reflect any particular person’s thinking or writing, but which I hope gives the idea) is that (A) there is a possible future in which there are a really enormous number of people (whatever number you’re thinking of, bigger than that; trillions at least) living lives of positive value, (B) compared to the value of that future, anything that happens to the comparatively tiny number of current people is unimportant, therefore (C) it’s morally permissible, even morally required, to do whatever will increase the likelihood of that future, regardless of the effects on people today. And in addition, (D) because [person making the argument] is extremely smart and devoted to increasing the likelihood of that future, anything that benefits [person making the argument] is good, regardless of its effects on anyone else who exists right now.

It is, that is, a justification for the egoism of billionaires (like just about anything else your typical billionaire says).

Those who have been following along will probably realize the problem immediately: it’s not the case that the only two possible timelines are (I) the one where the billionaires get enough money and power to bring about the glorious future of 10-to-the-power-54 people all having a good time, and (II) the one where billionaires aren’t given enough money, and humanity becomes extinct. Other possibilities include (III) the one where the billionaires get all the money and power, but in doing so directly or indirectly break the spirit of humanity, which as a result becomes extinct, (IV) the one where the billionaires see the light and help do away with capitalism and private property, leading to a golden age which then leads to an amount of joy and general utility barely imaginable to current humans, (V) the one where the billionaires get all the money and power and start creating trillions of simulated people having constant orgasms in giant computers or whatever, and the Galactic Federation swings by and sees what’s going on and says “Oh, yucch!” and exterminates what’s left of humanity, including all the simulated ones, and (VI) so on.

In retrospect, this counterargument seems utterly obvious. The Long-termists aren’t any better than anyone else at figuring out the long-term probabilities of various possibilities, and there’s actually a good reason that we discount future returns: if we start to predict forward more than a few generations, our predictions are, as all past experience shows, really unreliable. Making any decision based solely on things that won’t happen for a hundred thousand years or more, or that assume a complete transformation in humanity or human society, is just silly. And when that decision just happens to be to enrich myself and be ruthless with those who oppose me, everyone else is highly justified in assuming that I’m not actually working for the long-term good of humanity, I’m just an asshole.
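That discounting point can be illustrated with a toy calculation (the numbers here are entirely invented): if each generation of forecasting is even modestly unreliable, confidence in a prediction compounds away to essentially nothing long before a hundred thousand years out:

```python
reliability = 0.95          # hypothetical odds a one-generation prediction holds up
years_per_generation = 25   # hypothetical generation length

def confidence(years):
    """Compounded confidence in a prediction `years` into the future."""
    return reliability ** (years / years_per_generation)

print(confidence(250))      # ~0.6 -- ten generations out, already shaky
print(confidence(100_000))  # effectively zero
```

Even with a quite generous 95% per-generation reliability, the hundred-thousand-year forecast retains no credibility at all, which is rather the point.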

(There are other problems with various variants of long-termism, a notable one being that they’re doing utilitarianism wrong and/or taking it much too seriously. Utilitarianism can be useful for deciding what to do with a given set of people, but it falls apart a bit when applied to deciding which people to have exist. If you use a summation, you find yourself morally obliged to prefer a trillion barely-bearable lives to a billion very happy ones, just because there are more of them. Whereas if you go for the average, you end up being required to kill off unhappy people to get the average up. And a perhaps even more basic message of the Omelas story is that utilitarianism requires us to kick the child, which is imho a reductio. Utilitarian calculus just can’t capture our moral intuitions here.)
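Both failure modes fall out of a few invented numbers:

```python
# Invented utilities: total utilitarianism prefers the huge barely-bearable
# population; average utilitarianism prefers the small very-happy one.
n_meh, u_meh = 10**12, 1.0       # a trillion barely-bearable lives
n_happy, u_happy = 10**9, 100.0  # a billion very happy lives

assert n_meh * u_meh > n_happy * u_happy   # total view: prefer the trillion
assert u_happy > u_meh                     # average view: prefer the billion

# And the average view rewards simply removing the unhappy:
population = [10.0] * 9 + [1.0]              # nine happy people, one unhappy
avg_before = sum(population) / len(population)
survivors = [u for u in population if u > 5]  # "get the average up"
avg_after = sum(survivors) / len(survivors)

print(avg_before, avg_after)  # 9.1 10.0
```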

Coda

And that’s pretty much that essay. :) Comments very welcome in the comments, as always. I decided not to add any egregious pictures. :)

It was a lovely day, I went for a walk in the bright chilliness, and this new Framework laptop is being gratifyingly functional. Attempts to rescue the child from the Omelas basement continue, if slowly. Keep up the work!