Posts tagged ‘artificial intelligence’


The Klara Trilogy is done!

The story of Klara, written by me channeling the Collective Unconscious, illustrated by me using Midjourney, and narrated and set to music and videographed by the talented Karima Hoisan, is finally finished!

I originally thought it was finished at the end of the first forty-frame thing; and then when I did Part Two at about the same length, I thought it was finished; and now having struggled for months on Part Three I’m pretty sure it actually is done. :)

Having just watched Karima’s videos of all three parts in order (playlist here!), I’m glad various viewers convinced me not to stop at one or two parts. It’s pretty good!

And I say this with all modesty; I feel like this story came through me, more than like it is something that I did. The comments over in Karima’s weblog, and her narration, have suggested various meanings and facets to me that I hadn’t thought of before.

In terms of the experience of creating it, it’s been interesting to see the various phases of interaction with the AI tool. I started out Part One by creating various variations of the prompt “detailed surrealism” on the v3 engine on Midjourney, and then weaving the story around pretty much whatever came out.

It happens that in v3, that prompt pretty reliably produces scenes from a stylistically coherent universe, including the MJ Girl, who plays the part of Klara in the first two parts. In Part Two, I had a bit more of an idea of what I wanted to happen, in a general way, but continued using v3 and the same prompt. This required somewhat more work, because it would produce images that didn’t fit with the story I wanted, so I had to put those aside and make more. But the style was at least not much trouble.

Part Three was quite different. For plot reasons, being in basically a different reality, the style needed to be different. It was relatively easy to do that, by using the “test” and “testp” engines, either alone or by “remastering” images made under v3. But the resulting images, while different from those of the first two parts, weren’t nearly as consistent among themselves as those of parts one and two. So I had to play around a lot more with the workflows and the prompts, and produce quite a few more pictures, to get a reasonably consistent style.

The style of Part Three still shifts around quite a bit; the flavor of the city, the color of Klara’s hair, the cat’s fur, and many other things change somewhat from panel to panel, but I wanted a nice mixture of consistent and in flux; and that took work!

Then there was the Story issue. The beginning “recap” part of Part Three was relatively easy that way, summarizing the story of the first two parts from a different point of view. But then I quickly got stuck; I wanted to do something more satisfying and less random than I would get by letting the AI’s raw output drive the action. For whatever reason, it took me quite a while to find the story thread that I liked, and then about as long to create (or obtain, if you prefer!) the images to go with it.

(The images still drove the narrative to some extent; for instance the firefly line, which I adore, was inspired by the image that goes with it, not vice-versa.)

But finally I finished! :) And Karima made the video in record time, and there it is! Woooo!

I keep feeling like I should make it into good PDFs, or something (even) more readable, and officially post links to that; maybe even have it printed somewhere onto atoms. On the other hand, without the narrative and music and video, it would hardly be the same… :)


Little Imaginary Diagrams

I asked Midjourney for some simple proofs of the Pythagorean Theorem. The results make me happy. :)

(On the text side: GPT-2 and even GPT-3 might have hallucinated something interesting. ChatGPT would just error out a few times and then give a boring literal description of one in a condescending tone. My ability to be interested in ChatGPT as an interaction partner is severely limited by how boring it is. But anyway, back to the pictures!)

Presented without comment (beyond the alt text):

A geometric diagram with various lines and colored areas and illegible labels (some of which may be small integers). Amusingly, there do not appear to be any right triangles.
A geometric diagram with various lines and colored areas and labels. Some labels are illegible, but there is an 8, a 3, a 4, and a few 1's. Some of the colored areas contain brick patterns, and there is a random architectural arch and a few other map-like textures thrown in.
A comparatively simple geometric diagram of lines and colored areas. There is a right triangle labeled E textured in a pebbly pattern, a rectangle labeled with a G and an unfamiliar glyph, and various areas with fine blue stripes.
A relatively modern-looking flat geometrical diagram containing three triangles (two of them right triangles) in gradients of different colors, a large grey striped area, and various lines. There are labels that look vaguely numeric, but are basically unreadable.

I hope you find these at least as amusing, endearing, and/or thought-provoking as I do. :)


Some light infringement?

I think I have said on a few occasions that for instance a class-action copyright lawsuit against Copilot might not bear directly on AI art tools like Midjourney, to the extent that Copilot apparently does tend to copy from its training set verbatim (and unattributed) whereas (I thought at the time) Midjourney doesn’t.

Well, it turns out that Midjourney does, maybe, to an extent. For maybe a few works?

The one that’s gotten the most attention is the 1984 photograph of Sharbat Gula by Steve McCurry, popularly known as “Afghan Girl”. The strings “afghan girl” and (haha) “afgan girl” are prohibited in Midjourney prompts at the moment. (“The phrase afghan girl is banned. Circumventing this filter to violate our rules may result in your access being revoked.”) And this is apparently because that phrase all by itself elicits what are arguably just slight variations of the original.

There’s a Twitter post that claims to show this, but I’m not certain enough it’s real to link to it. Also it’s on Twitter. But I can say that entering similar non-banned phrases like “young Afghan woman” also produces images that are at least quite similar to the photo of Gula, more similar than I would have expected. Given the size of the Midjourney training set, that image in association with those words must occur a lot of times!

(Update: it seems likely that the most widely-circulated image purporting to show Midjourney spontaneously generating close copies of the Gula “Afghan Girl” picture, is not actually that: it was made by giving the AI a copy of the original photo (!) and the prompt “afghan girl, digital art”. That the AI can make a copy of a work, given a copy of the work, is no surprise! Evidence, on a link probably usable only if you’re logged into Midjourney, is here. Given the further examples below, this doesn’t entirely undercut the point, but it’s interesting.)

The other example that I know of is “Starry Night”, which brings up variations of the van Gogh piece. This one’s out of copyright :) so I have no qualms about posting what I got:

Four variations on van Gogh's "Starry Night" ("De sterrennacht"), all with the swirly sky, tree to the left, buildings with lighted windows in the background, hills in the distance, crescent moon upper-right, blobby stars, etc.

Pretty obviously derivative in the usual sense. Derivative Work in the legal sense? I have no idea, and copyright law is sufficiently squishy and subjective that there is probably not a correct answer until and unless explicitly litigated, or the legal landscape otherwise changes significantly.

Are there other short phrases that will home in on a particular famous image? “Mona Lisa” (also out of copyright) certainly seems to:

Four variants of the Mona Lisa, all markedly worse than the original, but all very recognizable.

Interesting and/or hideous variations, but still instantly recognizable.

What else might we try? “Migrant Mother” produces images that I think are clearly not derivative works:

Four rather striking monochrome images of a woman and child, in various poses and garments, with variously creepy-looking hands.

Striking perhaps, ruined by the bizarre hands perhaps, in the same general category as the photo by Lange, but clearly of different people, in different positions, and so on. It’s not “plagiarizing” here, at any rate.

What if we tried harder? Let’s explicitly prompt with something like “Migrant Mother photo, Dorothea Lange, 1936”. Whoa, yipes! Is this out of copyright? Well, if not it’s probably Fair Use in this posting anyway, so here:

Four slight variations of the famous Migrant Mother photo, showing a worried-looking woman with a child hiding its face on either side of her.

Definitely derivative, and possibly Derivative. How about “Moon and Half Dome, Ansel Adams, 1960”? Well:

Four pictures showing an oddly-distorted version of Half Dome, a very large moon, and some evergreens. One also has a reflecting body of water in the foreground, another is framed by a stone archway.

This is definitely not the picture that that search will get you in Google Images; if nothing else, the moon is way too large, and the top of Half Dome is a bizarre penguin-bill sort of shape. I’m guessing that this is because there are lots of other Ansel Adams pictures in the training set associated with words like “moon” and “half dome”, and mushing them all together quasi-semantically gives this set. The origin of the penguin-bill I dunno.

Maybe “Beatles Abbey Road cover, crossing the street“?

Crosswalk, front guy in white, roundish car to the left, check. Derivative in various senses, for sure. More specific prompting could presumably increase the exactness.

So I think we’ve established, to the extent of the tiny number of experiments I have the energy to do, that Midjourney (and, I would wager, other AI art tools, mutatis mutandis; I could get a Starry Night easily out of NightCafe, but not a Migrant Mother) can in fact produce images, the production of which arguably violates one or more of the rights of the copyright holder. It is most likely to do it if you explicitly try to do it (giving the most famous name of the image along with the artist and ideally the year and anything else that might help), but can also do it by accident (innocently typing “afghan girl”).

This doesn’t mean that these tools usually or typically do this; the fact that you can get an arguably infringing image out of a tool doesn’t mean that other images made with it are also infringing. To use the usual comparison, you can easily violate copyright using Photoshop, but that doesn’t suggest that there aren’t non-infringing uses of Photoshop, nor does it provide evidence that any particular image from Photoshop is infringing.

The easiest way to think about the blocking of “afg{h}an girl” from Midjourney prompts is that they have made a tool, realized that it could be used to violate copyright, and taken action to make it more difficult to use it that way in some cases.

This all bears on the question of whether images made with AI tools violate copyrights; the question of whether making the AI tools in the first place involves an infringing use is a somewhat different question, and we might talk about it some other time, although I’m still feeling kind of burnt out on the legal issues. But I did want to update on this one particular thing.


Hemingway, by Midjourney

I now have like 190 images in the folder that Windows uses to pick desktop backgrounds from; building on the twenty-odd that I posted here the other day. They are fun! But I’m not going to post any more right now; right now, I’m going to post some images comparing the various Midjourney engines (which they have generously kept all of online). I’m going to use the prompt “Hemingway’s Paris in the rain”, because why not! We can do other prompts some other time.

For most of these (all but “test” and “testp” I think), it produced four images, and I chose one to make bigger. Otherwise (except as noted) these are all just one-shots on that prompt. I’m going to paste them in more or less full-size, and let WordPress do what it will. Clicking on an image might or might not bring up a larger version, or something; who knows.

Here is the quite historical v1:

A rather vague but definitely rainy image of Hemingway's Paris in the rain. There is a tall black tower to the left that may be inspired by the Eiffel Tower, but resembles it only vaguely.

Here, similarly, is v2:

Another vague and rainy, perhaps slightly less streaky, image of Hemingway's Paris in the rain. A possible bit of Eiffel Tower inspired tower shows over the buildings to the right.

I rather like both of these; they are impressionistic, which I like, and I suspect it’s mostly because that’s the best they can do in rendering things.

Here is “hd”, which may be the same thing as v1 or v2 I’m not sure; this particular image is more strongly monochrome and sort of vintage-looking photo-wise:

A somewhat blurry and rainy image of an old city square with some people in it, some with umbrellas. Could be Hemingway's Paris; no towers evident.

Now v3, which is pretty much when I started using Midjourney; it’s interesting how impressionistic this is, given that we know v3 can also do rather more realistic stuff (all of this, for instance, was v3):

A rather impressionistic drawing, perhaps in charcoal, with a somewhat Eiffelish tower to the left. Definitely rain, likely Paris.

Between v3 and v4, we had this engine, lyrically named “test” (I used the additional “creative” flag, because why wouldn’t one?); one is getting a bit more photographic here:

A slightly less vague still image of Paris in the rain, black and white, umbrellas, and so on.

and here is the “testp” variant of “test”; the “p” is supposed to be for “photographic”; I used the “creative” flag here also. It’s not notably more photographic than “test” in this case; maybe it’s the rain:

Another rainy city street, monochrome, a few cars, shiny impressionistic pavement, townhouses.

Now brace yourself a bit :) because here is the first version of v4 (technically in terms of the current switches it’s “v 4” and “style 4a”):

A soft-edge realistic painting of a Paris street in the rain, in muted but glowing colors. A few people walking in the distance are vague but convincing shapes. The Eiffel Tower is visible in the distance.

Yeah, that’s quite a difference. We have colors, we have lanterns casting light, we have very definite chairs and awnings and things. But now, the current v4 (“style 4b” which is I think currently the v4 default):

A rather realistic painting of vintage Paris in the rain; a couple of old-style cars on the street, their headlights and the lights of the shops reflecting in the wet pavement. Shopfronts and awnings, people in identifiable clothing. There are words on a couple of the shopfronts, but they are unintelligible: something like PHASESILN for instance.

Yeah, that’s gotten rather realistic, hasn’t it? It’s even trying to spell out the signs on shopfronts, even if it hasn’t really mastered language. But those cars are extremely car-like and detailed compared to anything earlier.

Can this currently-fanciest engine give us something a bit more like the atmosphere of the older ones, if we want that? Basically yes, if we ask for it. Here is the latest v4 again, with “impressionistic” added to the prompt:

Yet another wet rainy city street scene, again in full convincing muted color, but more impressionistic than the last. Again we have people (and hats) and umbrellas and shopfronts, but no attempt at individual letters on signs.

I rather like that! And “monochrome” would make it monochrome, and so on.

It’s perhaps interesting that the more recent engines were less insistent that pictures of Paris include the Eiffel Tower. Possibly just the random number generator, given how tiny our sample is here, but possibly significant in some way.

So there we are, nine probably rather enormous pictures of Hemingway’s Paris in the rain, as conceived by various stages of development of the Midjourney AI, and with only very minimal human fiddling around (picking the prompt and the one to feature from each set of four, having the idea to compare the versions in the first place, and like that) by me.

Comments welcome as always, or just enjoy the bits. :)


Omelas, Pascal, Roko, and Long-termism

In which we think about some thought experiments. It might get long.


Ursula K. Le Guin’s “The Ones Who Walk Away From Omelas” is a deservedly famous very short story. You should read it before you continue here, if you haven’t lately; it’s all over the Internet.

The story first describes a beautiful Utopian city, during its Festival of Summer. After two and a half pages describing what a wise and kind and happy place Omelas is, the nameless narrator reveals one particular additional thing about it: in some miserable basement somewhere in the city, one miserable child is kept in a tiny windowless room, fed just enough to stay starvingly alive, and kicked now and then to make sure they stay miserable.

All of the city’s joy and happiness and prosperity depends, in a way not particularly described, on the misery of this one child. And everyone over twelve years old in the city knows all about it.

On the fifth and last page, we are told that, now and then, a citizen of Omelas will become quiet, and walk away, leaving the city behind forever.

This is a metaphor (ya think?) applicable whenever we notice that the society (or anything else) that we enjoy, is possible only because of the undeserved suffering and oppression of others. It suggests both that we notice this, and that there are alternatives to just accepting it. We can, at least, walk away.

But are those the only choices?

I came across this rather excellent “meme” image on the Fedithing the other day. I can’t find it again now, but it was framed as a political-position chart based on reactions to Omelas, with (something like) leftists at the top, and (something like) fascists at the bottom. “Walk away” was near the top, and things like “The child must have done something to deserve it” nearer the bottom. (Pretty fair, I thought, which is why I’m a Leftist.)

It’s important, though, that “Walk away” wasn’t at the very top. As I recall, the things above it included “start a political movement to free the child”, “organize an armed strike force to free the child”, and “burn the fucking place to the ground” (presumably freeing the child in the process), that latter being at the very top.

But, we might say, continuing the story, Omelas (which is an anagram of “Me also”, although I know of no evidence that Le Guin did that on purpose) has excellent security and fire-fighting facilities, and all of the top three things will require hanging around in Omelas for a greater or lesser period, gathering resources and allies and information and suchlike.

And then one gets to, “Of course, I’m helping the child! We need Councilman Springer’s support for our political / strike force / arson efforts, and the best way to get it is to attend the lovely gala he’s sponsoring tonight! Which cravat do you think suits me more?” and here we are in this quotidian mess.


In the case of Omelas, we pretty much know everything involved. We don’t know the mechanism by which the child’s suffering is necessary for prosperity (and that’s another thing to work on fixing, which also requires hanging around), but we do know that we can walk away, we can attack now and lose, or we can gather our forces and hope to make a successful attack in the future. And so on. The criticism, if it can even be called that, of the argument, is that there are alternatives beyond just accepting or walking away.

Pascal’s Wager is a vaguely similar thought experiment in which uncertainty is important; we have to decide in a situation where we don’t know important facts. You can read about this one all over the web, too, but the version we care about here is pretty simple.

The argument is that (A) if the sort of bog-standard view of Christianity is true, then if you believe in God (Jesus, etc.) you will enjoy eternal bliss in Heaven, and if you don’t you will suffer for eternity in Hell, and (B) if this view isn’t true, then whether or not you believe in God (Jesus, etc.) doesn’t really make any difference. Therefore (C) if there is the tiniest non-zero chance that the view is true, you should believe it on purely selfish utilitarian grounds, since you lose nothing if it’s false, and gain an infinite amount if it’s true. More strongly, if the cost of believing it falsely is any finite amount, you should still believe it, since a non-zero probability of an infinite gain has (by simple multiplication) an infinite expected value, which is larger than any finite cost.

The main problem with this argument is that, like the Omelas story but more fatally, it offers a false dichotomy. There are infinitely more possibilities than “bog-standard Christianity is true” and “nothing in particular depends on believing in Christianity”. Most relevantly, there are an infinite number of variations on the possibility of a Nasty Rationalist God, who sends people to infinite torment if they believed in something fundamental about the universe that they didn’t have good evidence for, and otherwise rewards them with infinite bliss.

This may seem unlikely, but so does bog-standard Christianity (I mean, come on), and the argument of Pascal’s Wager applies as long as the probability is at all greater than zero.

Taking into account Nasty Rationalist God possibilities (and a vast array of equally useful ones), we now have a situation where both believing and not believing have infinite expected advantages and infinite expected disadvantages, and arguably they cancel out and one is back wanting to believe either what’s true, or what’s finitely useful, and we might as well not have bothered with the whole thing.
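For the arithmetic-minded, the way the two infinities neutralize each other can be sketched in a few lines of Python. The probabilities and payoffs here are entirely made-up placeholders, of course; the point is just that IEEE floating-point happens to model the relevant rules about infinity rather nicely:

```python
import math

p = 1e-12              # any nonzero probability that bog-standard Christianity is true
heaven = float("inf")  # infinite payoff for believing, if it is
cost = 1000.0          # some finite cost of believing, if it isn't

# Pascal's move: a nonzero chance of an infinite gain swamps any finite cost
ev_believe = p * heaven - (1 - p) * cost
print(ev_believe)  # inf

# But grant a Nasty Rationalist God its own nonzero probability q,
# and not-believing also has infinite expected value:
q = 1e-15
ev_disbelieve = q * heaven
print(ev_disbelieve)  # inf

# Comparing the two is now meaningless: inf minus inf is undefined
print(math.isnan(ev_believe - ev_disbelieve))  # True
```

Which is just the cancellation argument above in numeric form: once both columns of the wager contain an infinity, the subtraction tells you nothing.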


Roko’s Basilisk is another thought experiment that you can read about all over the web. Basically it says that (A) it’s extremely important that a Friendly AI is developed before a Nasty AI is, because otherwise the Nasty AI will destroy humanity and that has like an infinite negative value given that otherwise humanity might survive and produce utility and cookies forever, and (B) since the Friendly AI is Friendly, it will want to do everything possible to make sure it is brought into being before it’s too late because that is good for humanity, and (C) one of the things that it can do to encourage that, is to create exact copies of everyone that didn’t work tirelessly to bring it into being, and torture them horribly, therefore (D) it’s going to do that, so you’d better work tirelessly to bring it into being!

Now the average intelligent person will have started objecting somewhere around (B), noting that once the Friendly AI exists, it can’t exactly do anything to make it more likely that it will be created, since that’s already happened, and causality only works, y’know, forward in time.

There is a vast (really vast) body of work by a few people who got really into this stuff, arguing in various ways that the argument does, too, go through. I think it’s all both deeply flawed and sufficiently well-constructed that taking it apart would require more trouble than it’s worth (for me, anyway; you can find various people doing variously good jobs of it, again, all over the InterWebs).

There is a simpler variant of it that the hard-core Basiliskians (definitely not what they call themselves) would probably sneer at, but which kind of almost makes sense, and which is simple enough to express in a way that a normal human can understand without extensive reading. It goes something like (A) it is extremely important that a Friendly AI be constructed, as above, (B) if people believe that that Friendly AI will do something that they would really strongly prefer that it not do (including perhaps torturing virtual copies of them, or whatever else), unless they personally work hard to build that AI, then they will work harder to build it, (C) if the Friendly AI gets created and then doesn’t do anything that those who didn’t work hard to build it would strongly prefer it didn’t do, then next time there’s some situation like this, people won’t work hard to do the important thing, and therefore whatever it is might not happen, and that would be infinitely bad, and therefore (D) the Friendly AI is justified in doing, even morally required to do, a thing that those who didn’t work really hard to build it, would strongly rather it didn’t do (like perhaps the torture etc.). Pour encourager les autres, if you will.

Why doesn’t this argument work? Because, like the two prior examples that presented false dichotomies by leaving out alternatives, it oversimplifies the world. Sure, by retroactively punishing people who didn’t work tirelessly to bring it into being, the Friendly AI might make it more likely that people will do the right thing next time (or, for Basiliskians, that they would have done the right thing in the past, or whatever convoluted form of words applies), but it also might not. It might, for instance, convince people that Friendly AIs and anything like them were a really bad idea after all, and touch off the Butlerian Jihad or… whatever exactly that mess with the Spacers was in Asimov’s books that led to there being no robots anymore (except for that one hiding on the moon). And if the Friendly AI is destroyed by people who hate it because of it torturing lots of simulated people or whatever, the Nasty AI might then arise and destroy humanity, and that would be infinitely bad!

So again we have a Bad Infinity balancing a Good Infinity, and we’re back to doing what seems finitely sensible, and that is surely the Friendly AI deciding not to torture all those simulated people because duh, it’s friendly and doesn’t like torturing people. (There are lots of other ways the Basilisk argument goes wrong, but this seems like the simplest and most obvious and most related to the guiding thought, if any, behind this article.)


This one is the ripped-from-the-headlines “taking it to the wrong extreme” version of all of this, culminating in something like “it is a moral imperative to bring about a particular future by becoming extremely wealthy, having conferences in cushy venues in Hawai’i, and yes, well, if you insist on asking, also killing anyone who gets in our way, because quadrillions of future human lives depend on it, and they are so important.”

You can read about this also all over the InterThings, but its various forms and thinkings are perhaps somewhat more in flux than the preceding ones, so perhaps I’ll point directly to this one for specificity about exactly which aspect(s) I’m talking about.

The thinking here (to give a summary that may not exactly reflect any particular person’s thinking or writing, but which I hope gives the idea) is that (A) there is a possible future in which there are a really enormous (whatever you’re thinking, bigger than that) number of (trillions of) people living lives of positive value, (B) compared to the value of that future, anything that happens to the comparatively tiny number of current people is unimportant, therefore (C) it’s morally permissible, even morally required, to do whatever will increase the likelihood of that future, regardless of the effects on people today. And in addition, (D) because [person making the argument] is extremely smart and devoted to increasing the likelihood of that future, anything that benefits [person making the argument] is good, regardless of its effects on anyone else who exists right now.

It is, that is, a justification for the egoism of billionaires (like just about anything else your typical billionaire says).

Those who have been following along will probably realize the problem immediately: it’s not the case that the only two possible timelines are (I) the one where the billionaires get enough money and power to bring about the glorious future of 10-to-the-power-54 people all having a good time, and (II) the one where billionaires aren’t given enough money, and humanity becomes extinct. Other possibilities include (III) the one where the billionaires get all the money and power, but in doing so directly or indirectly break the spirit of humanity, which as a result becomes extinct, (IV) the one where the billionaires see the light and help do away with capitalism and private property, leading to a golden age which then leads to an amount of joy and general utility barely imaginable to current humans, (V) the one where the billionaires get all the money and power and start creating trillions of simulated people having constant orgasms in giant computers or whatever, and the Galactic Federation swings by and sees what’s going on and says “Oh, yucch!” and exterminates what’s left of humanity, including all the simulated ones, and (VI) so on.

In retrospect, this counterargument seems utterly obvious. The Long-termists aren’t any better than anyone else at figuring out the long-term probabilities of various possibilities, and there’s actually a good reason that we discount future returns: if we start to predict forward more than a few generations, our predictions are, as all past experience shows, really unreliable. Making any decision based solely on things that won’t happen for a hundred thousand years or more, or that assume a complete transformation in humanity or human society, is just silly. And when that decision just happens to be to enrich myself and be ruthless with those who oppose me, everyone else is highly justified in assuming that I’m not actually working for the long-term good of humanity, I’m just an asshole.

(There are other problems with various variants of long-termism, a notable one being that they’re doing utilitarianism wrong and/or taking it much too seriously. Utilitarianism can be useful for deciding what to do with a given set of people, but it falls apart a bit when applied to deciding which people to have exist. If you use a summation you find yourself morally obliged to prefer a trillion barely-bearable lives to a billion very happy ones, just because there are more of them. Whereas if you go for the average, you end up being required to kill off unhappy people to get the average up. And a perhaps even more basic message of the Omelas story is that utilitarianism requires us to kick the child, which is imho a reductio. Utilitarian calculus just can’t capture our moral intuitions here.)
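To make the sum-versus-average point concrete, here is a tiny Python sketch. The utility numbers are entirely invented for illustration; any positive-but-small versus large values would do:

```python
# Total utilitarianism: just sum everyone's utility.
n_meh, u_meh = 10**12, 0.5        # a trillion lives barely worth living
n_happy, u_happy = 10**9, 100.0   # a billion very happy lives

total_meh = n_meh * u_meh         # 5e11
total_happy = n_happy * u_happy   # 1e11
print(total_meh > total_happy)    # True: the huge miserable population "wins"

# Average utilitarianism: divide by the population size instead.
population = [100.0, 100.0, 5.0]              # two happy people, one unhappy one
avg_before = sum(population) / len(population)
avg_after = sum(population[:2]) / 2           # "remove" the unhappy person
print(avg_after > avg_before)                 # True: killing them raised the average
```

Both aggregation rules give the counterintuitive answer, just in different directions; that is the reductio.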


And that’s pretty much that essay. :) Comments very welcome in the comments, as always. I decided not to add any egregious pictures. :)

It was a lovely day, I went for a walk in the bright chilliness, and this new Framework laptop is being gratifyingly functional. Attempts to rescue the child from the Omelas basement continue, if slowly. Keep up the work!


Free Desktop Wallpapers!

Haha, what a great title.

But yes, in fact I’ve been using good ol’ Midjourney to make some wallpapers, and figured out how to get Windows to permute among them as desktop backgrounds on this brand-new Framework laptop I have (I should write a long boring geeky entry about my old Windows laptop breaking and my replacing it with this lovely new thing whose only disadvantage is that I’m still running Windows on it ewww), and I thought I would share them here as the first of the promised (or threatened) posts with tons of images made with Midjourney.

I think I will just do it as a big WordPress Gallery thing? Which means WordPress will I dunno display them in some random layout, but I hope you can still get the actual images at full size by clicking through and rightclick-saving? Or whatever?


A Saturday Morning in November

Midjourney V4 (well, an “alpha” version thereof) is out! As if I didn’t already have enough to play with.

That house, floating above the sea with some balloons and things, is typical of the results of my old favorite “detailed surrealism” prompt. And this:

is from the prompt “neutral prompt”. We can tentatively conclude that it likes cute fantasy houses. :)

Here is a v4 (alpha) Yeni Cavan scene:

which is pretty cool.

In other news, I’m over 8500 words into NaNoWriMo 2022 as of yesterday (I haven’t written anything yet today). I’ve also made a cover page for the book, which links to the first Fling, and each Fling links to the next, so you can start at the cover, and go through the whole thing in the right order by just clicking obvious things. This may partially atone for posting it as a bunch of weblog entries in the first place. :)

I made the cover image in (obviously) Midjourney, and then fiddled a little and put on some titles (and my Government Name!) in the GIMP. I note that the skills of professional cover designers are subtle and profound; the titles on my cover are obviously in the wrong place, a professional designer would put them in places that were so obviously in the right place that one wouldn’t even notice, and I have no idea what makes the difference.

Okay! Now I am off to make the header image for Fling Seven, and start writing. I think it will be more of Alissa’s story.


Weirdness from the Copyright Office

A quickish update. I have said, and still believe, that things created using AI tools are just like anything else with respect to copyright. But recent events remind me that the Copyright Office is made up of people, and people are unpredictable, and US Copyright law is in many places a squashy mess made up of smaller squashy messes, so logic does not always apply.

Here is a currently-relevant set of data points:

  • I have registered the copyright on an image I made using MidJourney. I didn’t mention that I used MidJourney (or Chrome, or Windows) on the application form, because there was no place to put that; the form didn’t ask. The application for registration was granted routinely, without any complication.
    • I imagine there are hundreds / thousands of similar registrations from other people.
  • This person has registered the copyright on a work that they made using MidJourney (I think it was), and the work itself makes it clear that MidJourney was used. The application was afaik granted routinely, without any complication.
    • But now it appears that the copyright office has said “oh wait we didn’t notice that MidJourney thing, so we’re cancelling your registration”.
    • And the person is appealing, apparently with the help of MidJourney themselves. (Hm, they’ve also apparently deleted some of their tweets on the subject; lawyer’s advice perhaps.)
  • This person has apparently applied to register various images made with various workflows involving AI (dalle2 I think) to various extents, clearly stated, and rather than being simply accepted or rejected they’ve received emails from the copyright office asking them for details of what they did, and (especially bizarrely) suggesting that perhaps at least one of the works might have been “conceived” by the AI.
    • Which seems crazy, because the Copyright Office has generally had the opinion that software isn’t creative, and can’t (like) conceive things.

I suspect that things are just rather in disarray at the Copyright Office, and different examiners are doing different things, perhaps having gotten different memos on the subject, or just having their own different opinions about things. It will be interesting to see how the appeal mentioned above goes!

To me, it seems obvious that things created with AI tools should be prima facie registerable with the copyright office, just like photographs presumably are, and if someone wants to challenge based on some legal theory about either lack of creativity or derivative works or whatever, they can do that. The copyright office itself, I would think, would want to stay far away from any situation where they have to somehow evaluate themselves how many units of creativity are in each of the kazillions of applications they get daily.

On the other hand, the Copyright Office could simply issue some sort of guidance saying “We won’t register copyrights on works created with the significant use of an AI tool like dalle or MidJourney, so don’t bother asking” (and could even update the forms to have a question about it).

I think that would be dumb, and lead to court cases eventually that would either overturn that or at least cause a great deal of faffing about that they could have avoided.

But then people and government offices do dumb stuff all the time, so who knows! All is in flux…

And here is an image that I made using Midjourney. No matter what the Copyright Office thinks today. :)


Figure Three

Another “fun corners of the AI’s network” post. These are all pretty much unfiltered and unretried and unmodified results with the prompt “figure three” with the current “test” or “testp” engine (v4 said to be coming soon!) on MidJourney. I have no comment except that I find them all wonderful. :)

(There are, typically, various women’s faces, and perhaps the word “figure” got us more sort-of-bodies than we would have gotten otherwise?)


Klara, Part Two

Have you noticed that sometimes one person is much more productive than another? :)

Due to my skilled collaborator on the first Klara video being one of those much more productive (than me) people, there is now a Part Two of Klara’s story, and that Part Two exists in the form of another amazing video on the You Tube!

detailed surrealism

Here is Karima’s post on the subject, and here is a direct pointer to the video itself (don’t forget to Like and Subscribe!). Images by me using MidJourney and the GIMP, words by me, voicing and everything else by Karima.

Given my comparatively relaxed productivity :) I may or may not put the largish (or even an edited smallish) pdf of Part Two up somewhere. Perhaps arranged with the one for Part One, in some organized way!

This is the end of Klara’s story for now, but one never knows; she may appear again, for Further Adventures, on other days. :)

I am still creating hundreds and hundreds of images; over fourteen thousand all together, MidJourney tells me. And NightCafe says I’ve done another “4.5K+” there. A handful in dalle2. Lots and lots in NovelAI because it is so fast, but it also doesn’t retain them or give any kind of count, so I don’t know! But let’s say around twenty thousand altogether. Rather a lot!

November is approaching, and I have no real idea what I might do NaNoWriMo-wise. Will I use Klara’s story in some way? Will I use MidJourney images? NovelAI words? Or just type a lot? Stay tuned! :) And enjoy these lovely videos in the meantime…


Klara by Dale Innis & Karima Hoisan

Well, this is just too much fun. :) Very good Second Life friend and collaborator Karima Hoisan liked the little Klara piece so much that she voiced it and set it to the perfect music and made it into a rather wonderful YouTube! Definitely more accessible :) and more of an experience this way than the 327MB pdf file. Wooot!

Digital Rabbit Hole

Very excited to share with you all, this off-beat, pretty long (almost 10 minutes) surreal video collaboration with Dale Innis
Those of you who read me regularly, know that Dale Innis is a scripter friend who has collaborated with me and also with Natascha & me for the last 10 years and lately has been dabbling in all sorts of AI Art, especially MidJourney, which is a veritable game-changer in this blossoming field.
He showed me a pdf file of slides and a story-line, that he had made and I fell in love…fell obsessed, is a better word, to try to bring this to a way more people could see it.
This is how the project was born. I found, what we both agree, is the perfect music   Meditative Music and I made a voice-over and edited the slides into what you’ll see below.
This is a very slow-…



Klara’s Story (Part One)

So after I did “Ice Dreams” (50M pdf), as casually announced here, I did another graphic novel (to the extent that that phrase fits at all), or the first part of one, in a very different style and by a very different process.

For “Klara’s Story” (working title), I generated two-by-two grids of Midjourney images using the prompt “detailed surrealism” (a favorite of mine) and some variants thereof, and crafted some sort of story around the images (rather than using the AI to create images for a more-or-less known story).

I haven’t yet had the patience to pare it down at all, so here is the current like 327M pdf draft.

The huge size does make it a bit awkward and slow to deal with, but… there it is!


More Visions of Yeni Cavan

I first found Yeni Cavan as a story and art venue, based on a bunch of words used as prompts in the pre-Stable Diffusion NightCafe, way back in February. Since then I’ve tried to find it in various other engines and things, casually and without much luck. But after playing with the engine flows and prompts and things some, here are some images from MidJourney that I rather like; sufficiently Yeni Cavanish, I’d say, although so far I miss the little random patches of bright purple neon and such. (Maybe I’ll try some of the other venues as well eventually.)

Yeni Cavan; interior room (image started in the –hd engine)
Yeni Cavan; room interior (love the comfy couch with the … circuit board? sitting on it)
Yeni Cavan; room interior (I’d like to be there yes)
Yeni Cavan; room interior (pure v3 I think)
Yeni Cavan; room interior (pure –hd I think; intricate!)
Yeni Cavan; detailed surrealism (whee!)
Yeni Cavan; adorable surreal bots
Yeni Cavan; more detailed surrealism!
Yeni Cavan; upstanding citizen
Yeni Cavan; City Waterfront

AI Art and Copyright some more

I am losing track of the number of AI-based image-creation tools I have access to now. It’s not that huge a number, but it’s complicated! :) There’s at least:

  • good old ArtBreeder, which I haven’t used in ages, and which seems to have a potentially interesting new mode where you sketch a thing with a few shapes, and then type text telling the AI what to make it into,
  • MidJourney with the old V3 engine and the newer and lyrically named ‘test’ and ‘testp’ engines and mixmashes of those,
  • NightCafe, which was my main goto image tool for quite some weeks, with the old Artistic and Coherent engines, but now also the new Stable Diffusion (SD) based “Stable” engine, and various workflows among those,
  • NovelAI which now does images as well as text; the images are also in a Discord bot, and it’s really fast; it uses some heuristic smut-blurrer (maybe just the standard SD one?) but the devs sort of promise they will eventually move it off of discord and then have few or no restrictions (similarly to their text generator),
  • and now I discover that I have access to Dall-E also, from OpenAI, which I have just barely begun to use (detailed surrealism).

The “you can’t copyright art made with AIs” meme seems to have withered (which is good since it’s not true, although nothing is certain), but my experiment to gather additional evidence against it has finally borne fruit (months before I expected it to, really): I have now registered my copyright in this masterpiece of mine:

A blonde porcelain doll and a worn teddy bear sit on a trunk, in a musty attic in light from the window

with the real actual US Copyright Office, who have sent me a real actual certificate testifying to it. The registration can also be found on the web (you have to go to that page and then search on Registration Number for “VA0002317843”; I have yet to find a permalink that persists, bizarrely).

I did it through LegalZoom rather than myself; it cost more (I think), but I was more confident that I was Doing It Right during the process. There were no questions about whether AI was involved, or about what software I used to create it, or anything like that. I did have to say that I’m the creator, of course, but since I am :) I don’t see a problem there.

Registering the copyright doesn’t mean it’s 100% correct, it just creates a legal presumption. Someone could still challenge it, arguing that I wasn’t really the creator at all. I think that would be very unlikely to succeed.

And in any case, here is a nice concrete counterexample to any remaining “you can’t copyright art produced with an AI” claims that might be floating around.

The image is, by the way, provided under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license, so feel free to do anything allowed by that license. :) Knock yourself out! Figuratively!

Extremely generous friend Karima also continues updating the virtual world region “AI Dreams in Art” with things she likes from my Twitter feed, etc, so drop by! It is getting blushingly positive reviews on the Social Medias; apparently there are significant numbers of people who have heard a lot about this AI Art stuff, but never really seen any. They seem to like mine! :)


A Photograph #MidJourney

As we’ve discussed, one of my favorite things is to give a text- or image-generating AI a vague and/or ambiguous prompt, and just see what happens. The results are sometimes kind of horrifying, but here I’m going to post a bunch of results that aren’t especially horrifying, and that are sometimes lovely.

The prompt for all of these is basically just “a photograph”. And what I really want to do (and I am realizing that there are various services out there that would let me do it without much fuss) is make a nice coffee-table book of these, accompanied by text produced by like NovelAI. Just because it would be neat.

What a world, eh?


One Inside of Another #MidJourney

I continue having way too much fun making images with MidJourney (and NightCafe, and now some things that I can’t quite show off yet).

I’m realizing that I’m a little weird, in that most people seem to be interested in just how exactly they can get the tool to produce an image that they’re thinking of, whereas I am almost entirely into typing somewhat random ambiguous stuff, and seeing what fun things the AI responds with.

For instance, here’s a snapshot of a whole bunch of images made using the prompt “one inside of another” with various seeds and switches and engine flows and things:

two dozen rather varied and ominous images

I love all of these (there were some I didn’t love, and I didn’t upscale those, so they aren’t here). The first two got me:

It seemed like there was really something going on there.

These two are with a slightly different engine flow than most of the others, but are no less wonderful:

What’s going on here? Is the AI showing wild creativity? Is it just starting in a basically random place due to the vague prompt, and then drifting into some weird random local minimum from there? Is that different from showing wild creativity?

Clearly there are lots of pictures of faces (especially women’s faces) and rooms with windows in the training set, so we get lots of those, that makes sense. But why do we get two different images that are (inter alia) the face of a person holding (and/or confronting) some semi-abstract sharp object? Why are there two faces which are split in half vertically, and one half striped / pixelated?

And what are these?

One thing is certainly inside of another. Is that a coincidence? Or is the AI “aware” of it in some sense?

I feel like I could swim in this stuff forever! That is what I thought at first about the GPT-3 stuff, though, and that wasn’t true. :) Still, if it’s just that I’m still in the initial flush of excitement, it’s a very fun flush.

Oh, and somewhat relatedly, here is a stealth announcement of a new graphic novel (or perhaps picture book) based on MidJourney images. This time I generated many many images from the same (small set of related) prompts, four at a time, and then tried to construct a story that would make sense with them. Note that this version is like 327MB for some reason, so click with care: Klara, Part 1.


Yes, works made with an AI can be copyrighted.

In fact in most cases works made with an AI, just like works made with a typewriter or a paintbrush or Photoshop, are copyrighted by the human who created them, the moment that they are “fixed” (to use the wording of the Berne convention). I’m writing this page mostly to address the many statements to the contrary that are all over the web, and that people keep posting on the MidJourney Discord and so on, so that I can like link to this page rather than typing it in yet again every time someone says it.

But I read that a series of rulings found otherwise!

Yes, sadly, I’m sure you did. Here are just a few samples of this misinformation (one is especially disappointed in Smithsonian Magazine, ffs). But if one reads beyond the misleading headlines, these are all about two decisions by the U.S. Copyright Office in the case of Thaler, and (tl;dr) all those decisions do is reject the theory that an AI can create something as a “work for hire”, and the person using the AI thereby get the copyright to it as the “employer”. Since in US law only persons or “individuals”, not including software or computers, can be “creators in fact” of a creative work, they reject that theory.

The decisions in the Thaler case most definitely do not say that a person who uses an AI program in the ordinary way, just like a person who uses a paintbrush in the ordinary way, doesn’t come to own the copyright to that thing automatically, in the ordinary way (as nicely explained here). And in various other countries, the copyright laws explicitly account for things generated by or with an AI, and acknowledge that copyright applies to them (see for instance this short survey).

(If you’re here just because someone posted you this link when you said that images made using AI can’t be copyrighted, that’s all you need to know, but feel free to read on etc!)

But when a person uses an AI, all the creativity is in the AI, so the person shouldn’t get a copyright!

No court case that I know of, in any country, has ever ruled this way. One might as well argue (and people did, when the technology was new) that there is no creativity in a photograph, since all you do is point the camera and push a button. And yet it’s (now) uncontroversial that people get copyright in the photographs that they take.

It’s easy to take a picture, but a good photographer picks a camera and lenses, decides where to point it and in what light to press the button, and then decides which images to keep. It’s easy to use an AI to make a picture, but a good user of an AI image tool picks an engine and settings, decides what prompt(s) to give it and with what switches to invoke it, and then decides which images to keep. I think those are very analogous; you may disagree. The courts have not yet weighed in as of this writing, but it seems to me that denying copyright because a particular kind of software was involved in a certain way would be a mess that courts would not want to wade into.

If there hasn’t been a positive ruling in the US, though, it could turn out…

I agree, since the law doesn’t explicitly say that a person using an AI to make an image has the copyright in the image, and because the “all the creativity is in the AI” argument does exist, it’s not impossible that some US court could find that way. So one might not want to risk anything really important on that not happening.

What’s up with Thaler, anyway?

Thaler is, well, an interesting character, it seems. He believes that some AI programs he has created have had “near death experiences”, and he has attempted to obtain patents with an AI program as the inventor, as well as the attempts to cast them as work-for-hire employees for copyright purposes, as mentioned above. An individual before his time, perhaps. Perhaps.

Update: What if the ToS of a service says…

As a couple of people asked / pointed out after I posted this, sometimes the Terms of Service on a site where you can create stuff, says or implies that you do not own the copyright to the stuff, but they do, and they grant you some sort of license.

The MidJourney ToS, in fact, currently says that “you own all Assets you create with the Services” with a few exceptions including ‘If you are not a Paid Member, Midjourney grants you a license to the Assets under the Creative Commons Noncommercial 4.0 Attribution International License (the “Asset License”).’ This is a bit terse and ambiguous, but the obvious interpretation is that in that case MidJourney owns the Assets, and grants the user a certain CC license.

As far as I know, it isn’t well-established in IP law whether a ToS can unilaterally change who owns what like this; if anyone knows more, I’d be interested! But in any case, this sort of thing still says or implies that someone owns the rights, so it doesn’t directly impact the overall subject here.

Update 2: Show me an actual AI artwork that is registered with the US Copyright office!

Funny you should ask! :)

This is boring, post a picture!

A strange surreal owl-thing or something

Sunday in August

Brilliant title, eh? :) Various things have occurred! I will mention some of them.

There is now a full draft of my graphic novel(ette); it’s 40 pages, and about 50MB, so don’t expect your phone to pop it up very quickly. And also don’t expect it to be that good :) as I’ve never really written in this medium before, and it’s tough. In the most recent draft I removed considerable exposition which felt out of place, replacing it with images and short statements. Now I’m afraid the result is basically incomprehensible :) at least to anyone not already familiar with the SF tropes I’m touching on.

It was really fun to do, though! As I’ve mentioned, all of the art was done using MidJourney, and the compositing and most of the text was done in the GIMP. I got a few nice pieces of display text too; if I’d thought about it a little harder, I would have used something more interesting than Sans-Serif Bold (and sometimes Italic) from the GIMP font collection. (There’s a little Verdana, just on the copyright page at the end I think.)

This was the most fun when I was putting together the images that I’d already created that inspired the story in the first place. It was more frustrating when I needed a particular image and was trying to create it in MidJourney; it is sometimes a challenge to generate a specific thing! The water droplet at the very end, for instance, came after many, many attempts to make a crystal / water sphere that wasn’t sitting on a surface of some kind.

Other things! In order to get even more meta than this, we entered “Here is a short description of an image that has some interesting visual elements:” into NovelAI. It responded:

A man wearing a white t-shirt and blue jeans sits in his chair, staring at the television. His eyes are closed as he watches a show about two women discussing the weather. The screen reads ‘NBC News’ with a picture of a woman.

and I typed that into MidJourney, and got this:

Four rather fuzzy pictures containing a person and some TV screens

So that’s in some sense entirely AI-generated, using a human-designed procedure. It’s also really boring!

Let’s try again; this time NovelAI says:

A woman in a white dress, standing on a rocky beach. The ocean is behind her and the setting sun makes for a bright glare to one side of her face as she looks out into the water.

and MidJourney responds with (after a considerable delay, because I am in relax mode, which is basically “nice -n 19”):

Four nice slightly impressionistic images of a woman standing on rocks by the water with the sun low.

which is quite nice (although again not exactly what the prompt says).

So there you are: the first two (or eight) images produced by a particular meta-algorithm using Modern AI Technology! :)
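The meta-algorithm here is really just composing the two models: the text AI writes an image description, and the image AI renders it. A minimal sketch of that pipeline, with stand-in stub functions for the two services (neither NovelAI nor MidJourney exposed a Python API for this that I know of; in practice I did both steps by hand in their UIs, so these stubs are purely hypothetical):

```python
from typing import Callable

def describe_then_render(
    seed_prompt: str,
    text_model: Callable[[str], str],
    image_model: Callable[[str], bytes],
) -> bytes:
    """Two-stage pipeline: a text model writes an image description,
    and an image model renders that description."""
    description = text_model(seed_prompt)
    return image_model(description)

# Hypothetical stand-ins so the sketch runs; the real steps were
# manual copy-paste between the NovelAI and MidJourney interfaces.
fake_text_model = lambda p: "A woman in a white dress on a rocky beach."
fake_image_model = lambda d: f"<image rendered from: {d}>".encode()

result = describe_then_render(
    "Here is a short description of an image that has some "
    "interesting visual elements:",
    fake_text_model,
    fake_image_model,
)
```

Passing the models in as plain callables is just to make the point that the human contribution here is the procedure itself, not either model.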

Other things are to a great extent prevented from occurring by the fact that it is Very Humid outside, and there are Pandemics and so on still. I went out to get bagels this morning, and I was like “yow, what is this very large humid windy room here?”. There’s a chance I’ll get into Manhattan next week; that will be quite a shock!

I have not been playing Video Games to speak of, because all of this AI stuff has been more interesting. There is all sorts of stuff to say about legal issues (Yes, content generated using an AI can be copyrighted by the human creator!) and societal issues (impact of AI on artists and art perhaps similar to impacts of photography on same?) and all like that there. But it is more fun to make cool pictures!

So in closing here is the one I used on the copyright page of the Graphic Novel(ette). Be well!

A surreal image of maybe a sheep standing in shallow water looking at maybe like a blimp made of sticks or something.

So many AIs and images and stuff!

I was thinking of a post extending the legal thoughts from last time to talk about this widespread claim (based on the Thaler decisions that I mentioned briefly there) that “Artwork made with an AI can’t be copyrighted”. It’s all over the clickbait-website press, and it’s wrong. The rulings in question said that an AI can’t be the creator-in-fact of a work (in the U.S.), so someone can’t get copyright to a work based on being the “employer” of the creator-in-fact AI. But they say nothing about the obvious alternative that a human can be the creator (simpliciter) of a work made with an AI, just as a human can be the creator of a work made with Photoshop, or a paintbrush.

Heh heh, I guess I’ve already written a bit about that here now, haven’t I? But there are various arguments and counterarguments that one could talk about that I’m not going to.

Then there’s the fact that I’ve been generating So Many Images in Midjourney, which for a while there had pretty much entirely drawn me away from NightCafe. As well as those So Many Images, I’ve started to put a bunch of them together in the GIMP in the form of a sort of amateur manga or graphic story that attempts to have an actual plot and stuff; here’s a pdf of the story (the first 10 pages of it, which is all that currently exists), at considerably reduced resolution so it isn’t like over 30MB. Feedback welcome. :)

But then! By which I mean just today I think! NightCafe has become very interesting again, due to adding the Stable Diffusion engine. Which I have been using extensively, and have noted that:

  • It is kind of boring compared to the other engines I’ve used, in that it seems to usually take the simplest and most quotidian interpretation of a prompt, and create the most unremarkable (and, admittedly, sometimes impressively realistic!) image possible from it.
  • The right set of adjectives and so on can get more interesting results from it sometimes. The prompt prefix for Yeni Cavan, for instance, produces recognizably Yeni Cavan images, but somewhat less smoky and mysterious ones than Midjourney or the NightCafe Artistic engine do.
  • It has some kind of risible post-censorship blurring algorithm, and if a picture looks too naughty to that algorithm, it comes out with a very heavy blur applied. I have (accidentally) gotten one NSFW image that its filter didn’t detect, and on the other hand just including “in the style of Gauguin” in a prompt seems to pretty reliably produce just a blur. (“Well, yeah, he’s in the training set, but his stuff is really too naughty to output.”) I mean, /facepalm and all.
  • Update: when I reported a couple of very obvious porn-filter false positives, NightCafe support replied that the filter should be gone / optional in “a few days”. Very gratifyin’!
  • I wish NightCafe had an “effectively free, but might be slow” generation mode like Midjourney does. After playing with Stable Diffusion for hours I’m nearly out of NightCafe credits, and given the overall experience I will probably just go back to Midjourney now and make more images for the comic. :)

So that’s those things! But mostly it’s been lots of cool pictures. We will close with a recent one from Midjourney:

Atomic surrealism detailed render

and something that Stable Diffusion did (rather interestingly) with the same prompt:

Atomic surrealism detailed render

Stay surreal! :D


Is it plagiarism? Is it copyright infringement?

So I’ve been producing so many images in Midjourney. I’ve been posting the best ones (or at least the ones I decide to post) in the Twitters; you can see basically all of them there (apologies if that link’s annoying to use for non-Twitterers). And an amazing friend has volunteered to curate a display of some of them in the virtual worlds (woot!), which is inexpressibly awesome.

Lots of people use “in the style of” or even “by” with an artist’s name in their Midjourney prompts. I’ve done it occasionally, mostly with Moebius because his style is so cool and recognizable. It did imho an amazing job with this “Big Sale at the Mall, by Moebius”:

“Big Sale at the Mall, by Moebius” by Midjourney

It captures the coloration and flatness characteristic of the artist, and also the feeling of isolation in huge impersonal spaces that his stuff often features. Luck? Coolness?

While this doesn’t particularly bother me for artists who are no longer living (although perhaps it should), it seems questionable for artists who are still living and producing, and perhaps whose works have been used without their permission and without compensation in training the AI. There was this interesting exchange on Twitter, for instance:

The Midjourney folks replied (as you can I hope see in the thread) that they didn’t think any of this particular artist’s works were in the training set, and that experimentally adding their name to a prompt didn’t seem to do anything to speak of; but what if it had? Does an artist have the right to say that their works which have been publicly posted, but are still under copyright of one kind or another, cannot be used to train AIs? Does this differ between jurisdictions? Where they do have such a right, do they have any means of monitoring or enforcing it?

Here’s another thread, about a new image-generating AI (it’s called “Stable Diffusion” or “Stability AI”, and you can look it up yourself; it’s in closed beta apparently and the cherrypicked images sure do look amazing!) which seems to offer an explicit list of artists, many still living and working, that it can forge, um, I mean, create in the style of:

So what’s the law?

That’s a good question! I posted a few guesses on that thread (apologies again if Twitter links are annoying). In particular (as a bulleted list for some reason):

  • One could argue that every work produced by an AI like this, is a derivative work of every copyrighted image that it was trained on.
  • An obvious counterargument would be that we don’t say that every work produced by a human artist is a derivative work of every image they’ve studied.
  • A human artist of course has many other inputs (life experience),
  • But arguably so does the AI, if only in the form of the not-currently-copyrighted works that it was also trained on (as well as the word associations and so on in the text part of the AI, perhaps).
  • One could argue that training a neural network on a corpus that includes a given work constitutes making a copy of that work; I can imagine a horrible tangle of technically wince-inducing arguments that reflect the “loading a web page on your computer constitutes making a copy!” arguments from the early days of the web. Could get messy!
  • Perhaps relatedly, the courts have found that people possess creativity / “authorship” that AIs don’t, in at least one imho badly-brought case on the subject: here. (I say “badly-brought” just because my impression is that the case was phrased as “this work is entirely computer generated and I want to copyright it as such”, rather than just “here is a work that I, a human, made with the help of a computer, and I want to assert / register my copyright”, which really wouldn’t even have required a lawsuit imho; but there may be more going on here than that.)
  • The simplest thing for a court to decide would be that an AI-produced work should be evaluated for violating copyright (as a derivative work) in the same way a human-produced work is: an expert looks at it, and decides whether it’s just too obviously close a knock-off.
  • A similar finding would be that an AI-produced work is judged that way, but under the assumption that AI-produced work cannot be “transformative” in the sense of adding or changing meaning or insights or expression or like that, because computers aren’t creative enough to do that. So it would be the same standard, but with one of the usual arguments for transformativity ruled out in advance for AI-produced works. I can easily see the courts finding that way, as it lets them use an existing (if still somewhat vague) standard, but without granting that computer programs can have creativity.
  • Would there be something illegal about a product whose sole, primary, or at least major purpose was to produce copyright-infringing derivative works? The DMCA might possibly have something to say about that, but since it’s mostly about bypassing copy protections (and there really aren’t any involved here), it’s more likely that the rules for, I dunno, photocopiers or something would apply.

So whew! Having read some of the posts by working artists and illustrators bothered that their works, and their colleagues’, are being used for profit in a way that might actively harm them (and having defended that side of the argument against one rather rude and rabid “it’s stupid to be concerned” person on the Twitter), I’m now feeling some more concrete qualms about the specific ability of these things to mimic current artists (and maybe non-current artists whose estates are still active).

It should be very interesting to watch the legal landscape develop in this area, especially given how glacially slowly it moves compared to the technology. I hope the result doesn’t let Big AI run entirely roughshod over the rights of individual creators; that would be bad for everyone.

But I’m still rather addicted to using the technology to make strange surreal stuff all over th’ place. :)