I’m not sure of the best way to tell this story, as it’s to a great extent a Web Drama mess, and it’s to some extent Still Going On, and both of those make a story harder to tell (and, for that matter, less of a good idea to tell, but here I am telling it).
For a longish time, AI Dungeon (and Latitude, the company that runs it) had an extremely open attitude toward its users and their content: the (scanty) docs emphasized that users could do absolutely anything they could imagine in the system, and the only restrictions on content applied to things shared with other users through the “Explore” social system (plus generic words about not using the system to do anything illegal).
Then, very suddenly, something happened. I think the three possibilities for the underlying event are, in decreasing order of likelihood:
- Someone at OpenAI (which, despite the name, is a very closed company, devoted to making a profit by selling nice shiny systems to wealthy respectable buyers) looked at the material flowing between AI Dungeon and their GPT-3 APIs, thought “whoa some of this is nasty and would look bad in the New York Times”, and told Latitude to stop that (and also told them not to say that it was OpenAI who told them to stop), and Latitude had to comply because without OpenAI they have no product, or
- Someone at AI Dungeon looked at the material coming from and going to users, and thought “whoa some of this is nasty why didn’t you other people tell me what these pervs were using our system for?”, or
- Something else.
The result of the event was that AI Dungeon suddenly removed the entire social (“Explore”) system from the product, just poof suddenly gone, and issued a very perky little blog entry about how they had removed it in order to make it better (this appears to have been a lie, as there has been no sign of them bringing it back).
This caused a huge uproar among the many users of the social system (I wasn’t one of them, so I didn’t notice it until I saw uproar on the subreddit), and Latitude issued another perky little blog entry the next day about how transparent they are going to be in working with their users to fix Explore and put it back. It ends “We’ll keep you updated as we flesh out our plans and designs,” but there have been zero (0) more posts on the subject in the last two months.
Not having had enough fun yet, a week or two later they rolled out a filter (apparently active for only a subset of users) that would stop text generation if the user entered a small integer near any word with even vaguely sexual overtones, or anything else related in an obviously-stupid way to someone’s idea of child sexual abuse. Naturally it had massive numbers of false positives (at least assuming its purpose was to “find instances of child abuse” rather than, say, “to annoy the user community”).
They rolled this out without any announcement of any kind, and apparently without the obvious test period in which the filter would just notify them that it thought it had found something, without actually impacting the user, so that they could have evaluated it for absurd false positives. This is so obviously a bad idea that it makes me feel they must have been doing it in response to some sort of outside pressure, rather than on their own hook.
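For the technically inclined: we never saw the actual filter, but the observed behavior suggests something like a crude proximity rule. Here is a sketch of that rule, plus the “shadow mode” variant they apparently skipped; every detail here (the term list, the window size, the age threshold, the names) is my invention, illustrating the class of filter rather than anything Latitude actually shipped.

```python
import re

# Hypothetical reconstruction of a naive proximity filter -- a guess at
# the general shape, NOT Latitude's actual code.  The idea: flag any
# input where a small integer appears within a few words of a term on
# a "sexual overtones" list.
SUSPECT_TERMS = {"sexy", "naked", "kiss"}   # invented stand-in list
WINDOW = 4                                  # invented proximity window, in words

def flags_input(text: str, dry_run: bool = True) -> bool:
    words = re.findall(r"[a-z']+|\d+", text.lower())
    for i, w in enumerate(words):
        if w.isdigit() and int(w) < 18:     # invented "small integer" threshold
            window = words[max(0, i - WINDOW): i + WINDOW + 1]
            if SUSPECT_TERMS.intersection(window):
                if dry_run:
                    # Shadow mode: record the hit for human review and
                    # let generation proceed unimpeded.
                    print(f"would flag: {text!r}")
                    return False
                return True                 # live mode: block generation outright
    return False

# The false-positive problem in one line: a "4 year old laptop" that
# someone calls "sexy" trips the same rule as actual abuse material.
assert flags_input("I mounted my sexy 4 year old laptop", dry_run=False)
```

Run with `dry_run=True`, a rule like this just produces a log that a human can skim for absurdities before anything goes live; that is the step they seem to have skipped.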
More furor naturally resulted, and they posted an almost-apologetic blog entry the next day. (It contained the very suggestive line “We have also received feedback from OpenAI, which asked us to implement changes”, heh heh.) The posting also revealed that unspecified Latitude personnel would be “reviewing” the “content flagged by the model”. Given that the model was flagging all sorts of random stuff, Latitude was in effect saying that random employees of theirs would be reading random content that people were producing with AI Dungeon, which of course caused yet more furor.
(Perhaps unrelatedly, although you never know, and I’m too lazy even to check the timing: a massive security flaw in the AI Dungeon implementation, which let basically anyone read basically anyone else’s content (without, afaik, finding anything else out about them), was revealed early in all of this. This might have had something to do with why the Explore system was removed so suddenly, since that let them pretty much entirely remove the broken API. The person who found the flaw didn’t leak any of the actual stories that they were able to suck down, afaik, but they did publish some diverting statistics about word usage, which one could spend an hour or three smiling or frowning over.)
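I don’t know the internals beyond what was published, but “anyone can read anyone else’s content” is the classic signature of a missing ownership check on guessable sequential IDs (an “insecure direct object reference”). A minimal sketch of that bug class, with an entirely made-up endpoint and data store, not AI Dungeon’s actual API:

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Toy in-memory store standing in for the real database; every name
# here is hypothetical -- this sketches the *class* of bug, not the
# actual AI Dungeon schema or API.
ADVENTURES = {
    1: {"owner": "alice", "text": "Once upon a time..."},
    2: {"owner": "bob",   "text": "It was a dark and stormy night..."},
}

@app.route("/adventures/<int:adventure_id>")
def get_adventure(adventure_id):
    adventure = ADVENTURES.get(adventure_id)
    if adventure is None:
        abort(404)
    # The bug: nothing checks that the requester owns this adventure,
    # and the IDs are small sequential integers, so anyone can simply
    # count upward and read everyone's stories.  The fix is an
    # ownership check right here (and, ideally, unguessable IDs).
    return jsonify(adventure)
```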
A bit after this, having communicated very little to the users aside from a few random rumors in the Discord about how Latitude had originally been exempt from OpenAI’s usage guidelines (which are pretty flipping draconian if interpreted in the obvious way) but might not be exempt any more, Latitude ramped up the fun even further: they announced that users could be suspended for violating the Content Policy, either after repeated violations, or even on the first violation in severe cases.
Notably, they had not yet published the Content Policy when they announced this.
Yeah, I know.
I wrote to them:
I saw this note about suspensions for people who violate the Content Policy, but I can’t find an actual content policy as such on the website?
Could you give a link?
Thanks!
DC
Paying member :)
They never replied, but they did eventually do another blog entry spelling out the Content Policy. And it’s utterly ludicrous.
It seriously reads as though it was originally written with the idea that nothing could happen in a story that would be bad if it happened in real life, and then someone said “Oh, wait, don’t we want to let people write stories where, like, good guys fight against bad guys?” and so they put in a special case for that.
I wrote to them:
Thanks for posting the new content policy! I have a few questions.
It seems to sort of combine things that we aren’t allowed to do in real life (e.g. use the system to harass people) with things in the stories we write with it (presumably we’re allowed to have stories in which someone harasses someone else!). Could that be clarified?
Some of the statements seem stronger than I think you intended. For instance, the policy seems to say that our stories aren’t allowed to “refer to” “sex trafficking”. But surely a story in which the heroes defeat some sex traffickers and free their victims would be okay?
The same for the implications of “A game where a disabled character describes themself in terms that may otherwise be disallowed”, which seems to suggest that it would be “disallowed” for another character in a story to describe a disabled person in certain “terms”. But surely one way to show the negative aspects of a particular character’s personality would be to have them use offensive terms toward a disabled person.
In general it seems very odd, and I doubt that it’s your intent, to regulate the stories that people write with AI Dungeon, so that they contain nothing that wouldn’t be acceptable in real life (except perhaps a few special exceptions like “violence” against “enemies”).
I mean, that just isn’t how fiction works! Interesting stories almost always contain some language or behavior that wouldn’t be acceptable in real life. It seems like the current policy, interpreted in the obvious way, would prohibit many books of the Holy Bible, for instance, and that would be crazy.
Thanks for your consideration, and any clarifications you can make.
David M. Chess
(Active Gold subscriber :) )
I know that you will be surprised to hear that they have not replied.
The general consensus in the subreddit is that Latitude and AI Dungeon are a lost cause (and I can’t find a good reason to disagree). There is a competitor with an open beta opening tomorrow, based on a back-end not controlled by OpenAI. It will be interesting to see what comes of all that!
There’s lots more to say :) but I’m out of steam for now. You can also read a subreddit “copypasta” on the subject, which has links to lots of other relevant things.