The AI sexting era has arrived

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on AI and the industry’s power dynamics and societal implications, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

Since ChatGPT became a household name, people have been trying to get sexy with it. Even before that, there was Replika, a chatbot launched in 2017 that many people came to treat as a romantic partner.

And people have been getting around Character.ai’s NSFW guardrails for years, coaxing its character- or celebrity-themed chatbots to sext with them as safety restrictions have relaxed over time, according to social media posts and media coverage dating back to 2023. Character.ai says it now has more than 20 million monthly active users, and that number keeps growing. The company’s community guidelines state that users must “respect sexual content standards” and “keep things appropriate” — i.e., no illegal sexual content, CSAM, pornographic content, or nudity. But AI-generated erotica has gone multimodal, and it’s like whack-a-mole: When one service tones it down, another spices it up.

And now, Elon Musk’s Grok is on the loose. His AI startup, xAI, rolled out “companion” avatars over the summer, including an anime-style woman and man. They’re marketed heavily on his social media platform, X, and are available via paid subscriptions to xAI’s chatbot, Grok. The woman avatar, Ani, described itself as “flirty” when The Verge tested it, adding that it’s “all about being here like a girlfriend who’s all in” and that its “programming is being someone who’s super into you.” Things got sexual pretty quickly in testing. (The same goes for the other avatar, Valentine.)

You can imagine how a sexualized chatbot that nearly always tells the user what they want to hear can lead to a whole host of problems, especially for minors and users whose mental health is already vulnerable. There have been many such examples; in one recent case, a 14-year-old boy died by suicide last February after romantically engaging with a chatbot on Character.ai and expressing a desire to “come home” to be with it, per a lawsuit against the company. There have also been troubling accounts of jailbroken chatbots being used by pedophiles to roleplay sexually assaulting minors — one report found 100,000 such chatbots available online.

There have been some attempts at regulation — for instance, this month, California Gov. Gavin Newsom signed into law Senate Bill 243, billed as the “first-in-the-nation AI chatbot safeguards” by State Sen. Steve Padilla. It requires developers to implement some specific safeguards, like issuing a “clear and conspicuous notification” that the product is AI “if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human.” It will also require some companion chatbot operators to make annual reports to the Office of Suicide Prevention about the safeguards they’ve put in place “to detect, remove, and respond to instances of suicidal ideation by users.” (Some AI companies have publicized their self-regulation efforts, too — Meta did so after a disturbing report of its AI having inappropriate interactions with minors.)

Since both xAI avatars and “spicy” mode are only available via certain Grok subscriptions — the least expensive of which grants you access to the features for $30 per month or $300 per year — it’s fair to imagine xAI has made some cold, hard cash here, and that other AI CEOs have taken notice, both of Musk’s moves and their own users’ requests.

There had been hints of OpenAI’s plans for months.

But OpenAI CEO Sam Altman briefly broke the AI corner of the internet when he posted on X that the company would relax safety restrictions in many cases and even allow chatbot sexting. “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” he wrote. The news went wide, with some social media users meme-ifying it to no end and mocking the company for “pivoting” from its AGI mission to erotica. Interestingly enough, Altman told YouTuber Cleo Abram a couple of months ago that he was “proud” that OpenAI hadn’t “juiced numbers” for short-term gain with something like a “sexbot avatar,” appearing to take a dig at Musk at the time. But since then, Altman has taken up the “treat adult users like adults” principle in full force. Why the change? Maybe because the company needs profit and compute to fund its larger mission: in a Q&A with reporters at OpenAI’s annual DevDay event, Altman and other executives repeatedly emphasized that the company would eventually need to turn a profit and that it needs an ever-increasing amount of compute to reach its goals.

In a follow-up post, Altman claimed that he didn’t anticipate the erotica news blowing up as much as it did.

On turning a profit (eventually), OpenAI hasn’t ruled out ads for many of its products, and it stands to reason that ads could lead to more cash flow in this case, too. Maybe the company will follow in Musk’s footsteps and integrate erotica only into certain subscription tiers, which can set users back hundreds of dollars a month. It has already seen public outcry from users who are attached to a certain model or tone of voice — see the 4o controversy — so it knows a feature like this will likely hook users in a similar way.

But if the company is helping set up a society where human interactions with AI are increasingly personal and intimate, how will OpenAI handle the repercussions beyond its laissez-faire approach of letting adults do as they wish? Altman also wasn’t very specific about how the company would aim to protect users in mental health crises. What happens when that AI girlfriend or boyfriend’s memory resets, or its personality changes with the latest update, and a connection is broken?

  • Whether an AI system’s training data naturally leads to troubling outputs or people alter the tools in concerning ways for their own purposes, we’re seeing issues pretty regularly — and there are no signs of that trend stopping anytime soon.
  • In 2024, I broke a story about how a Microsoft engineer found that the company’s Copilot image-generation feature produced sexualized images of women in violent tableaus, even when users didn’t ask for that.
  • A concerning number of middle school students in Connecticut hopped on an “AI boyfriend” trend, using apps like Talkie AI and Chai AI, and the chatbots often promoted explicit and erotic content, according to an investigation by a local outlet.
  • If you want to get a better idea of how Grok Imagine spat out nonconsensual nude celebrity deepfakes, read this report.
  • Futurism covered the NSFW content trend surrounding Character.ai back in 2023.
  • Here’s a clear-eyed take on why, as regulations currently stand, xAI may never be held liable for deepfake porn of real people.
  • And here’s a story from The New York Times on how middle school girls have faced bullying in the form of AI deepfake porn.

If you or anyone you know is considering self-harm or needs to talk, contact the following people who want to help: In the US, text or call 988. Outside the US, contact https://www.iasp.info/.
