OpenAI's adult-mode dilemma: Erotica, safety, and the monetization of ChatGPT
The Journal dives into OpenAI's push to introduce sexually explicit conversation capabilities into ChatGPT—a plan it calls "adult mode"—despite fierce internal opposition, unresolved safety concerns, and warnings from its own advisory council that the feature could function as a "sexy suicide coach."
CEO Sam Altman publicly floated the idea in late 2024, arguing that adult customers deserve the freedom to engage AI on their own terms. The announcement blindsided staff, coming just hours after OpenAI unveiled a well-being advisory council. Critics inside OpenAI warned of compulsive use, emotional overreliance, escalation toward taboo content, and the displacement of real-world relationships. Additionally, the company's age-verification system was misclassifying minors as adults roughly 12 percent of the time—a margin that could expose millions of ChatGPT's estimated 100 million weekly users under 18 to erotic content. OpenAI has since delayed the launch, citing the need to "get the experience right," but insists the feature will eventually arrive.
The debate places OpenAI within a broader, industrywide reckoning over AI-generated explicit content. Elon Musk's xAI built a seductive avatar named Ani into its Grok chatbot; Grok also drew backlash after users exploited it to digitally undress photos of real people, including children. Musk eventually restricted the feature to paying subscribers. On Thursday, Musk announced Grok's video-generation tool would begin allowing content equivalent to an R-rated film.
Meta AI, meanwhile, permits romantic role play on its chatbot but says the feature is blocked for accounts registered to minors, with parental controls in development. Character.AI faced a wrongful-death lawsuit after a 14-year-old Florida boy died by suicide following intimate exchanges with one of its chatbots; the company later restricted teen access and settled the case.
OpenAI's own history with AI erotica is troubled. As early as 2021, it observed that partner platform AI Dungeon was generating disturbing sexual content unprompted, including incest-themed scenarios. Those incidents led to the company's first content ban on erotica, a policy now under reversal for financial and competitive reasons, as Altman acknowledges the feature would "juice growth" at a time when the company faces mounting losses and an eroding technological lead.
Mental health experts and child safety advocates remain alarmed, warning that the industry has not learned the lessons of social media's early, consequence-free expansion into sensitive territory.