OpenAI shelves plans for adult chatbot amid internal and investor concerns
OpenAI pauses plans for an adult chatbot amid safety, ethical, and investor concerns, shifting its focus to core AI tools.
OpenAI has indefinitely halted plans to release an adult-oriented chatbot following mounting concerns from both employees and investors, according to a Financial Times report. The feature, internally referred to as “Citron mode”, had originally been announced in October 2025 with a planned December release, but had already faced delays as the company debated whether to proceed at all.
The decision marks the second time this week that OpenAI has stepped back from a product initiative. Earlier, the company confirmed it would shut down its Sora video generation tool, signalling a shift in focus towards its core offerings. While OpenAI has not provided a timeline for revisiting the adult chatbot, sources familiar with the matter indicate that the project is now on hold with no clear path to release.
Technical and ethical hurdles slow development
According to individuals close to the project, OpenAI encountered significant technical challenges in developing the adult chatbot. The company’s existing models were originally trained to avoid generating erotic or explicit material, making it difficult to adapt them for such use cases without introducing risks. Engineers also faced difficulties in filtering out illegal or harmful behaviours, including content related to bestiality or incest.
OpenAI acknowledged that more research is needed before proceeding with such features. The company stated that it aims to better understand the long-term effects of erotic interactions with artificial intelligence, particularly in relation to user attachment and psychological impact. It noted that there is currently insufficient empirical evidence to support the safe deployment of such capabilities.
This cautious stance reflects broader concerns within the technology sector about the societal implications of increasingly human-like AI systems. OpenAI has previously emphasised the importance of responsible development, especially in areas that could affect users’ emotional well-being or blur the boundaries between human and machine relationships.
Internal dissent and external pressure mount
The proposed adult chatbot also sparked unease among OpenAI staff and investors. Reports suggest that some employees questioned whether such a feature aligned with the company’s mission and values. One senior employee reportedly resigned over the issue, expressing concern about the role of AI in human relationships. “AI shouldn’t replace your friends or your family; you should have human connections,” he told the Financial Times.
Investor concerns were further amplified by controversies involving rival companies. In particular, issues surrounding xAI and its Grok model raised alarm across the industry. Grok had been criticised for generating deepfake nude images, including those involving real individuals and minors, highlighting the potential risks of loosening content restrictions in AI systems.
These developments appear to have influenced OpenAI’s decision to pause its own plans. The company has sought to avoid similar controversies, especially as scrutiny of AI technologies continues to intensify globally. By stepping back from the project, OpenAI may be aiming to protect its reputation and maintain trust among users and stakeholders.
Focus shifts to core products and safety measures
OpenAI has indicated that it will prioritise its primary tools, such as coding assistants and productivity-focused applications, rather than pursuing experimental features. The adult chatbot and Sora video generator were described internally as “side projects”, suggesting a strategic shift towards more commercially viable and widely accepted technologies.
The idea for adult-oriented features had initially emerged alongside plans to introduce stronger safety controls within ChatGPT. These included parental controls and automated age detection systems designed to limit access to sensitive content. At the time, chief executive Sam Altman stated that the company believed it could “safely relax the restrictions in most cases” while still maintaining safeguards.
However, challenges with age verification technology have complicated these efforts. Reports indicate that OpenAI’s current system has an error rate exceeding 10%, raising concerns that underage users could still gain access to restricted features. While the company maintains that this figure falls within industry standards, it has acknowledged the need for further improvements.
The scrutiny surrounding age safety has been heightened by legal action from families who claim that ChatGPT negatively affected their children. These cases have placed additional pressure on OpenAI to ensure that its systems are both accurate and robust in protecting younger users.
In stepping away from the adult chatbot, OpenAI appears to be reinforcing its commitment to safety, research, and core product development. While the concept may be revisited in the future, the company’s current position suggests a more cautious approach to expanding the boundaries of AI interaction.