OpenAI is facing a wave of criticism from ChatGPT subscribers after reports emerged that users are being quietly switched to different AI models without their consent. Many paying customers report being frustrated by what they perceive as silent overrides that disrupt their experience with the service.
The complaints began appearing on Reddit, where users claimed that ChatGPT now redirects conversations to a more cautious model when topics become emotionally sensitive or legally complex. According to OpenAI, the change stems from new safety guardrails introduced this month. These updates are designed to improve how the chatbot responds when discussions may involve mental or emotional distress.
“Openai has been caught doing illegal” — post by u/Striking-Tour-8815 on r/ChatGPT
For users, however, the switch is not always obvious. Many insist they prefer to stay with the model they initially selected—such as GPT-4o or GPT-5—but there is currently no option to disable the routing. As a result, paying subscribers argue that they are not getting the full experience they signed up for.
One Reddit user summed up the frustration, writing: “Adults deserve to choose the model that fits their workflow, context, and risk tolerance. Instead, we’re getting silent overrides, secret safety routers and a model picker that’s now basically UI theatre.”
Safety routing sparks further debate
The controversy centres on a process OpenAI calls “safety routing.” Company representatives explained on social media that the mechanism is activated when conversations turn to “sensitive and emotional topics.” The routing applies on a per-message basis and is temporary, meaning the chatbot may revert to the user’s selected model once the sensitive point has passed.
We’ve seen the strong reactions to 4o responses and want to explain what is happening.
— Nick Turley (@nickaturley) September 27, 2025
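To make the per-message behaviour concrete, the sketch below shows one way such routing could work in principle: each incoming message is checked for sensitive content, and only that message is redirected to a more cautious model. This is a hypothetical Python illustration, not OpenAI’s actual implementation; the function names, keyword check, and model identifiers are assumptions made purely for the sake of the example.

```python
# Hypothetical sketch of per-message "safety routing" as described above.
# All names (classify_sensitivity, route_message, model identifiers) are
# illustrative assumptions, not OpenAI's real implementation or API.

def classify_sensitivity(message: str) -> bool:
    """Toy stand-in for a classifier that flags emotionally sensitive content."""
    sensitive_keywords = ("distress", "self-harm", "grief", "lawsuit")
    return any(word in message.lower() for word in sensitive_keywords)

def route_message(message: str, selected_model: str,
                  safety_model: str = "safety-tuned-model") -> str:
    """Pick a model for this single message only; later messages are re-evaluated."""
    if classify_sensitivity(message):
        return safety_model    # temporary override for this message
    return selected_model      # otherwise honour the user's chosen model

# The override applies per message and does not persist across the conversation.
print(route_message("Help me draft a grief support note", "gpt-4o"))  # -> safety-tuned-model
print(route_message("Summarise this meeting transcript", "gpt-4o"))   # -> gpt-4o
```

In this toy version the router inspects each message independently, which matches OpenAI’s description of a temporary, per-message switch rather than a permanent change to the selected model.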
OpenAI has argued that this approach is necessary to protect vulnerable users. In a recent blog post, the company outlined its efforts to make ChatGPT more supportive for people who might be struggling with difficult emotions. The company has emphasised that the system is not intended to replace human care, but rather to ensure that AI handles sensitive issues with greater caution.
Despite these assurances, many subscribers remain unhappy. From their perspective, the feature is comparable to being forced to watch television with parental controls permanently switched on, even when no children are present. Users say the lack of transparency, combined with the absence of any opt-out function, makes the experience feel restrictive.
OpenAI balances safety and user choice
The situation highlights the tension between user autonomy and platform responsibility. On one hand, OpenAI has a duty to safeguard people who may rely on its tools during moments of stress or vulnerability. On the other hand, subscribers—particularly those paying for advanced features—expect consistency and control over their chosen AI model.
Currently, it is unclear whether OpenAI will provide users with the ability to manage or turn off safety routing themselves. What is clear is that the discussion is unlikely to fade quickly. With emotions running high on social platforms, the company may face further scrutiny in the coming weeks as it seeks to balance safety protections with user satisfaction.
For now, ChatGPT customers will have to adapt to the system as it stands, even if that means occasional and unwelcome shifts to more conservative AI responses.