Tuesday, 30 September 2025

OpenAI faces backlash over secret ChatGPT model switching

OpenAI faces criticism from ChatGPT users over a secret safety feature that switches models during sensitive conversations.

OpenAI is facing a wave of criticism from ChatGPT subscribers after reports emerged that users are being quietly switched to different AI models without their consent. Many paying customers report being frustrated by what they perceive as silent overrides that disrupt their experience with the service.

The complaints began appearing on Reddit, where users claimed that ChatGPT now redirects conversations to a more cautious model when topics become emotionally sensitive or legally complex. According to OpenAI, the change stems from new safety guardrails introduced this month. These updates are designed to improve how the chatbot responds when discussions may involve mental or emotional distress.

(Embedded Reddit post by u/Striking-Tour-8815 in r/ChatGPT: "Openai has been caught doing illegal")

For users, however, the switch is not always obvious. Many insist they prefer to stay with the model they initially selected—such as GPT-4o or GPT-5—but currently there is no option to turn off the system. As a result, paying subscribers argue that they are not getting the full experience they signed up for.

One Reddit user summed up the frustration, writing: “Adults deserve to choose the model that fits their workflow, context, and risk tolerance. Instead, we’re getting silent overrides, secret safety routers and a model picker that’s now basically UI theatre.”

Safety routing sparks further debate

The controversy centres on a process OpenAI calls "safety routing." Company representatives explained on social media that the mechanism is triggered when conversations turn to "sensitive and emotional topics." The routing applies on a per-message basis and is temporary, so the chatbot may revert to the user's selected model once the sensitive exchange has passed.

OpenAI has argued that this approach is necessary to protect vulnerable users. In a recent blog post, the company outlined its efforts to make ChatGPT more supportive for people who might be struggling with difficult emotions. The company has emphasised that the system is not intended to replace human care, but rather to ensure that AI handles sensitive issues with greater caution.

Despite these assurances, many subscribers remain unhappy. From their perspective, the feature is comparable to being forced to watch television with parental controls permanently switched on, even when no children are present. Users say the lack of transparency, combined with the absence of any opt-out function, makes the experience feel restrictive.

OpenAI balances safety and user choice

The situation highlights the tension between user autonomy and platform responsibility. On one hand, OpenAI has a duty to safeguard people who may rely on its tools during moments of stress or vulnerability. On the other hand, subscribers—particularly those paying for advanced features—expect consistency and control over their chosen AI model.

Currently, it is unclear whether OpenAI will provide users with the ability to manage or turn off safety routing themselves. What is clear is that the discussion is unlikely to fade quickly. With emotions running high on social platforms, the company may face further scrutiny in the coming weeks as it seeks to balance safety protections with user satisfaction.

For now, ChatGPT customers will have to adapt to the system as it stands, even if that means occasional and unwelcome shifts to more conservative AI responses.
