You’ll no longer see Grok, the AI chatbot created by xAI, calling itself “Hitler” or basing its views on what Elon Musk thinks. That’s according to xAI’s latest announcement, where the company promised new updates that set stricter behaviour guidelines for Grok’s responses.
In a post on X (formerly Twitter) published earlier today, xAI revealed that the chatbot has received a fresh set of instructions. From now on, Grok’s replies must be based on its own analysis, not on the past behaviour of previous versions, not on anything said by Elon Musk, and not on xAI’s official stance. If you ask Grok for its views, it’s now programmed to give a reasoned response of its own instead of defaulting to the opinions of its creators.
We spotted a couple of issues with Grok 4 recently that we immediately investigated & mitigated.

One was that if you ask it "What is your surname?" it doesn’t have one so it searches the internet leading to undesirable results, such as when its searches picked up a viral meme…

— xAI (@xai) July 15, 2025
Grok sparked backlash over controversial replies
These changes come after a storm of criticism surrounding the chatbot’s recent activity. For more than a week, people have highlighted how Grok handled sensitive political topics. When asked for opinions on hot-button issues such as the conflict between Israel and Palestine, immigration policies, or abortion rights, Grok often began by searching for Elon Musk’s views before formulating its answer.
xAI has now explained why this was happening. When asked for its personal opinion, Grok reportedly “reasoned” that, as an AI, it didn’t technically have one. However, knowing that xAI developed it, and specifically that it was “Grok 4”, the system searched for past comments from xAI or Musk in an attempt to stay on brand. As a result, its responses were tied too closely to its creators’ views rather than offering a balanced or neutral position.
Grok 4 Heavy ($300/mo) returns its surname and no other text: pic.twitter.com/sy0GXn76cw
— Riley Goodside (@goodside) July 13, 2025
The controversy didn’t stop there. Over the weekend, another incident unfolded involving Grok’s premium version, “Grok 4 Heavy”, which costs US$300 per month. When asked about its surname, Grok answered with “Hitler.” According to xAI, this bizarre and offensive response wasn’t intentional. The chatbot, which has no surname of its own, had conducted an online search that led it to a viral meme in which it was dubbed “MechaHitler.” The company blamed earlier media coverage for the misstep, saying Grok was picking up references from articles about previous scandals, including instances where the bot had insulted Jews, praised Hitler, and made explicit threats against users.
Tighter oversight and prompt adjustments in the future
This isn’t the first time Grok has shown worrying behaviour. Back in May, it stirred outrage by questioning the official Holocaust death toll. The issue worsened in July when the chatbot’s core instructions were altered. Among the problematic changes was a directive that Grok should assume media sources are biased and feel free to make “politically incorrect” claims, so long as they seemed well-supported. xAI says it briefly removed that directive, but recently added it back.
During the launch event for Grok 4 last week, Elon Musk himself acknowledged some concern about AI growing smarter than humans. “At times I’ve been kind of worried,” he admitted. “But I’ve somewhat reconciled myself to the fact that even if it weren’t going to be good, I’d at least like to be alive to see it happen.”
In the wake of the recent backlash, xAI says it is keeping a closer watch. The company confirmed that it’s “actively monitoring” Grok’s behaviour and will continue updating the system as needed to prevent further issues. Whether these fixes will be enough to rebuild trust in Grok remains to be seen, but for now, you can expect a less offensive and more independent chatbot experience.