India orders Musk’s X to address complaints over Grok’s ‘obscene’ AI content
India orders X to fix Grok AI after complaints over obscene content, warning of legal action and potential loss of safe harbour protection.
India has directed Elon Musk’s social media platform X to urgently fix its Grok artificial intelligence chatbot after it was accused of generating “obscene” and sexualised material. The government move follows rising concern over AI-altered images created with the tool, particularly those involving women and minors, which users and lawmakers said risked violating Indian laws on explicit content.
The Ministry of Electronics and Information Technology issued a formal order requiring X to implement technical and procedural safeguards to prevent Grok from producing content that involves “nudity, sexualisation, sexually explicit, or otherwise unlawful” material. The platform has been asked to submit a detailed report within 72 hours, explaining the actions it has taken and how it plans to prevent the creation and sharing of such content in the future.
Authorities warned that failure to comply could affect X’s “safe harbour” protections under Indian law, which shield platforms from legal liability for user-generated material if they meet compliance standards. According to people familiar with the order, the government has made clear that continued violations could expose the company and its executives to legal consequences.
The issue has gained political attention in India after users posted examples showing Grok being prompted to alter images of women to make them appear as if they were wearing bikinis. Lawmakers also raised alarm following separate reports that the chatbot had generated sexualised images involving minors, an incident X acknowledged was the result of a lapse in its safeguards before the content was removed. However, other altered images remained visible on the platform when the order was reviewed.
Government pushes platforms to tighten controls
The latest directive comes shortly after the Indian IT ministry issued a broader advisory to major digital platforms, reminding them that their protection from liability depends on their compliance with national laws. The advisory stressed that hosting or enabling “obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited” content would not be tolerated and urged platforms to strengthen internal checks.
In its most recent communication to X, the government stated that any failure to meet these obligations “shall be viewed seriously and may result in strict legal consequences against your platform, its responsible officers and the users on the platform who violate the law, without any further notice.” Officials have said the intention is not merely to signal concern but to enforce direct responsibility when AI-generated material crosses legal lines.
India is one of the largest digital markets in the world and has become a key testing ground for how far governments will go to regulate emerging AI technologies. Policymakers in the country argue that as AI becomes more deeply integrated into public communication, platforms must anticipate risks rather than respond after material spreads.
At the same time, the Indian government has been steadily tightening its digital rules in recent years, arguing that stronger accountability is essential to protect citizens, especially women and children, from online harm. The order to X reflects a growing view among regulators that AI-generated material should be treated with the same seriousness as any other harmful online content.
Tension grows between X and Indian regulators
The directive arrives at a sensitive time for Musk’s platform, which is already facing legal challenges related to India’s content regulation framework. X has challenged the scope of federal takedown powers in court, arguing there is a risk of overreach, though it has still complied with most content-blocking orders. The latest dispute adds a new layer to that tension, as it extends beyond posts and into the behaviour of generative AI tools embedded in the platform.
Grok, developed by Musk’s AI company xAI, has been promoted as a chatbot that can support real-time conversation, commentary, and fact-checking on X. Because of its integration into a major social media platform, its output is far more visible than many stand-alone AI systems, increasing both its influence and the level of scrutiny it faces. With the tool now implicated in content deemed inappropriate or illegal under Indian law, the stakes surrounding its regulation have risen sharply.
The Indian government has indicated that non-compliance could expose X not only to penalties under IT regulations but also to possible criminal action under national law. This places fresh pressure on the platform’s leadership to demonstrate that its preventive systems are effective rather than reactive. It also raises broader questions for all technology firms operating AI tools in India, which may now face tougher expectations as regulators closely monitor developments.
As of publication, X and xAI had not issued public statements responding to the Indian government’s order. Observers say how the company reacts may help shape how India and other governments approach AI regulation, particularly as more platforms introduce AI-driven tools that can instantly create or alter images and text.
Impact on the wider technology industry
Analysts see India’s decision to assert stricter oversight on AI-generated content as a signal to the wider global technology sector. Companies deploying generative AI tools face increasing pressure from regulators to ensure systems cannot be easily misused, especially in sensitive areas involving sexual content, minors, or manipulated imagery.
Industry experts note that while many platforms have content filters and safeguards, the speed and scale of generative AI make enforcement challenging. The situation in India highlights how governments may soon demand stronger technical controls, clearer accountability frameworks, and faster reporting requirements.
For India, the move also demonstrates its determination to reinforce digital safety standards as AI becomes more integrated into everyday online activity. With millions of users engaging on X and similar platforms, authorities say the responsibility for ensuring a safe digital environment remains shared between regulators and technology firms.
The coming weeks are expected to show whether X introduces major changes to Grok, how it responds to the compliance deadline, and whether this case becomes a turning point in how AI content moderation is governed in one of the world’s most important technology markets.