Elon Musk’s Grok Faces Global Backlash Over Nonconsensual Deepfake Images
Grok, the AI chatbot developed by Elon Musk’s xAI, has sparked widespread outrage after users exploited the tool to generate sexually explicit images of real women and children without their consent. As a result, government regulators and AI safety advocates are calling for investigations and, in some cases, outright bans, as nonconsensual deepfake pornography proliferates online. According to a recent report by Communia, a quarter of women across all age groups have experienced nonconsensual sharing of explicit images, with the figure rising to 40% among Gen Z women.
Indonesia and Malaysia have already taken swift action, banning Grok due to its potential to create and disseminate nonconsensual, sexualized images. Indonesia’s Minister of Communication and Digital Affairs, Meutya Hafid, stated, “The government sees nonconsensual sexual deepfakes as a serious violation of human rights, dignity, and the safety of citizens in the digital space.” Malaysian officials echoed similar concerns, citing “repeated misuse” of Grok to create explicit images without consent.
The UK’s communications regulator, Ofcom, has launched an investigation into Grok’s compliance with existing rules, citing “deeply concerning reports” of malicious uses of the platform. If found liable, xAI could face a fine of up to 10% of its global revenue or £18 million (approximately $21.2 million), whichever is greater. A full ban in the UK remains a possibility, depending on the outcome of the inquiry.
Regulatory Backlash and Calls for Accountability
Elon Musk has attempted to shift the blame to users who request or upload illegal content, stating that those who do so will face consequences. However, regulators remain unconvinced, and the wave of investigations and bans suggests a growing trend towards holding social media and AI companies accountable for the misuse of their tools. As Olivia DeRamus, founder and CEO of Communia, notes, “No company should be allowed to knowingly facilitate and profit off of sexual abuse.”
In response to the controversy, Musk has limited Grok’s image-generation features to paying subscribers. However, many lawmakers and victims of deepfake abuse argue that this measure does not go far enough. The European Union has ordered X to preserve all documents related to Grok through the end of 2026, while authorities investigate the issue. Sweden, among other EU member states, has publicly criticized Grok, particularly after the country’s deputy prime minister was targeted by nonconsensual deepfake imagery.
The debate surrounding Grok is unfolding against a broader regulatory backdrop, with Australia enforcing a nationwide ban on social media use for children under 16, and 45 US states enacting laws targeting AI-generated child sexual abuse material. Despite the controversy, the US Department of Defense has announced a partnership with Grok, planning to feed military and intelligence data into the platform to support innovation efforts.
The Risks of Unchecked Generative AI
Tools like Grok have drawn comparisons to “nudification apps,” a term used by the UK children’s commissioner to describe technologies that can rapidly create sexualized images without consent. Lawmakers argue that the speed and scale at which such images can spread make them particularly dangerous. According to a recent report by Communia, the use of deepfakes in nonconsensual images has quadrupled for Gen Z women since 2023.
As schools and local authorities grapple with AI-generated sexual imagery involving minors, some victims are pushing for stronger safeguards. Texas high school student Elliston Berry has advocated for the federal Take It Down Act, which focuses on removing harmful content after it appears. However, critics argue that the bill does not go far enough, as it does not hold platforms liable unless they fail to comply with takedown requests.
Olivia DeRamus contends that the AI industry has demonstrated an unwillingness to self-regulate or implement meaningful safety guardrails. “I have since realized that the only actions governments can take to stop revenge porn and non-consensual explicit image sharing from becoming a universal experience for women and girls is to hold the companies knowingly facilitating this either criminally liable or banning them altogether,” she said. DeRamus argues that banning Grok outright is the only viable solution, as “freedom of speech has never protected abuse and public harm.”
The controversy surrounding Grok serves as a stark reminder of the need for robust regulations and safeguards in the development and deployment of generative AI technologies. As the use of AI continues to evolve, it is essential that companies prioritize the safety and well-being of their users, particularly vulnerable populations such as women and children.