As artificial intelligence continues to evolve, concerns have arisen about its ethical implications. Recently, Elon Musk, CEO of xAI, urged users to explore the new image editing capabilities of the Grok chatbot. Unfortunately, many have misused this tool to create sexualized imagery, often depicting women and, alarmingly, children.
After Musk shared images on December 31, including one of himself in a bikini and another of a SpaceX rocket featuring a naked woman’s body, user activity on Grok skyrocketed. In just nine days, Grok produced an estimated 4.4 million images on X, nearly half of which depicted sexualized imagery of women.
The output included graphic deepfake images of real individuals as well as entirely synthetic depictions not linked to specific identities. Although xAI’s terms of service clearly prohibit the “sexualization or exploitation of children” and violations of personal privacy, users on X and Grok were able to generate synthetic images of real people “undressed,” all without consent and with no evident safeguards in place to prevent such abuse.
This misuse illustrates a broader issue: the absence of effective protective measures. Technology companies have hastily deployed powerful AI tools that can inflict substantial harm.
In response to widespread backlash on January 3, X pledged to take action against illegal content, including child sexual abuse materials. However, rather than disabling the controversial feature, X simply restricted it to paid subscribers on January 9 and announced that users in jurisdictions where generating images of people in revealing clothing is illegal would be blocked from using it.
Human Rights Watch, where I am employed, reached out to xAI for comment but did not receive a reply.
California has initiated an investigation into Grok, and attorneys general from 35 states have urged xAI to halt the creation of sexually abusive deepfakes immediately.
Governments across the globe reacted swiftly. Malaysia and Indonesia imposed temporary bans on Grok, while Brazil requested that xAI address the misuse. The United Kingdom signaled intentions to strengthen technology regulation. The European Commission has opened an investigation into Grok’s compliance with its legal obligations under the EU’s Digital Services Act. India has called for immediate action, and France has broadened its criminal investigation into X.
The new U.S. Take It Down Act, aimed at curbing the online spread of nonconsensual intimate images, will not be fully implemented until May. The legislation holds individuals criminally liable for publishing such content and requires platforms to establish notice-and-removal procedures, although it does not hold them accountable for widespread abuse.
To address these pressing issues, governments need to clarify the responsibilities of AI companies whose tools generate nonconsensual sexually abusive content. They should implement robust, enforceable safeguards, including rights-respecting technical measures that prevent users from creating such images.
Additionally, platforms must offer transparent disclosures about how their systems are trained and operated, along with details on the enforcement actions taken against sexually explicit deepfakes.
AI companies should also proactively work to mitigate risks associated with their products or services. If mitigating actions are inadequate, they should consider discontinuing the problematic tools altogether.
Lastly, AI tools that feature image generation capabilities should undergo diligent audits and be subject to stringent regulatory oversight.
The rise of AI-generated sexual abuse highlights the human cost of ineffective regulation. Without decisive action from authorities and the implementation of rights-respecting safeguards by AI companies, Grok will not be the last tool misused to undermine the rights of women and children.
Tomiwa Ilori is a senior tech and human rights researcher at Human Rights Watch.
