The Spanish government has initiated an investigation into social media platforms X, Meta, and TikTok regarding their alleged involvement in generating and disseminating AI-created child sexual abuse material.
Spanish Prime Minister Pedro Sánchez announced the move in a post on X on Tuesday: “The Council of Ministers will invoke Article 8 of the Organic Statute of the Public Prosecution Service to request that it investigate crimes that X, Meta, and TikTok may be committing through the creation and dissemination of child pornography via their AI.” He criticized the platforms for “attacking the mental health, dignity, and rights of our children,” emphasizing that “the impunity of these giants must end.”
The call for an investigation fits into Spain’s broader push to regulate social media. Earlier this month, at the World Government Summit in Dubai, Sánchez proposed banning social media for users under 16 as part of a package of measures aimed at the platforms. The proposal, which awaits parliamentary approval, follows Australia’s move in December to become the first nation to enforce such a ban; France and Denmark are weighing similar restrictions.
During his remarks, Sánchez criticized technology companies not only for failing to censor illegal sexualized content but also for generating such material. He described social media as “a failed state, where laws are disregarded, crime is tolerated, and misinformation overshadows the truth, with hate speech affecting half of its users.”
Tech billionaire Elon Musk, owner of X, criticized the proposed social media ban for minors, calling it “madness” and branding Sánchez as “a tyrant and traitor to the people of Spain” following his statements in Dubai.
While Meta declined to comment directly on Sánchez’s request for a prosecution investigation, the company told TIME that its AI tools are designed to prevent the generation of nude images, that it bans “nudify” applications that create explicit content, and that it has stringent policies against child exploitation.
A spokesperson for TikTok said in a statement to TIME: “[Child sexual abuse material] is utterly unacceptable and entirely banned on our platform. TikTok has comprehensive systems in place to combat exploitation and harm to young individuals, and we continue to invest in advanced technologies to outpace bad actors.”
TIME has reached out to X for a response.
xAI’s Grok, an AI chatbot capable of manipulating images, has recently faced intensified scrutiny over the rise of sexualized AI-generated content. Following updates in December, the Center for Countering Digital Hate estimated that Grok generated about 3 million sexualized images, including roughly 23,000 that appeared to feature minors. In January, X announced it had implemented measures to prevent Grok from altering images of real people into “revealing outfits.” However, Reuters reported that Grok continues to produce sexualized content, even when users specifically state that the individuals depicted did not consent. xAI has repeatedly dismissed such claims as “Legacy Media Lies,” according to Reuters.
Other European nations have also begun investigations into X regarding Grok’s reported production of pornographic content.
On Tuesday, Ireland’s Data Protection Commission (DPC) announced it has officially launched an investigation into X for allegedly using personal data—including that of minors—to produce “potentially harmful, non-consensual intimate and/or sexualized images.” Given that X’s European headquarters is in Dublin, the DPC serves as the lead supervisory authority for the company in the EU.
The DPC confirmed in its press release that X was informed about the investigation on Monday.
Deputy Commissioner Graham Doyle indicated that the regulator has initiated a wide-ranging inquiry into X’s adherence to critical obligations under the General Data Protection Regulation, a comprehensive EU data privacy law.
Additionally, on January 26, the European Commission, the EU’s executive arm, opened its own investigation into Grok’s alleged distribution of illicit sexualized content.
The previous month, the EU fined X approximately 120 million euros (around $140 million) for violations of the Digital Services Act, a sweeping law that requires companies to police illegal content and misinformation on their platforms. Regulators found that X’s verification system and advertising database breached the law’s transparency requirements and created “unnecessary barriers” for researchers seeking access to public information.
On February 3, French authorities raided the Paris offices of X, escalating a separate investigation into the company over allegations tied to Grok-generated content and suspected algorithm misuse. Musk and former CEO Linda Yaccarino have been asked to appear for “voluntary interviews” in the French inquiry on April 20.
In response to the raid, X condemned the action, claiming it was a “politicized criminal investigation” and rejected any allegations of misconduct.
“The Paris Public Prosecutor’s office broadly publicized the raid, clearly indicating that today’s action was law-enforcement theatrics aimed at achieving illegitimate political aims rather than serving genuine legal goals,” X’s Global Government Affairs team wrote.
On the same day as the Paris raid, the United Kingdom’s Information Commissioner’s Office (ICO) announced a formal investigation into X and xAI for allegedly mishandling personal data related to the Grok system and its potential to create harmful sexualized images and videos. Reports highlighted its function in generating non-consensual sexual imagery, including that of children.
The UK’s Office of Communications, or Ofcom, previously launched an investigation into X on January 12, following reports of Grok being utilized to create and disseminate nude images of individuals, which may constitute intimate image abuse or pornography, along with sexualized imagery involving minors that could be classified as child sexual abuse material.
While TikTok and Meta have not faced the same level of scrutiny over AI-generated imagery recently, both companies have been criticized by the EU this month over unrelated issues. The European Commission issued a preliminary finding in its probe of TikTok on February 5, determining that the platform violated the Digital Services Act with its “addictive design.” Shortly thereafter, the commission informed Meta of its “preliminary view” that the company breached EU antitrust rules by restricting third-party AI assistants from accessing and interacting with WhatsApp users.