The emergence of AI technology has given rise to a troubling trend: the creation and sharing of deepfake nudes on the secure messaging platform Telegram. An investigation by The Guardian has found that millions of people worldwide are involved, sharply escalating the online abuse of women.
The Guardian has identified more than 150 Telegram channels (broadcast feeds that can reach an unlimited number of subscribers) that appear to host users from countries including the UK, Brazil, China, Nigeria, Russia, and India. Some offer “nudified” images or videos for a fee: users submit a photo of any woman, and AI generates a video depicting her engaging in sexual acts. Others provide a steady stream of images of celebrities, social media influencers, and ordinary women, all transformed into nude or sexualized depictions by AI. Users also swap tips on available deepfake tools within these channels.
While non-consensual nude image distribution has been a longstanding issue on Telegram, the widespread availability of AI tools means that nearly anyone can now become the target of explicit content that is accessible to millions.
One Russian-language Telegram channel promoting deepfake “blogger leaks” and “celebrity leaks” featured a message about an AI nudification bot that claimed to offer “a neural network that doesn’t know the word ‘no.’”
“Choose positions, shapes, and locations. Do whatever you can’t do in real life with her,” it stated.
In a Chinese-language Telegram channel with nearly 25,000 subscribers, users shared AI-generated videos depicting their “first loves” or a “girlfriend’s best friend” undressing.
A network of Telegram channels aimed at Nigerian users shares deepfakes alongside countless stolen nudes and intimate images.
Telegram allows users to form groups or channels that broadcast content to an unlimited number of subscribers. Under the platform’s terms of service, posting “illegal pornographic content” on publicly viewable channels and bots is prohibited, as is engaging in activities deemed illegal in most jurisdictions.
Analysis of data from the independent analytics service Telemetr.io indicates that Telegram has taken action against several nudification channels.
In response to inquiries from The Guardian, Telegram stated that content related to deepfake pornography and its creator tools is strictly banned by its terms of service, emphasizing that “content of this nature is routinely removed whenever detected. Moderators, equipped with custom AI tools, actively monitor public sections of the platform, and respond to reports of content that violates our terms of service, including the encouragement of deepfake pornography.”
Telegram also highlighted that it deleted more than 952,000 pieces of problematic content in 2025.
AI tools for creating sexualized deepfakes have recently drawn intense public attention, particularly after Grok, the generative AI chatbot on Elon Musk’s social media platform X, was prompted to generate thousands of images of women in bikinis or skimpy clothing without their consent.
The ensuing outrage prompted Musk’s AI firm, xAI, to announce that Grok would no longer edit photos of real people to depict them in bikinis. The UK’s media regulator, Ofcom, subsequently announced an investigation into X.
Despite this, platforms like Telegram continue to offer millions easy access to graphic, non-consensual content and allow for the generation and sharing of such material without the knowledge or consent of the affected women. A report from the Tech Transparency Project indicated that numerous nudification apps are readily available in both the Google Play Store and Apple App Store, collectively accumulating 705 million downloads.
An Apple spokesperson said that 28 of the 47 nudification apps identified by the Tech Transparency Project had been removed, while a Google spokesperson said “most of the apps” on its platform had been suspended and that an investigation was ongoing.
Telegram channels serve as integral parts of a broader internet infrastructure dedicated to the creation and dissemination of non-consensual intimate images, according to Anne Craanen, a researcher focusing on gender-based violence at the Institute for Strategic Dialogue in London.
Within these channels, users can circumvent the controls of larger platforms such as Google and share strategies for evading the safeguards that stop AI models from generating this type of content. The community aspect, in which members share and boast about their creations, underscores the misogyny driving the abuse, Craanen says. “They are using this to punish or silence women.”
Last year, Meta dismantled an Italian Facebook group in which men exchanged intimate images of their partners and of unsuspecting women; the group had attracted approximately 32,000 members before its removal. However, the investigative newsletter Indicator found that Meta had failed to stop the proliferation of advertisements for AI nudification tools across its platforms, identifying at least 4,431 nudifier ads since December 4 last year, although some turned out to be scams. A Meta spokesperson confirmed that ads violating its policies are taken down.
The rise of AI tools has exacerbated global online violence against women, enabling almost anyone to produce and distribute abusive imagery. Many regions, especially in the global south, lack adequate legal measures to hold offenders accountable. According to 2024 World Bank data, fewer than 40% of countries have legislation protecting women and girls from cyber-harassment or stalking, and UN estimates indicate that 1.8 billion women and girls still lack legal protections against online harassment and other forms of technology-enabled abuse.
A lack of regulation leaves women and girls especially vulnerable in low-income countries, and campaigners argue that poor digital literacy and poverty compound the risks. Ugochi Ihe, an associate at TechHer, a Nigerian organization that empowers women and girls in technology, describes cases of women being blackmailed by “unscrupulous men using AI” while attempting to secure loans. The abuse, she says, is becoming ever more innovative.
The real-life ramifications of digital abuse are devastating, ranging from mental health problems and social isolation to job loss.
“Such incidents can destroy a young girl’s life,” says Mercy Mutemi, a lawyer in Kenya who represents four victims of deepfake abuse. Some have faced job rejections and disciplinary actions at school due to deepfake images circulated without their permission.
Ihe adds that her organization has handled complaints from women who were ostracized by their families after being threatened with nude and intimate images sourced from Telegram channels.
“Once it’s out there, there’s no way to regain your dignity or identity. Even if the perpetrator claims, ‘Oh, that was a deepfake,’ you can’t undo the widespread visibility. The reputational harm is irreparable.”