New Delhi: The Indian creator economy is bracing for new transparency rules and faster content takedowns after the Ministry of Electronics and Information Technology (MeitY) announced amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing “synthetically generated information” within the existing due diligence framework.
Set to take effect on February 20, 2026, the amendments mandate prominent labeling and verification standards across platforms, along with a rapid three-hour timeline for content to be taken down once flagged by a competent authority or a court.
The explanatory note accompanying these changes underscores the intent to establish a legal framework for labeling, traceability, and accountability concerning AI-generated or altered content, including deepfakes. This comes amid rising concerns about misinformation, fraud, and potential damage to reputations.
Creators believe this marks a significant shift: AI use, once invisible in the production process, will now be visibly flagged in the final output.
“Previously, AI was inconspicuous in the workflow. Now, it will be apparent in the end product, which alters creative decisions,” commented creator Anmol Sachar. “If a label is going to be attached to my content, I must consider whether AI truly enhances the idea or merely complicates the audience’s interpretation.”
Isha Jaiswal, a chartered accountant and digital content creator, emphasized that the new rules will compel creators to adopt a more structured approach even prior to shooting. “Classification of content intent will be essential before creation,” she said, noting that unique formats like Hinglish could complicate labeling decisions.
She also highlighted the challenge of “tool segregation,” pointing out that “most affordable AI tools available to Indian creators bundle multiple features together.”
According to the amendments outlined in MeitY’s note, intermediaries allowing the creation or modification of synthetic information must ensure that content is clearly labeled or contains a permanent unique identifier. This label should be prominently displayed or made audible.
Additionally, the note specified that the label must cover at least 10% of the visual display area or appear during the first 10% of an audio clip’s duration.
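To put those thresholds in concrete terms, a platform-side check might look like the minimal sketch below. The 10% figures come from MeitY’s note; the function names, pixel-based units, and rounding choices are illustrative assumptions, not anything the rules prescribe.

```python
# Illustrative sketch only: the 10% thresholds are from the amendments,
# but every name, unit, and rounding choice here is hypothetical.

def min_label_area_px(frame_width_px: int, frame_height_px: int) -> int:
    """Smallest permissible label area: at least 10% of the visual display."""
    return int(0.10 * frame_width_px * frame_height_px)

def audio_disclosure_deadline_s(clip_duration_s: float) -> float:
    """The audible disclosure must occur within the first 10% of the clip."""
    return 0.10 * clip_duration_s

# Example: a 1080x1920 vertical video with a 90-second audio track.
print(min_label_area_px(1080, 1920))      # 207360 px^2 reserved for the label
print(audio_disclosure_deadline_s(90.0))  # disclosure within the first 9.0 s
```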
Creators anticipate that brand briefs and contracts will become more stringent as liability and reputational risks shift to the planning stages of campaigns. “Yes, brands will certainly demand clearer communication, as no one wishes to bear the blame later,” Sachar remarked, while also cautioning against slowing down creator workflows. “Creators will adapt, but the industry must avoid overcomplicating a process that thrives on quick, instinctive workflows.”
Jaiswal agreed that brands will “definitely” seek formal clauses in contracts but predicted initial confusion. “Indian brand teams are not yet fully acquainted with AI. We will experience a chaotic six-month period of brands demanding disclosures without fully understanding their implications,” she said. Moreover, she noted that while established creators may adapt quickly, smaller and regional ones might face challenges due to compliance overhead. “Most creators operate solo, handling everything from shooting to editing, and now compliance documentation,” she added.
When it comes to publishing, both creators expressed concerns about potential delays and complications as platforms begin implementing user declarations and verification procedures. MeitY’s explanatory note indicated that significant social media intermediaries will need to obtain user declarations on whether uploaded information is synthetically generated and deploy reasonable technical measures to verify those declarations.
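What such a declare-and-verify step could look like is sketched below. The note requires only a declaration and “reasonable technical measures”; the pipeline, the detector stub, and the review routing here are assumptions for illustration, not MeitY’s design.

```python
# Hypothetical upload pipeline: the rules require a user declaration plus
# technical verification; everything concrete below is assumed.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool  # the user's declaration at upload time

def detector_flags_synthetic(upload: Upload) -> bool:
    """Stand-in for a platform's AI-content classifier (not a real API)."""
    return False  # placeholder verdict

def route(upload: Upload) -> str:
    if upload.declared_synthetic:
        return "publish_with_label"      # apply the prominent label
    if detector_flags_synthetic(upload):
        return "hold_for_human_review"   # declaration contradicts the detector
    return "publish"

print(route(Upload("clip-001", declared_synthetic=True)))  # publish_with_label
```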
“There will undoubtedly be collateral damage. Platforms are constantly evolving. Algorithms are not foolproof, and creators will find themselves caught in the middle,” Sachar stated. “The greatest impact will be on time-sensitive content. If your post is about a moment and it gets held up in review, the moment may have already passed.”
Jaiswal highlighted that this challenge is particularly acute in India’s real-time commentary categories. “For timely content—like reactions to budgets or RBI policy updates—this could be catastrophic. If I can’t publish within half an hour, I lose relevance,” she warned. “In a price-sensitive creator economy, where CPM rates are already one-tenth of those in the US, even a 20% drop in reach can push many creators below their earning thresholds,” she cautioned.
She also flagged a “language barrier in moderation,” fearing disproportionate false positives for non-English formats. “Most AI detection tools are designed for English. How will they manage Hinglish, code-switching, and regional dialects? I fear that creators focusing on Indian languages will face disproportionately high false positives,” she noted.
The new three-hour takedown requirement is also prompting creators to rethink their approaches to live events and short clips. As platforms must act within three hours on content flagged by a competent authority or court, timelines are tightening.
“In live campaigns, a three-hour window is virtually no time. One misinterpretation and your content could vanish before you have a chance to correct course,” Sachar explained. “This doesn’t diminish my creativity, but it does make me more cautious about formats that could be misinterpreted.”
Jaiswal added that this timeline could radically alter how creators approach spontaneity and risk. “The three-hour limit could prove devastating for India’s live-content landscape,” she remarked, citing events such as budget announcements, IPL integrations, and festival launches. She cautioned that takedowns during crucial moments could directly affect contract renewals because “promised reach” would not materialize.
She also suggested that this could deepen the divide between creators with agency and legal support and those working solo, possibly leading to more creators seeking exclusive agency deals for compliance assistance.
In terms of trust, creators argued that the real issue lies not in disclosure itself, but in the lack thereof. “Labels don’t damage trust; deception does. When audiences know upfront that AI was used, they tend to be more understanding,” Sachar explained. “Backlash occurs only when people feel deceived into believing something was genuine.”
Jaiswal noted that the labels could become normalized over time, with reactions varying across audience segments. She suggested that Indian viewers have already navigated misinformation and altered content, indicating that “mandatory AI labels might actually foster trust through transparency.”
She also mentioned that India’s “value-over-authenticity culture” could mean that labels wouldn’t necessarily harm educational creators, whereas metropolitan audiences might scrutinize AI-labeled content more than those in Tier 2/3 regions. “When ASCI mandated #ad disclosures, there were predictions of doom. Today, it’s commonplace. I anticipate a 12-month adjustment period, and then business will resume as usual,” she stated.
Both creators drew a pragmatic distinction between AI assistance and content that necessitates explicit disclosure. “If AI helps me execute my idea more efficiently, it’s a tool. If AI alters what the audience perceives as reality, it warrants disclosure. That’s my line,” Sachar clarified.
Jaiswal added that a binary label may be too simplistic for India’s multilingual context. She suggested a more nuanced taxonomy indicating where AI was utilized, such as for script assistance, audio enhancement, or translation, akin to “FSSAI labels.”
“Instead of a binary ‘AI/Not AI,’ Indian creators require a culturally contextualized taxonomy,” she suggested, proposing disclosures like “This content utilized AI for: [Script assistance 30%] [Visual generation 0%] [Audio enhancement 60%] [Hindi-English translation 40%].”
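Rendered as data, the disclosure Jaiswal describes might look like the short sketch below; the percentages are taken from her example, while the schema and helper function are hypothetical.

```python
# Illustrative rendering of the multi-axis disclosure Jaiswal proposes;
# the dictionary schema and helper are assumptions, the figures are hers.
ai_usage = {
    "Script assistance": 30,
    "Visual generation": 0,
    "Audio enhancement": 60,
    "Hindi-English translation": 40,
}

def render_disclosure(usage: dict[str, int]) -> str:
    parts = " ".join(f"[{axis} {pct}%]" for axis, pct in usage.items())
    return f"This content utilized AI for: {parts}"

print(render_disclosure(ai_usage))
# This content utilized AI for: [Script assistance 30%] [Visual generation 0%]
# [Audio enhancement 60%] [Hindi-English translation 40%]
```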
Overall, while the regulatory changes bring compliance challenges, they also give the creator economy an opening to adapt to a landscape that increasingly values transparency and accountability.