Regulating Synthetic Media: India’s Amendments to the IT Rules, 2021
- 12 Feb 2026
In News:
The Union Government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, effective February 20, 2026. The amendments regulate AI-generated (synthetic) content and significantly compress takedown timelines for unlawful material. The reforms seek to address the growing challenges of non-consensual deepfakes and intimate imagery, and of AI-driven misinformation, while strengthening intermediary accountability under the IT Act, 2000.
Key Amendments
1. Sharp Reduction in Takedown Timelines
The amendments drastically compress content removal timelines:
- Court/Government-declared illegal content: 3 hours (earlier 24–36 hours)
- Non-consensual intimate imagery/deepfakes: 2 hours (earlier 24 hours)
- Other unlawful content: 3 hours (earlier 36 hours)
The government argues that the earlier timelines failed to prevent virality and that major platforms possess sufficient technological capacity for rapid moderation. Critics, however, highlight the operational difficulty of determining “illegality” within such narrow windows, raising concerns about defensive over-censorship.
2. Mandatory Labelling of AI-Generated Content
The Rules introduce a legal definition of “Synthetically Generated Information (SGI)”: audio, visual or audiovisual content that is artificially created or altered using computer resources in a manner that makes it appear real.
Key provisions include:
- AI-generated content must be labelled “prominently”.
- The earlier proposal mandating labels to occupy 10% of image space has been diluted.
- Platforms must require user disclosure of AI-generated content.
- Intermediaries must proactively deploy reasonable technical measures to prevent unlawful synthetic content.
Routine editing and good-faith quality enhancements are excluded from the definition, narrowing regulatory scope.
Safe Harbour and Intermediary Liability
Under Section 79 of the IT Act, 2000, intermediaries enjoy “safe harbour” protection from liability for user-generated content, provided they exercise due diligence. The amendments clarify that failure to act against unlawful synthetic content may amount to a breach of due diligence, potentially leading to loss of safe harbour protection. This significantly increases compliance pressure on digital platforms.
Administrative and Federal Dimensions
The amendments also permit States to appoint multiple authorised officers for issuing takedown directions, reversing earlier restrictions. This strengthens decentralised enforcement and enhances administrative responsiveness in populous states.
Trigger Events and Global Context
The urgency of regulation follows global controversies, including instances of AI platforms generating non-consensual intimate images. Such incidents raised concerns about privacy violations, gender dignity, misinformation and democratic integrity. India’s amendments thus align with broader international debates on AI governance and platform accountability.
Constitutional and Governance Concerns
The reforms operate at the intersection of competing constitutional values:
- Article 21 (Right to Privacy and Dignity): Faster removal of non-consensual deepfakes strengthens protection of personal dignity.
- Article 19(1)(a) (Freedom of Speech): Extremely short timelines may chill legitimate expression, as platforms could resort to precautionary takedowns.
Key challenges include determining illegality within hours, technological burden on smaller intermediaries, risks of over-removal, and the need for clarity in law enforcement communications.
Way Forward
To ensure balanced regulation:
- Develop clearer standards for determining illegality.
- Establish independent review or appellate mechanisms.
- Strengthen indigenous AI detection tools under national AI initiatives.
- Harmonise implementation with the Digital Personal Data Protection framework.
- Build capacity of state enforcement authorities.
Conclusion
India’s amended IT Rules mark a decisive shift toward proactive regulation of AI-driven digital harms. While the framework strengthens privacy and platform accountability, its long-term success depends on calibrated enforcement, institutional safeguards against overreach and technological readiness to balance innovation with constitutional freedoms.