Labelling AI-Generated Content in India: Towards Responsible Digital Governance
- 25 Oct 2025
In News:
The Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandating the labelling and disclosure of AI-generated content on social media platforms. The move comes amid growing public concern over deepfakes and synthetic media, which have begun to challenge democratic discourse, individual privacy, and digital trust.
The Rise of AI-Generated Content and Deepfake Concerns
India is experiencing a rapid surge in the use of artificial intelligence (AI) for content creation across entertainment, advertising, and online communication. However, this technological boom has also led to the proliferation of deepfakes — hyper-realistic videos, images, and audio clips generated by AI that mimic real individuals or events.
The issue gained national prominence in 2023, when a manipulated video of a popular actor went viral, sparking outrage and prompting Prime Minister Narendra Modi to call deepfakes a new “crisis.” These incidents have exposed how synthetic content can be weaponised for political propaganda, misinformation, financial fraud, and reputational harm.
Key Provisions of the Draft Rules
The proposed amendments introduce a comprehensive framework to enhance transparency and accountability in digital content creation:
- Mandatory Self-Declaration: Users uploading content on platforms such as YouTube, Instagram, or X (formerly Twitter) must declare whether their material is AI-generated or synthetic.
- Dual Labelling Mechanism:
  - Embedded Label: AI-generated visuals and audio must carry a visible watermark or label covering at least 10% of the visual area or, for audio, 10% of the clip's duration.
  - Platform-Level Label: A visible disclaimer will appear wherever such content is displayed online.
- Platform Accountability: If users fail to disclose synthetic content, platforms must proactively detect and label it using AI-based detection tools. Non-compliance could lead to loss of safe harbour protection under Section 79 of the IT Act, making intermediaries legally liable for misinformation.
- Metadata Requirement: AI-generated material must include a permanent, traceable metadata identifier embedded at the time of creation to ensure accountability.
- Scope of Application: The rule extends beyond social media to AI content generation tools such as OpenAI's Sora and Google's Gemini, requiring built-in watermarking mechanisms.
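To make the metadata and labelling requirements concrete, the provisions above can be sketched in code. This is a minimal, illustrative Python sketch, not the draft's actual schema: the record fields (`synthetic`, `content_sha256`, `generator`, `created_at`), the function names, and the reading of the 10% threshold as pixel area are all assumptions introduced here for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content_bytes: bytes, tool_name: str) -> dict:
    """Build an illustrative provenance record for AI-generated content:
    a SHA-256 fingerprint of the bytes plus the generating tool and a
    creation timestamp (a stand-in for a 'traceable metadata identifier')."""
    return {
        "synthetic": True,  # the user's self-declaration flag
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "generator": tool_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum visible-label size in pixels, assuming the 10%-of-surface-area
    threshold is interpreted as 10% of the image's pixel area."""
    return (width_px * height_px) // 10

record = make_provenance_record(b"<generated image bytes>", "example-gen-model")
print(json.dumps(record, indent=2))
print(min_label_area(1920, 1080))  # 207360
```

In practice, such a record would be embedded in the file itself (for example, in image or video metadata) rather than printed, so that it travels with the content across platforms.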
Rationale and Policy Objectives
The policy aims to ensure that users in a democracy can distinguish between authentic and synthetic content. By mandating labelling and traceability, the government seeks to curb misinformation, protect democratic integrity, and uphold public trust in the digital ecosystem.
The ministry’s note emphasizes that AI-generated misinformation poses risks to national security, elections, and social stability, making proactive governance essential. Previously, such misuse was addressed under general impersonation and fraud provisions of the IT Act, 2000, but the evolving sophistication of generative AI tools now demands specific regulatory safeguards.
Global Context
India’s initiative aligns with global best practices.
- China (2025): Introduced mandatory AI labelling for deepfakes, voice synthesis, and chatbots with visible and hidden watermarks.
- European Union: The AI Act mandates that users be notified when they are interacting with AI systems.
- United States: Developing federal standards for content authenticity and AI watermarking.
By adopting a binding legal framework, India positions itself among the early regulators of generative AI, setting a precedent for responsible innovation.
Implementation Challenges and the Way Forward
While the proposal has been broadly welcomed, challenges persist. Detecting AI-generated content across diverse languages and formats requires sophisticated detection infrastructure. Excessive compliance burdens may also affect startups and smaller creators in India’s expanding $12 billion AI ecosystem.
The government has invited public and industry feedback until November 6, 2025, signaling openness to iterative policy design. Successful implementation will depend on multi-stakeholder cooperation, technological innovation, and digital literacy among users.
Conclusion
The proposed amendments mark a decisive shift in India’s digital governance—from reactive moderation to preventive transparency. By mandating AI content labelling, India aims to balance technological innovation with ethical responsibility, ensuring that the age of artificial intelligence strengthens rather than undermines truth, democracy, and public trust.