India is taking a proactive stance against the misuse of artificial intelligence by proposing new regulations that mandate the clear labeling of all AI-generated content. This initiative marks a significant effort by the government to combat the proliferation of deepfakes and synthetic media, which pose increasing risks to users, public discourse, and democratic processes.
The Ministry of Electronics and Information Technology (MeitY) has opened a public feedback period for proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The deadline for submitting feedback is November 6th. MeitY cited growing concerns over the misuse of advanced generative AI tools, noting that the rise of synthetic information can lead to user harm, misinformation, election manipulation, and identity impersonation.
Under the proposed framework, companies that develop AI generation tools will be required to embed persistent, visible watermarks or metadata identifiers in all synthetic content they produce. For images and videos, these labels must cover at least 10% of the display area; audio content must likewise carry an identifier during the first 10% of its playback duration.
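The draft sets the 10% floor but does not prescribe the label's placement, wording, or format. As a rough illustration, a full-width banner one-tenth of the image's height meets the area threshold exactly. The Python sketch below uses the Pillow imaging library; the function name, banner styling, and label text are illustrative assumptions, not requirements taken from the draft.

```python
from PIL import Image, ImageDraw

def add_visible_label(img: Image.Image, text: str = "AI-GENERATED") -> Image.Image:
    """Overlay a banner whose area is at least 10% of the image's display area.

    Illustrative only: the draft rules set the 10% threshold but do not
    prescribe banner placement, colour, or wording.
    """
    w, h = img.size
    banner_h = -(-h // 10)  # ceiling of h/10, so banner area >= 10% of w*h
    labelled = img.copy()
    draw = ImageDraw.Draw(labelled)
    draw.rectangle([(0, h - banner_h), (w, h)], fill=(0, 0, 0))
    draw.text((10, h - banner_h + banner_h // 4), text, fill=(255, 255, 255))
    return labelled

if __name__ == "__main__":
    src = Image.new("RGB", (640, 360), "grey")  # stand-in for a generated image
    add_visible_label(src).save("labelled.png")
```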
The draft regulations also introduce a formal definition for “synthetically generated information,” describing it as content that is “artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.” This definition is intended to bring AI-generated material under the existing due diligence and takedown obligations applicable to unlawful online content.
The rules would also prohibit platforms from allowing users to remove or obscure these identifiers, preserving the traceability of AI-generated material. Major social media intermediaries will be required to prompt users to declare whether their uploaded content is synthetically generated and to implement automated systems to verify these declarations. All confirmed or declared synthetic content must then carry clear labels or notices, helping users differentiate between genuine and manipulated media.
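The draft does not specify how a platform's automated verification should work or which provenance standard it should read. A minimal sketch of the declare-and-verify flow, assuming a hypothetical metadata key as the machine-readable marker, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool  # the user's declaration at upload time
    metadata: dict            # provenance metadata extracted from the file

# Hypothetical marker key; the draft does not name a specific metadata standard.
PROVENANCE_KEY = "synthetic_content_id"

def moderate(upload: Upload) -> str:
    """Combine the user's declaration with an automated metadata check.

    The actions returned here are illustrative, not taken from the draft rules.
    """
    detected = PROVENANCE_KEY in upload.metadata
    if upload.declared_synthetic or detected:
        return "attach-synthetic-label"  # declared or detected: must be labelled
    return "publish-unlabelled"          # no declaration and no marker found

# The automated check catches an undeclared upload carrying a provenance marker.
print(moderate(Upload("v1", declared_synthetic=False,
                      metadata={"synthetic_content_id": "gen-123"})))
# -> attach-synthetic-label
```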
Failure to comply could cost platforms their “safe harbour” protections under Section 79 of the IT Act, 2000, exposing them to regulatory penalties. The amendments would not withdraw safe harbour itself; rather, they clarify that platforms retain these protections even when removing synthetic content through their grievance redressal mechanisms.
MeitY stated that the new rules aim to foster user awareness, improve traceability, and ensure accountability, all while maintaining a supportive environment for innovation in AI technologies. Importantly, these rules are intended to apply only to publicly accessible content, not private or unpublished material.
While the regulations specify labeling requirements for visual and audio content, they leave ambiguity around AI-generated text, such as chatbot output. The draft does not detail how text-based content should be labeled, a gap other jurisdictions share. The EU’s AI Act, whose transparency obligations take effect in 2026, also mandates labeling of synthetic text but offers little guidance on implementation. China’s approach, by contrast, is more prescriptive, requiring both visible labels and embedded metadata for AI-generated text.
Experts such as Dhruv Garg, partner at the Indian Governance & Policy Project, noted that India is choosing to regulate AI platforms as intermediaries, extending safe harbour protections to them. He emphasized that the regulations must strike a balance among transparency, scalability, innovation, and creative expression.
The ministry’s move follows parliamentary discussions and previous advisories urging social media platforms to address the harms associated with deepfakes.