India’s Ministry of Electronics and Information Technology (MeitY) has introduced a new set of draft regulations specifically targeting “synthetically generated information,” commonly known as deepfakes. These proposed changes to the IT Rules, 2021, aim to officially bring AI-created content under the government’s legal framework. The new document not only provides a clear definition of synthetic information but also places greater responsibility on social media platforms to visibly label and highlight AI-generated content for users.
Mandatory Labeling for AI-Generated Content
In a document titled “Draft amendments to IT Rules, 2021 relating to Synthetically generated information,” MeitY has outlined several new regulations and a precise definition of synthetic content: “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.”
This broad definition covers digitally created content, whether AI-generated or not, across formats including text, images, video, and audio. The core concern, however, remains deepfakes. MeitY emphasized the urgency of these rules, stating that deepfake audio and video, along with other deceptive content on social media, pose significant risks: harming reputations, improperly influencing elections, and enabling financial fraud.
Under the proposed regulations, any platform or service that offers tools for creating or modifying synthetic content will be obligated to apply clear labels identifying it as such. For visual media such as images and videos, a visible identifier must cover at least 10 percent of the display area; for audio, an audible declaration must run for at least 10 percent of the recording’s total length.
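As a rough illustration only, and not part of the draft text, the 10 percent thresholds reduce to simple arithmetic that a platform could apply when sizing a label. The sketch below uses Python with hypothetical helper names; how “display area” should be measured is an assumption here, not something the draft spells out.

```python
# Illustrative sketch only: hypothetical helpers for the draft's 10 percent thresholds.
# Function names and the interpretation of "display area" are assumptions.

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum label area in pixels: at least 10% of the visual's display area."""
    return int(0.10 * width_px * height_px)

def min_audio_declaration_seconds(total_seconds: float) -> float:
    """Minimum audible declaration length: at least 10% of the audio's duration."""
    return 0.10 * total_seconds

if __name__ == "__main__":
    # A 1920x1080 frame would need an identifier covering at least 207,360 px^2,
    # e.g. a full-width banner 108 pixels tall.
    print(min_label_area(1920, 1080))         # 207360
    # A 60-second clip would need at least 6 seconds of audible declaration.
    print(min_audio_declaration_seconds(60))  # 6.0
```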
Furthermore, these platforms must embed permanent, unremovable metadata into the content, ensuring its origin is always traceable.
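To give a sense of what embedding provenance metadata involves, here is a minimal sketch assuming the Pillow imaging library. It simply writes text chunks into a PNG, which is easy to strip; the “permanent, unremovable” metadata the draft envisions would in practice lean on stronger mechanisms such as cryptographically signed content credentials or watermarking. The field names are invented for illustration.

```python
# Minimal sketch, assuming Pillow (PIL) is installed. PNG text chunks are used
# purely to illustrate attaching provenance data; they are NOT tamper-proof and
# do not satisfy a "permanent, unremovable" requirement on their own.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image while embedding simple provenance metadata (illustrative only)."""
    meta = PngInfo()
    meta.add_text("synthetic", "true")    # hypothetical field names
    meta.add_text("generator", generator)
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    """Read back the PNG text chunks written above."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}))
```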
Beyond AI developers, the rules also place greater accountability on social media platforms that host or distribute AI-generated content. Companies such as Facebook, Instagram, Reddit, and X (formerly Twitter) will need to obtain declarations from users uploading synthetic content and deploy technical measures to verify those declarations. If content is detected as AI-generated but has not been declared by the user, the platform must apply an appropriate label itself.
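The labelling decision the draft describes can be summarised as: trust the uploader’s declaration, otherwise fall back to automated detection and label on the user’s behalf. The sketch below is a hypothetical simplification of that flow; the data structure, detector interface, and threshold are assumptions, not anything specified in the draft.

```python
# Hypothetical sketch of the labelling decision described for hosting platforms.
# The Upload fields, detector score, and threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Upload:
    user_declared_synthetic: bool
    detector_score: float  # 0.0-1.0 output of an assumed AI-content classifier

def requires_synthetic_label(upload: Upload, threshold: float = 0.9) -> bool:
    """Label if the uploader declared the content synthetic, or a detector flags it."""
    if upload.user_declared_synthetic:
        return True
    return upload.detector_score >= threshold

# Example: an undeclared upload that the classifier flags still gets labelled.
print(requires_synthetic_label(Upload(False, 0.95)))  # True
print(requires_synthetic_label(Upload(False, 0.20)))  # False
```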
According to the draft, any intermediary that knowingly permits, promotes, or fails to act against the circulation of undeclared or unlabeled synthetic content will be deemed to have failed in its due diligence obligations.
These draft rules are currently open for public and stakeholder consultation. They are not yet in force and will take effect only once MeitY finalizes and notifies the amendments after the consultation period.