Prominent AI labels, faster takedowns: IT Ministry notifies new social media rules
- 03:24 PM, Feb 11, 2026
- Myind Staff
India’s Information Technology Ministry on Tuesday notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing new compliance requirements for social media platforms related to AI-generated and unlawful content.
The newly notified framework is being seen as a mixed development for social media companies. On one hand, the government has diluted its earlier proposal for mandatory AI content labelling; on the other, it has introduced much stricter timelines for platforms to take down unlawful content, cutting the response window from 36 hours to just three.
Under the amendments, the government has dropped the proposal, made in October, that labels on AI-generated content cover at least 10% of the display area. The final notified rule instead states that such labels must be displayed “prominently,” without any fixed-size requirement.
According to a senior government official, the change was made after consultations with tech companies, which flagged that a 10% label requirement would reduce the space available for the actual content and make it less appealing to viewers. The proposal had also faced pushback from major tech platforms.
The changes are set to come into effect on February 20, which is also the final day of the upcoming India-AI Impact Summit.
The debate over AI-generated content has intensified globally amid increasing misuse. The risks of such content were highlighted earlier this year when Grok, the AI service on X (formerly Twitter), began generating images of women in revealing clothing in response to user prompts. The episode raised serious concerns over dignity and privacy and drew criticism from governments around the world, including India’s. After the controversy, and after Grok was banned in some countries, X reportedly modified its filters to prevent the creation of such images.
Along with the AI-related rules, the amendments also tighten takedown timelines for a broad range of content considered unlawful under existing law. Social media platforms must now remove non-consensual intimate imagery within two hours, a sharp reduction from the earlier 24-hour requirement.
For other categories of unlawful content, intermediaries are now required to act within three hours, compared to the earlier 36-hour window.
These compressed timelines are expected to draw strong pushback from large technology firms, which may argue that such strict requirements create a heavy compliance burden. Platforms that fail to remove content within the newly prescribed limits could lose their “safe harbour” protection, the legal immunity that shields intermediaries from liability for user-generated content hosted on their platforms.
However, the government defended the shorter timelines, saying that the earlier limits were too long and allowed unlawful content to go viral.
A government official said, “Tech companies have an obligation now to remove unlawful content much more quickly than before. They certainly have the technical means to do so.”
Legal experts have raised concerns that such quick deadlines may not be practical. Rahil Chatterjee, Principal Associate at Ikigai Law, said, “The amendments compress takedown timelines from 36 hours to just three hours, and this applies across all categories of content, not only synthetic or AI-generated material. In reality, there is often no clear or immediate test for illegality, and even law-enforcement communications do not always spell this out unambiguously. Requiring platforms to take definitive action within such a short window will be extremely difficult to operationalise and creates a real risk of over-censorship.”
The notified rules also refine the definition of synthetically generated information (SGI). The definition now includes carveouts for assistive and quality-enhancing uses of AI. This means that routine and good-faith editing of audio, video, or audio-visual content will not be considered SGI under the amended rules.
Under the updated framework, if an intermediary becomes aware that its platform is being used to create, host, or share SGI, it must take “appropriate” and “expeditious” action. This may include immediately disabling access to such information, removing content, or suspending or terminating user accounts.
Further, if an intermediary provides tools or services that allow users to create, modify, or share SGI, it must implement “reasonable” and “appropriate” technical measures. These measures are aimed at preventing SGI that violates existing laws or leads to misrepresentation of real-world events or a person’s identity.
The amendments also make big tech platforms responsible for ensuring that users declare when information is synthetically generated. Platforms must deploy technical measures to verify the accuracy of such declarations and, once verified, must ensure that the content is clearly and prominently displayed with an appropriate label or notice.
Overall, while the government has relaxed the earlier fixed-space labelling proposal, the rules introduce stricter takedown timelines and stronger compliance obligations for intermediaries to control unlawful and misleading synthetic content.
