Indian government withdraws advisory mandating approval for new AI products
- In Reports
- 12:07 PM, Mar 16, 2024
- Myind Staff
The Ministry of Electronics and Information Technology (MeitY), in a new advisory issued on March 15, has scrapped the requirement to obtain government approval before deploying “under-tested” or “unreliable” AI models and tools in the country. The decision comes in response to widespread backlash from the industry.
The new advisory issued on Friday supersedes the two-page note issued on March 1 on the due diligence to be carried out by intermediaries and platforms under the Information Technology Act, 2000 and Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Under the revised advisory, intermediaries are no longer required to submit an action-taken-cum-status report. However, they are still required to comply with the directive immediately.
The obligations in the revised advisory remain the same, but the language has been toned down.
Unlike the provision in the March 1 version, which required platforms to obtain the government's "explicit permission" before deploying AI models, the new advisory states that under-tested and unreliable AI models should only be made available in India after they are labelled. This labelling is intended to inform users about the "possible inherent fallibility or unreliability of the output generated."
Additionally, the advisory stated that AI models should not be utilised to disseminate content that contravenes any Indian law. Intermediaries are mandated to ensure that their AI models and algorithms do not facilitate bias, discrimination, or jeopardise the integrity of the electoral process.
The intermediaries have also been advised to use a “consent popup” or similar mechanism to “explicitly inform users about the unreliability of the output”.
The new advisory maintains MeitY’s focus on ensuring the easy identification of all deepfakes and misinformation. Consequently, intermediaries are advised to either label the content or embed it with a "unique metadata or identifier". Such content may be in audio, visual, text, or audio-visual form. The government intends for the content to be identifiable "in such a manner that such information may be used potentially as misinformation or deepfake", although it has not provided a specific definition of "deepfake".
MeitY also requires that this label, metadata, or unique identifier clearly identifies content as artificially generated/modified/created. Additionally, it should indicate that the intermediary's computer resource has been utilised to make such modifications.
“Further, in case any changes are made by a user, the metadata should be so configured to enable identification of such user or computer resource that has effected such change,” the revised advisory said.
The advisory no longer bears language related to “first originator”.
The communication has been issued to eight significant social media intermediaries, the same ones that received the deepfakes advisory in December 2023 and the subsequently retracted advisory from March 1. They are: Facebook, Instagram, WhatsApp, Google/YouTube (for Gemini), Twitter, Snapchat, Microsoft/LinkedIn (for OpenAI), and ShareChat.
The March 1 advisory faced significant criticism, with many startup founders condemning it as a misguided decision. Aravind Srinivas, the CEO of Perplexity, labelled it as a "bad move by India" in a post on X.
To be sure, a “platform” is a term that has neither been used nor defined in either the IT Act or the IT Rules, 2021. While the updated advisory aims to establish guidelines around large language models and AI models employed by major social media platforms, it's important to note that the models themselves are not considered intermediaries or significant social media intermediaries. The latter refers specifically to social media companies with over 5 million users in India.
Image source: Moneycontrol