In a recent and noteworthy development, the Ministry of Electronics and Information Technology (MeitY) revoked the requirement for platforms to seek government approval for AI models under development. This reverses the earlier rule, which mandated that platforms seek government approval before deploying AI models that were still under development.
No Need to Seek Government Approval for AI Models
The latest advisory, issued on March 15, 2024, emphasises the labelling of AI-generated content, particularly content susceptible to misuse through deepfake technology.
The previous advisory, issued on March 1, was severely criticised by many start-up founders, who called it a bad move.
The Revised Advisory
Below are the key points of the revised advisory:
- Intermediaries are no longer required to submit an action-taken-cum-status report, but they must still comply with the advisory with immediate effect. Notably, while the language has changed, the obligations under the new advisory remain the same.
- The advisory reiterates concerns over the negligence of intermediaries and platforms in carrying out the due diligence obligations mandated by the existing IT Rules.
- Intermediaries and platforms are directed to ensure that AI-generated content is labelled, especially content that could be misused for deepfake manipulation.
- Platforms have been directed to deploy AI models in a manner that prevents users from posting or sharing unlawful content.
- The advisory retains MeitY's emphasis on the identification of all deepfakes and misinformation. Intermediaries are therefore advised either to label such content or to embed a "unique metadata or identifier" in it, whether the content is audio, visual, text, or audio-visual (see the illustrative sketch after this list).
- The ministry further requires the label to identify the content as artificially generated, modified, or created, and to indicate that the intermediary's computer resource was used to make such a modification.
- The same guidelines have been issued to eight significant social media intermediaries, including Facebook, Instagram, WhatsApp, Google/YouTube (for Gemini), Twitter, Snap, Microsoft/LinkedIn (for OpenAI) and ShareChat, all of which had also received the deepfakes advisory in December 2023 and the March 1 advisory.
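The advisory does not prescribe any particular technical mechanism for labelling or for the "unique metadata or identifier". Purely as an illustrative sketch, and not as anything drawn from the advisory itself, a platform could attach such a label and identifier to each piece of AI-generated content as metadata; the format, field names, and function below are hypothetical.

```python
# Illustrative only: the advisory does not prescribe this or any specific scheme.
# A hypothetical way a platform might attach a label and a unique identifier
# to AI-generated content before it is published.
import json
import uuid
from datetime import datetime, timezone


def label_ai_content(content: bytes, media_type: str, model_name: str) -> dict:
    """Build labelling metadata for a piece of AI-generated content (hypothetical format)."""
    return {
        "ai_generated": True,                        # label marking the content as AI-generated
        "unique_identifier": str(uuid.uuid4()),      # a "unique metadata or identifier"
        "media_type": media_type,                    # audio, visual, text or audio-visual
        "generated_by": model_name,                  # the intermediary's computer resource used
        "labelled_at": datetime.now(timezone.utc).isoformat(),
        "content_size_bytes": len(content),
    }


if __name__ == "__main__":
    metadata = label_ai_content(b"<synthetic image bytes>", "visual", "example-model")
    print(json.dumps(metadata, indent=2))
```

In practice, platforms might instead embed such information directly in the media file (for example, in image or video metadata) or use watermarking; the advisory leaves the choice of mechanism to the intermediary.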