India’s IT ministry has issued strict guidelines requiring government approval before deploying unfinished or unreliable AI systems, following the controversy around Google’s AI platform.
IT Minister Rajeev Chandrasekhar emphasized that claiming a problematic AI model is still in testing does not absolve companies of accountability.
Google Gemini Episode “Very Embarrassing”
Referencing the recent incident where Google’s Gemini AI gave objectionable responses about Prime Minister Modi, Chandrasekhar called it “very embarrassing.”
However, he said this cannot shield the platform from prosecution under India’s IT laws, which prioritize user safety.
The advisory directs all platforms to openly disclose trial status and seek explicit consent before deploying unreliable AI models accessed by Indian users.
Approval Needed for Untested AI Rollouts
“Use of under-testing / unreliable Artificial Intelligence model(s)…must be done so with the explicit permission of the Government of India,” the advisory mandates.
It states that entities must appropriately label experimental systems to highlight the potential for inaccurate outputs.
No entity can evade accountability by apologizing after issues with a problematic AI model emerge publicly.
Social Media Firms Must Label Unreliable AI Content
The government also explicitly instructed social media platforms to tag questionable content surfaced through developmental AI.
Intermediaries cannot permit users to access unlawful material generated via generative AI without clearly labeling its unrefined status.