Artificial intelligence (AI) models and generative AI models that are in the testing phase or are unreliable in any way will have to obtain "express permission from the Government of India" before being deployed in India, according to an advisory reportedly issued by India's Ministry of Electronics and Information Technology (MeitY). The notification comes just days after some users discovered that Google's Gemini AI chatbot was responding with inaccurate and misleading information about the country's prime minister.

According to a report by The Economic Times, the advisory was issued on March 1, and companies have been asked to follow it going forward. The advisory asks firms that have already deployed an AI platform in the country to ensure that "their computing resources do not allow any bias or discrimination or jeopardize the integrity of the electoral process." In addition, MeitY is also reported to have asked AI platforms to add metadata in cases where AI-generated content could be used to spread misinformation or create deepfakes.

Companies were also asked to add explicit disclaimers wherever a platform could behave unreliably and generate inaccurate information. In addition, platforms will also have to warn users not to use AI to create deepfakes or other content that could influence the election in any way, according to the report. Although the advisory is currently not legally binding, it reportedly signals the direction of future AI regulation in India.

The issue of unreliability first arose when some users posted screenshots of Google Gemini sharing inaccurate information about Prime Minister Narendra Modi. On February 23, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar responded on X (formerly known as Twitter), saying: "These are direct contraventions of Rule 3(1)(b) of the Intermediary Rules (IT Rules) of the IT Act and contraventions of several provisions of the Penal Code."

The release of the advisory drew mixed reactions from entrepreneurs and the tech space. While some appreciated the move, calling it a necessity to mitigate misinformation, others stressed that the regulation could have an adverse impact on the growth of the emerging sector. Aravind Srinivas, co-founder and CEO of Perplexity AI, called it "India's bad move" in a post.

In the same vein, Pratik Desai, founder of KissanAI, said, "I was such a fool to think I would work to bring GenAI to Indian agriculture from SF. We have been training a multi-modal low-cost pest and disease model and have been so excited about it. This is terrible and demotivating after working 4 years full time to develop AI in this domain in India."

Responding to the criticism in a series of posts, Chandrasekhar emphasized that the advisory was issued in light of the nation's existing laws that prohibit platforms from allowing or generating illegal content. "[..] platforms have clear existing obligations under IT and criminal law. So the best way to protect yourself is to use labeling and explicit consent, and if you're a major platform, get permission from the government before deploying error-prone platforms," he added.

The Union Minister also explained that the advisory targets "significant platforms" and that only "large platforms" will need to seek permission from MeitY; it does not apply to startups. He further added that following the advisory is in the best interest of companies, as it offers insurance against users who might otherwise file a lawsuit against the platform. "Security and trust in India's internet is a shared and common goal for the government, users and platforms," he said.
