Sundar Pichai, CEO of Google and Alphabet, speaks about artificial intelligence during a Bruegel think tank conference in Brussels, Belgium, on January 20, 2020.

Yves Herman | Reuters

Google announced that it will limit the types of election-related questions users can ask its Gemini chatbot, adding that it has already rolled out the changes in India, where voters will head to the polls this spring.

“Out of an abundance of caution on such an important topic, we’ve begun implementing restrictions on the types of election-related queries that Gemini will return answers to,” Google wrote in a blog post on Tuesday. “We take seriously our responsibility to provide high-quality information about these types of requests and are constantly working to improve our protections.”

A Google spokesperson told CNBC that the changes are in line with the company’s planned approach to elections, and that it is introducing restrictions on Gemini “in preparation for the many elections happening around the world in 2024 and out of an abundance of caution.”

The announcement comes after Google pulled its AI imaging tool last month after a series of controversies, including historical inaccuracies and controversial answers. The company introduced the image generator earlier in February through Gemini — Google’s main suite of AI models — as part of a significant rebranding.

“We’ve taken the feature offline while we fix this,” Demis Hassabis, CEO of Google’s DeepMind, said last month during a panel at the Mobile World Congress conference in Barcelona. “We’re hoping to get it back online very soon in the next couple of weeks.” He added that the product was not “working the way we intended.”

The news also comes at a time when tech platforms are gearing up for a huge election year around the world that affects more than four billion people in over 40 countries. The rise of AI-generated content has led to serious concerns about election-related disinformation, with the number of deepfakes created growing by 900% year over year, according to data from machine learning firm Clarity.

Election-related disinformation has been a major problem since the 2016 presidential campaign, when Russian actors sought cheap and easy ways to spread inaccurate content on social platforms. Now, lawmakers are even more concerned about the rapid rise of AI.

“There is cause for serious concern about how artificial intelligence can be used to mislead voters in campaigns,” Josh Becker, a Democratic state senator in California, told CNBC in an interview last month.

The detection and watermarking technologies used to identify deepfakes have not advanced fast enough to keep up. Even if the platforms behind AI-generated images and videos agree to bake in invisible watermarks and certain types of metadata, there are ways around those safeguards; sometimes even taking a screenshot can mislead a detector.

In recent months, Google has emphasized its commitment to pursuing — and investing heavily in — AI assistants, or agents. The term often describes tools ranging from chatbots to coding assistants and other productivity tools.

Alphabet CEO Sundar Pichai highlighted AI agents as a priority during the company’s Jan. 30 earnings call. Pichai said he eventually wants to offer an AI agent that can perform more and more tasks for the user, including in Google Search — though he said there is “a lot of implementation ahead.” Likewise, CEOs of tech giants from Microsoft to Amazon have redoubled their commitment to building AI agents as productivity tools.

Rebranding Google’s Gemini, rolling out apps and expanding features was a first step toward “building a true AI assistant,” Sissie Hsiao, Google’s vice president and general manager of Google Assistant and Bard, told reporters on a call in February.