Google CEO Sundar Pichai speaks at the Google I/O developer conference.

Andrey Sokolov | Picture Alliance | Getty Images

Google on Tuesday hosted its annual I/O developer conference and released a range of AI products, from new search and chat features to AI hardware for cloud clients. The announcements highlight the company’s focus on AI as it fends off rivals such as OpenAI.

Many of the features and tools Google introduced are only in testing or limited to developers, but they give an idea of how Google thinks about AI and where it is investing. Google makes money from AI by charging developers who use its models and from customers who pay for Gemini Advanced, its ChatGPT competitor, which costs $19.99 a month and can help users summarize PDFs, Google Docs and other files.

Tuesday’s announcements follow similar events held by its AI competitors. Earlier this month, Amazon-backed Anthropic announced its first enterprise offering and a free iPhone app. Meanwhile, OpenAI on Monday released a new AI model and a desktop version of ChatGPT, along with a new user interface.

Here’s what Google announced.

Gemini AI updates

Google Veo, Imagen 3 and Audio Overviews

Google announced Veo, its latest high-definition video-generation model, and Imagen 3, its highest-quality text-to-image model, which promises lifelike images and “less distracting visual artifacts than our previous models.”

The tools will be available to select creators on Monday and will come to Vertex AI, Google’s machine learning platform that lets developers train and deploy AI applications. Until then, there will be a waiting list.

The company also demonstrated “Audio Overviews,” the ability to generate audio discussions based on text input. For example, if a user uploads a lesson plan, the chatbot can speak a summary of it. Or, if a user asks for an example of a real-life science problem, it can talk through one via interactive audio.

New search features

Google is launching “AI Overviews” in Google Search on Monday in the U.S. AI Overviews show a brief summary of answers to complex search questions, according to Liz Reid, head of Google Search. For example, if a user searches for the best way to clean leather boots, the results page may show an “AI Overview” at the top with a multi-step cleaning process synthesized from information across the web.

The company said it plans to bring assistant-like planning capabilities directly into search. It explains that users will be able to search for something like, “Create an easy-to-prepare 3-day meal plan for a group,” and get a starting point with a wide variety of recipes from around the web.

As for its progress in “multimodality,” or integrating more images and video into its generative AI tools, Google said it will begin testing the ability for users to ask questions through video, such as filming a problem with a product they own, uploading the footage and asking the search engine to diagnose the issue. In one example, Google showed someone filming a broken record player while asking why it wasn’t working. Google Search identified the turntable model and suggested it might be malfunctioning because it wasn’t properly balanced.

Another new feature in testing, called “AI Teammate,” will integrate into a user’s Google Workspace. It can build a searchable collection of work from messages and email threads with multiple PDFs and documents. For example, a would-be founder can ask the AI teammate, “Are we ready to launch?” and the assistant will provide analysis and a summary based on information accessed in Gmail, Google Docs and other Workspace apps.

Project Astra

AI hardware
