Google boss Sundar Pichai closed the company’s I/O developer conference by noting that his nearly two-hour presentation mentioned AI 121 times. It was everywhere.

Google’s latest AI model, Gemini 1.5 Flash, is built for speed and efficiency. The company said it created Flash because developers wanted a lighter, cheaper model than Gemini Pro for building AI-powered apps and services.

Google says it will double Gemini’s context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code, or more than 1.4 million words simultaneously.

But the bigger news is how the company is building AI into all the things you already use. In Search, it will be able to answer your complex questions (à la Copilot on Bing), but for now you’ll need to sign up for the company’s Search Labs to try it out. AI-generated answers will also appear alongside typical search results, just in case the AI knows better.

Google Photos was already pretty smart at searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you’ll be able to ask Google Photos a tricky question, like “Show me the best photo from each national park I’ve visited.” You can also ask Google Photos to generate captions for you.

And if you’re on Android, Gemini integrates directly into the device. Gemini will be aware of the app, image, or video you’re viewing, and you’ll be able to pull it up as an overlay and ask contextual questions, like how to change a setting or maybe even who’s on screen.

Even though these were the bigger beats, there was an awful lot to chew on. Check out all the headlines here.

— Matt Smith

Google wants you to relax and have a natural chat with Gemini Live

Google Pixel 8a Review

Google unveils Veo and Imagen 3, its latest AI media creation models

You can get these reports delivered daily, directly to your inbox. Subscribe right here!

TMA


One of Google’s bigger projects is its visual multimodal AI assistant, currently called Project Astra. It taps into the camera on your smartphone (or smart glasses) and can contextually analyze and answer questions about the things it sees. Project Astra can offer silly wordplay suggestions as well as identify and define things it sees. A video demonstration shows Project Astra identifying the high-frequency part of a speaker. It’s equal parts impressive and, well, familiar. We tried it out right here.

Keep reading.

The increasingly restless world of X (Twitter) now considers the term “cisgender” a slur. Owner Elon Musk posted last June, to the delight of some of his users, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X reportedly began displaying an official warning to that effect. Quick reminder: it is not a slur.

Keep reading.

Ilya Sutskever announced on X, formerly Twitter, that he is leaving OpenAI nearly a decade after co-founding the company. He is confident that OpenAI “will build [artificial general intelligence] that is both safe and beneficial” under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company’s biggest scandal last year: Sutskever, then a board member, was involved in the ouster of both men.

Keep reading.

https://www.engadget.com/the-morning-after-the-biggest-news-from-googles-io-keynote-111531702.html?src=rss