Head over to Google’s Vertex AI Studio site and click “Try it in console” to goof around with some of the AI tools Google talked about at I/O today. The site is meant for developers who want to test the company’s models out while deciding what works best for their software, but anyone can play with it.


The scam detection feature Google just announced requires Android users to opt in, and Google claims it’s on-device only, but it’s still essentially listening to your every conversation to look for fraudulent-sounding language.
Are we really ready to trade scam concerns for privacy-related ones?
Toward the end (maybe?) of the I/O keynote, Google threw in a cute little ditty about all the things you can do with Gemini prompts: generate photos of cats playing guitar, find smart things to say about Renoir, etc.
It includes the phrase “There’s no wrong way to prompt,” which, have you met people?




Here’s a quick look at Astra, the new multimodal AI project Google just announced, and how it can help you find misplaced glasses.
Note: this video was edited for length and clarity, but the original video was one single take.




You’ve long been able to search Google using still images, but now the company is bringing “ask with video” search to Google Lens.
In an example during the I/O keynote, Google’s Rose Yao asked Lens why her turntable’s tonearm wouldn’t stay still, recording a brief clip to demonstrate the issue. Not exactly mind-blowing, but certainly helpful!


Onstage at Google I/O, the company is showing off its AI-enhanced Google Search, which can take a long, complex query and generate an AI Overview. In one example, Google searched “Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill,” and the Overview delivered all of that.


CEO Sundar Pichai just announced new Trillium chips, coming later this year, that are 4.7 times faster than their predecessors, as Google competes with everyone else building new AI chips. Pichai also highlighted Axion, Google’s first ARM-based CPU, which the company announced last month.
Google will also be “one of the first” cloud companies to offer Nvidia’s Blackwell GPU starting in 2025.
Correction: Axion was announced last month, not last year. Also, corrected the spelling of Axion.
Google has announced Gemini’s 1.5 Pro model is coming to its AI-powered note-taking app. Soon, users will be able to create notebook guides out of student notes with summaries, quizzes, and FAQs.
But the real star is the Audio Overviews feature, which can turn the material into an interactive discussion and answer students’ questions.
The faster version of its next-gen large language model offers multimodal reasoning and long-context capabilities similar to Gemini 1.5 Pro’s (also announced today) but is optimized for low-latency responses and overall efficiency.
Google announced Gemini 1.5 Flash is available for developers to try in Google AI Studio and Vertex AI today, with 1 million tokens to start and 2 million available upon request.
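For developers who want to kick the tires, here’s a minimal sketch of calling Gemini 1.5 Flash through the Generative Language REST API. This is an illustrative example, not Google’s official sample code: the `GEMINI_API_KEY` environment variable and the `build_request` / `generate` helpers are our own assumptions, and without a key the script only builds and prints the request payload.

```python
# Hypothetical sketch: calling Gemini 1.5 Flash over the Generative
# Language REST API. GEMINI_API_KEY is an assumed environment variable;
# if it's unset, we only build and print the request payload.
import json
import os
import urllib.request

ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-1.5-flash:generateContent"
)


def build_request(prompt: str) -> dict:
    # The API expects a list of "contents", each holding "parts" of text.
    return {"contents": [{"parts": [{"text": prompt}]}]}


def generate(prompt: str, api_key: str) -> str:
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The first candidate's first text part holds the model's reply.
    return data["candidates"][0]["content"]["parts"][0]["text"]


if __name__ == "__main__":
    prompt = "Summarize today's I/O keynote in one sentence."
    print(json.dumps(build_request(prompt)))
    key = os.environ.get("GEMINI_API_KEY")
    if key:
        print(generate(prompt, key))
```

The same model is also reachable through Google AI Studio’s UI and the Vertex AI console, so the raw HTTP call above is mainly useful if you want to avoid SDK dependencies.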


Google CEO Sundar Pichai just announced that the AI-generated summaries, now known as “AI Overviews,” will be launching to everyone in the US “this week,” with more countries coming soon.

Rhymes with “high time.”


As usual, Google has chosen to reveal the date of its I/O developer conference through a collaborative puzzle. For I/O ’24, it’s a Pipe Dream-style game, guiding a marble from point A to point B.
Completing levels contributes to the overall progress (and probably trains generative AI models somehow), which you can monitor on the main event page. Or just wait for others to finish while you dig through the I/O ’23 announcements to see what has and hasn’t shipped yet.
Update (5:03PM ET): The puzzle’s done, and we have a date (May 14th).


Google announced at I/O that Bard would include images where relevant in its responses, and that change is now live. Below, a tweet shows what that looks like. The images come from sites like Pinterest. I wish they were AI-generated for extra fun, but alas.
Still, I tried asking it for the best food in Austin, TX, and it failed to highlight Casino El Camino.


From yesterday’s Command Line newsletter:
Throughout the series of closed-door policy panels yesterday, I’m told that global affairs president Kent Walker and other Google execs fielded pointed questions about the company’s use of personal data for everything from large language models to ad targeting. Applying copyright law to generative AI was discussed but seemingly not a top concern for the room. Instead, most of the questions were about the use of AI as it relates to user privacy and safety. Regulators wanted to know how Google thinks about using sensitive data from both individual users and enterprise customers for its training sets. And during a talk dedicated specifically to AI and search, concerns were raised about showing harmful, AI-generated results...
You can sign up to read the full thing at the link below.
[The Verge]