Google's annual developer conference, I/O 2025, arrived with its usual flurry of announcements, demos, and forward-looking promises. This year's I/O, held at the Shoreline Amphitheatre in Mountain View, continued the company's all-in approach to artificial intelligence, with significant updates across its product lineup. From an eye-popping $250/month Ultra subscription to Search reimagined with AI, Google delivered a smorgasbord of announcements that ranged from genuinely impressive to slightly concerning for your wallet. Here's a rundown of everything Google announced at I/O 2025.
Gemini 2.5 models get even smarter
Google's Gemini, the AI model powering many of its services, received major upgrades across its lineup. The new Gemini 2.5 Pro now features an "enhanced reasoning mode" called Deep Think, which allows the model to consider multiple hypotheses before responding - particularly useful for complex math and coding problems. According to Google, this puts it at the top of the LMArena leaderboard in all categories, with Elo scores up more than 300 points since the first-generation Gemini Pro.
Meanwhile, Gemini 2.5 Flash - the more efficient, cost-effective version - has been improved across reasoning, multimodality, code, and long context capabilities. Google claims it's now second only to 2.5 Pro on benchmark tests, while requiring 20-30% fewer tokens for responses. For developers looking for powerful, budget-friendly AI tools, this is the equivalent of getting premium performance at a mid-range price point.
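To make that trade-off concrete, here's a minimal sketch of how a developer might switch between the two tiers using the google-generativeai Python SDK. The model identifiers below are assumptions based on Google's usual naming, so check the current model list before relying on them.

```python
# A minimal sketch, not an official example: it compares the two Gemini 2.5
# tiers using the google-generativeai SDK (pip install google-generativeai).
# The model names are assumptions; verify them against Google's model list.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

# Flash: cheaper and faster, suited to high-volume or latency-sensitive calls.
flash = genai.GenerativeModel("gemini-2.5-flash")

# Pro: stronger reasoning for complex math and coding problems.
pro = genai.GenerativeModel("gemini-2.5-pro")

prompt = "Summarize the trade-off between quicksort and mergesort in two sentences."
print("Flash:", flash.generate_content(prompt).text)
print("Pro:  ", pro.generate_content(prompt).text)
```

In practice, the pattern would be to route routine, high-volume prompts to Flash and reserve Pro for requests that need deeper reasoning.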
Project Starline gets real and a new name: Google Beam
Remember Project Starline, Google's futuristic 3D video conferencing booth that promised to make it seem like your conversation partner was sitting right across from you? It's no longer just a research project - it's becoming a real product called Google Beam.
The technology uses a six-camera array and AI to merge video streams into a realistic 3D experience viewed on a light field display. Google claims "near perfect head tracking, down to the millimeter" at 60 frames per second, all happening in real-time. HP will be the first to release Beam devices for enterprise customers later this year, with companies like Deloitte, Duolingo, and Salesforce already signed up.
I'm not saying it's going to replace Zoom overnight, but it might actually make those endless virtual meetings slightly less soul-crushing.
AI Ultra could be the subscription no one asked for but might actually want
In the "things nobody saw coming" category, Google announced a new premium subscription tier called AI Ultra, priced at a jaw-dropping $250 per month. Yes, you read that correctly, two hundred and fifty dollars. Monthly.
For this Netflix-annual-subscription-times-twenty price tag, subscribers get early access to Google's latest AI tools and unlimited use of features like Deep Research. The package includes 30TB of storage across Google Photos, Drive, and Gmail, plus YouTube Premium and access to experimental features like Project Mariner. New subscribers can get 50% off for the first three months, which still lands at $125 monthly.
Google clearly believes some people are willing to pay premium prices for cutting-edge AI tools. The rest of us will just have to wait for these features to eventually trickle down to more affordable tiers.
Google Search gets a complete makeover with AI Mode
After what Google called "one of the most successful launches in Search in the past decade" with AI Overviews, the company is going all-in on transforming how we search the web with AI Mode. This separate tab in Google Search lets users ask longer, more complex questions and follow up with additional queries - essentially turning Google Search into a conversational AI experience.
Early testers have been asking queries that are two to three times longer than traditional searches, suggesting users are already adapting to this new paradigm. AI Mode is rolling out to everyone in the US starting this week, powered by Gemini 2.5.
Google is also adding features like Deep Search, which expands background queries from tens to hundreds to create comprehensive search responses, and an AI shopping experience that helps users find items and even virtually "try on" clothing by uploading a photo of themselves.
Google brings smart glasses out of the shadows with Android XR
Following last week's Android Show, where Google previewed some Android 16 features, I/O provided more details on Google's mixed reality plans. The company unveiled its Android XR platform for augmented, mixed, and virtual reality devices.
The most interesting development is Project Aura, a prototype of Android XR-powered smart glasses developed with Xreal. These glasses will feature Gemini integration and a large field of view, along with built-in cameras and microphones. Google is also working with Samsung on Android XR hardware and partnering with eyewear brands Gentle Monster and Warby Parker to create more stylish options.
Android XR will support features like live translation, directional navigation via a mini Google Maps display, and the ability to view immersive 360-degree videos. After Google Glass and previous AR attempts, this feels like the company's most serious push yet into wearable computing.
Gmail gets AI-powered Smart Replies that actually sound like you
Google's AI is coming for your inbox, but in a helpful way. Gmail will soon feature personalized smart replies that analyze your writing style and past emails to suggest responses that sound authentically like you. The system considers your typical greetings, tone, and even favorite word choices to generate more relevant replies.
With your permission, Gemini can pull information from both your Gmail and Google Drive to craft these responses. For instance, if a friend emails asking about a road trip you've taken, Gemini could reference your past itineraries stored in Docs to suggest a detailed response.
The feature will launch through Google Labs in July for English users on web, iOS, and Android platforms. It's either incredibly convenient or slightly unnerving, depending on how you feel about AI mimicking your writing style.
Google Meet adds real-time AI translation
Video calls with international colleagues or friends are about to get much easier. Google Meet is adding an AI-powered real-time translation feature that converts your speech into your conversation partner's preferred language nearly instantaneously. The system even attempts to match your tone and cadence while translating.
The feature initially supports translation between English and Spanish, with more languages planned for the near future. It's rolling out in beta to Google AI Pro and Ultra subscribers first. This could be a game-changer for global businesses and multilingual families alike.
Google's AI Agent is becoming more powerful
Google's experimental AI agent, Project Mariner, received significant upgrades. The system can now handle up to 10 different tasks simultaneously, from booking flights to researching topics to comparing shopping options.
The latest version also introduces a "teach and repeat" function, where you can demonstrate a task once, and it learns how to perform similar tasks in the future. Google is bringing Project Mariner's capabilities to developers via the Gemini API, with trusted testers like Automation Anywhere and UiPath already building with it.
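Google didn't detail Mariner's developer surface beyond the Gemini API mention, so the snippet below is only a hedged sketch of what agent-style integration already looks like with the SDK's automatic function calling. The find_flights helper and the model name are illustrative assumptions, not part of Project Mariner.

```python
# A hedged sketch of agent-style tool use via the google-generativeai SDK's
# automatic function calling. Project Mariner's actual developer API wasn't
# detailed at I/O; the helper and model name here are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential


def find_flights(origin: str, destination: str, date: str) -> list[str]:
    """Hypothetical helper: a real agent would call an actual flights API."""
    return [
        f"Flight {origin}->{destination} on {date} at 09:15",
        f"Flight {origin}->{destination} on {date} at 18:40",
    ]


# The SDK turns the Python function's signature and docstring into a tool
# declaration the model can decide to call on its own.
model = genai.GenerativeModel("gemini-2.5-pro", tools=[find_flights])
chat = model.start_chat(enable_automatic_function_calling=True)

reply = chat.send_message("Find me flights from SFO to JFK on 2025-06-01.")
print(reply.text)
```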
Google's vision for a universal AI assistant takes shape with Project Astra
Remember when AI assistants were just glorified weather forecasters and timer-setters? Project Astra, Google's ambitious universal AI assistant prototype, is aiming to make those days feel like ancient history. At I/O 2025, Google showcased Astra's enhanced capabilities, demonstrating how it can now proactively use your phone's camera to "see" the world around you and take action without explicit commands.
Google is positioning Astra as the culmination of its AI assistant work, capable of everything from diving into your emails to find bike specs to researching repair information and even calling local shops on your behalf. While some might feel queasy about giving an AI this much access to their digital lives, the productivity gains could be substantial, if you're comfortable with an AI assistant that might occasionally pipe up with unsolicited advice while you're trying to fix your bike chain in peace.
The Astra features are already appearing in Gemini Live, which is now available to all Android users and rolling out to iOS starting today.
Google's Veo and Imagen get real(er)
If you thought AI-generated videos still looked comical, Google's keynote, where its new models were previewed, might have changed your mind. Veo 3, the latest version of Google's AI video generator, now supports audio generation - adding ambient sounds, music, and even dialogue to AI-generated videos. Previous versions could only create silent clips, which wasn't particularly useful in a world where TikTok and YouTube dominate.
Imagen 4, Google's text-to-image model, has been improved to generate more photorealistic images with better handling of fine details like fabrics, water droplets, and animal fur. It can now export images in various formats and resolutions up to 2K.
Google is also introducing Flow, an AI filmmaking app that builds on these technologies. Flow lets users create eight-second AI-generated video clips based on text prompts or images, with scene-builder tools to stitch clips together for longer videos. Think of it as having a mini film production studio in your pocket - albeit one that sometimes has bizarre interpretations of your creative vision.
Stitch is an AI-powered UI designer
For developers and designers, Google unveiled Stitch, an AI-powered tool that generates user interfaces based on text descriptions. Users can provide wireframes, rough sketches, or screenshots of other designs to guide Stitch's output, making it easier to quickly prototype app and website designs.
While it has more limited capabilities than some competing AI design tools, Stitch provides HTML and CSS markup for the designs it generates, helping bridge the gap between concept and implementation. The experiment is currently available through Google Labs.