The launch of the Nintendo Switch 2 is almost here, and it'll come with a handful of titles ready for gamers to dive into from day one. Luckily, we now have an early look at one in particular, from game developer CD Projekt Red.
In a YouTube video from Nintendo Life, CD Projekt Red's Cyberpunk 2077 is shown running on the Nintendo Switch 2, with visual quality that rivals PC handhelds like the Steam Deck. This is thanks to Nvidia's custom T239 chip, which lets the new handheld use DLSS to upscale from a lower internal resolution, delivering better performance than native rendering would allow.
Given the earlier expectations set by the hardware rumors (which turned out to be legitimate), Cyberpunk 2077 has impressed many gamers with its lighting and environment detail. However, it's worth noting that Nvidia's DLSS upscaling is rumored to be used quite aggressively, which is clear to see in some of the blurrier sequences in the gameplay showcase.
This is to be expected, as the Switch 2 is already punching above its weight by running a game like Cyberpunk 2077 at all. But there are still very evident performance dips, particularly during vehicle traversal, which highlights the potential issue: if DLSS really is being used aggressively and performance still isn't up to par, with dips into what looks like the upper end of 20fps, is it really that impressive after all?
PlayStation 5 on the left, Nintendo Switch 2 on the right... (Image credit: Nintendo Life)
It's worth noting that it isn't exactly clear which segments of the gameplay are docked and which are handheld (it's likely the former, considering the 4K video quality), and there will be a choice between quality and performance modes.
This is also still a work in progress, and the launch version will likely look quite different. Even so, it will be interesting to see how it fares against the MSI Claw 8 AI+ – which delivers great visuals and performance in Cyberpunk 2077 – along with other upcoming handhelds like the recently announced MSI Claw A8 using AMD's Ryzen Z2 Extreme.
Analysis: It's better than I expected, but doesn't warrant the Switch 2's cost
Now, before you say I have a Nintendo Switch 2 agenda, I do think games like Cyberpunk 2077 have the potential to further exceed expectations for performance and visuals on the device. Despite that, the handheld's $449.99 / £395.99 / AU$699.95 price has me asking a basic question: wouldn't it be better to buy a PS5 or Xbox Series X at around the same price, for a better experience?
I could go into the handheld PC comparisons and the Claw 8 AI+'s processing power, but I'd hate to sound like a broken record. Spoiler alert: it's purported to be the better, more powerful device.
However, the simple fact here is that the Switch 2's Cyberpunk 2077 isn't in the same ballpark visually and performance-wise as either of Sony's or Microsoft's consoles. In that sense, the Switch 2's value as a gaming console rival is lost if it costs nearly the same and yet provides a worse experience.
Before you point out that the MSI Claw 8 AI+ costs more than the PS5 and Xbox Series X: it's not in the same category as a game console (it also doesn't come with a dock for extra performance), and its price relative to the Switch 2 is still warranted considering the power packed into such a compact device. If the Switch 2's price were much lower, I'd be far more impressed with Cyberpunk 2077's performance, but tariffs or not, that's not the case.
DLSS seems to be the one factor that will do the heavy lifting with the Switch 2, and I'd argue it's the one reason why its version of Cyberpunk 2077 can be compared to other handhelds using either XeSS or FSR (neither of which are on the level of Nvidia's DLSS). Even then, without tools like Frame Generation, it still leaves me unimpressed with the Switch 2, but I'll happily eat my words if I'm proven wrong with its capabilities.
A Widow's Game is a gripping new Netflix movie which is giving serious Gone Girl vibes, and I'm so excited to watch when it arrives on one of the best streaming services.
Arriving on May 30, it definitely has the potential to be one of the best Netflix movies, as it's inspired by a very interesting case I hadn't heard of before, known as "the black widow of Patraix."
The Spanish-language movie is directed by Carlos Sedes and written by the team that brought us the Netflix drama series The Asunta Case, which follows a couple who report their daughter missing and unravels the truth about a seemingly picture-perfect family.
Check out the new trailer below.
What is the plot of A Widow's Game? (Image credit: Netflix)
Set in 2017, the story begins when the body of a man is found in a parking lot. He's been stabbed seven times, and the authorities believe all signs point to a crime of passion. With a veteran inspector heading up the case, she's soon led to a suspect no one expected: Maje, the young widow who had been married to the victim for less than a year.
The cast is led by Pan's Labyrinth star Ivana Baquero, who plays Maje, and Criminal's Carmen Machi, who is Eva, the case inspector. The cast also includes Tristán Ulloa, Joel Sánchez, Álex Gadea, Pablo Molinero, Pepe Ocio, Ramón Ródenas, Amparo Fernández and Miquel Mars.
I love a good crime drama and I'm very excited to see this one unfold and how the titular widow is brought to justice. If she is, of course!
Three spouseware apps – Cocospy, Spyic, and Spyzie – have gone dark. The apps, which are all essentially clones of one another, are no longer working. Their websites are gone, and their cloud storage, hosted on Amazon, has been deleted.
The news was broken by TechCrunch earlier this week, which said that the reason behind the disappearance is not entirely clear, but that it could be linked to data breaches that happened earlier this year.
“Consumer-grade phone surveillance operations are known to shut down (or rebrand entirely) following a hack or data breach, typically in an effort to escape legal and reputational fallout,” the publication wrote.
The grey zone
“LetMeSpy, a spyware developed out of Poland, confirmed its ‘permanent shutdown’ in August 2023 after a data breach wiped out the developer’s servers. U.S.-based spyware maker pcTattletale went out of business and shut down in May 2024 following a hack and website defacement.”
Spouseware, or spyware, is a type of application that operates in a grey zone. It is advertised as legitimate software for keeping track of minors, people with special needs, and the like. Most of the time, however, it is just a cover for illegal activities, such as spying on other members of the household or love interests.
Given its nature, the development team and key people are usually hidden, which makes it difficult for members of the media to get a comment or a statement.
In late February this year, two of the apps – Cocospy and Spyic – were found exposing sensitive user data: email addresses, text messages, call logs, photographs, and other sensitive information. A researcher was able to exfiltrate 1.81 million email addresses used to register with Cocospy, and roughly 880,000 addresses used for Spyic. Besides email addresses, the researcher managed to access most of the data harvested by the apps, including pictures, messages, and call logs.
Just a week later, similar news broke about Spyzie. The app was found leaking email addresses, text messages, call logs, photographs, and other sensitive data belonging to millions of people who, without their knowledge or consent, had these apps installed on their devices. The people who installed the apps – in most cases partners, parents, or significant others – also had their email addresses exposed in the same manner.
Via TechCrunch
Google Search is under pressure – not only are many of us replacing it with the likes of ChatGPT Search, but Google's attempts to stave off competition with features like AI Overviews have also backfired due to some worrying inaccuracies.
That's why Google has just given Search its biggest overhaul in over 25 years at Google I/O 2025. The era of the 'ten blue links' is coming to a close, with Google now giving its AI Mode (previously stashed away in its Labs experiments) a wider rollout in the US.
AI Mode was far from the only Search news at this year's I/O – so if you've been wondering what the next 25 years of 'Googling' will look like, here are all of the new Search features Google has just announced.
A word of warning: beyond AI Mode, many of the features will only be available to Labs testers in the US – so if you want to be among the first to try them "in the coming weeks", turn on the AI Mode experiment in Labs.
1. AI Mode in Search is rolling out to everyone in the US (Image credit: Google)
Yes, Google has just taken the stabilizers off its AI Mode for Search – which was previously only available to early testers in Labs – and rolled it out to everyone in the US. There's no word yet on when it's coming to other regions.
Google says that "over the coming weeks" (which sounds worryingly vague) you'll see AI Mode appear as a new tab in Google Search on the web (and in the search bar in the Google app).
We've already tried out AI Mode and concluded that "it might be the end of Search as we know it", and Google says it's been refining it since then – the new version is apparently powered by a custom version of Gemini 2.5.
2. Google also has a new 'Deep Search' AI Mode (Image credit: Google)
A lot of AI chatbots – including ChatGPT and Perplexity – now offer a Deep Research mode for longer research projects that require a bit more than a quick Google. Well, now Google has its own equivalent for Search called, yes, 'Deep Search'.
Available in Labs "in the coming months" (always the vaguest of release windows), Deep Search is a feature within AI Mode that's based on the same "query fan-out" technique as that broader mode, but according to Google takes it to the "next level".
In reality, that should mean an "expert-level, fully-cited report" (Google says) in only a few minutes, which sounds like a big time-saver – as long as the accuracy is a bit better than Google's AI Overviews.
3. Search Live lets you quiz Google with your camera (Image credit: Google)Google already lets you ask questions about the world with Google Lens, and demoed its Project Astra universal assistant at Google I/O 2024. Well, now it's folding Astra into Google Search so you can ask questions in real-time using your smartphone's camera.
'Search Live' is another Labs feature and will be marked by a 'Live' icon in Google's AI Mode or in Google Lens. Tap it and you'll be able to point your camera and have a back-and-forth chat with Google about what's in front of you, while getting links sent to you with more info.
The idea sounds good in theory, but we've yet to try it out beyond its prototype incarnation last year, and the multimodal AI project is cloud-based, so your mileage may vary depending on where you're using it. Still, we're excited to see how far it's come in the last year or so with this new Labs version in Search.
4. AI Overviews are going global (Image credit: Future)
We're not exactly wild about AI Overviews, the little AI-generated paragraphs you often see at the top of your search results. They're sometimes inaccurate and have produced some infamous clangers, like recommending that people add glue to their pizzas. But Google is ploughing ahead with them and announced that AI Overviews are getting a wider rollout.
The new expansion means the feature will be available in more than 200 countries and territories and more than 40 languages worldwide. In other words, this is the new normal for Google Search, so we'd better get used to it.
Google's Liz Reid (VP, Head of Search) acknowledged in a press briefing before Google I/O 2025 that AI Overviews have been a learning experience, but claims they've improved since those early incidents.
"Many of you may have seen that a set of issues came up last year, although they were very much education and quite rare, we also still took them very, very seriously and made a lot of improvements since then", she said.
5. Google Search will soon be your ticket-buying agent (Image credit: Google)
Finding and buying tickets is still something of a painful experience in Google Search. Fortunately, Google is promising a new mode powered by Project Mariner, an AI agent that can surf the web just like a human and complete tasks.
Rather than a separate feature, this will apparently live within AI Mode and kick in when you ask questions like "Find two affordable tickets for this Saturday's Reds game in the lower level".
This will see it scurry off and analyze hundreds of ticket options with real-time pricing. It can also fill in forms, leaving you with the simple task of hitting the 'purchase' button (in theory, at least).
The only downside is that this is another of Google's Labs projects that will launch "in the coming months", so who knows when we'll actually see it in action.
6. Google Shopping is getting an AI makeover (Image credit: Google)
Google gave its Shopping tab within Google Search a big refresh back in October 2024, and now many of those features are getting another boost thanks to some new integration with AI Mode.
The 'virtual try-on' feature (which now lets you upload a photo of yourself to see how new clothing might look on you, rather than models) is back again, but the biggest new feature is an AI-powered checkout feature that tracks prices for you, then buys things on your behalf using Google Pay when the price is right (with your confirmation, of course).
We're not sure this is going to help cure our gear-acquisition syndrome, but it does have some time-saving (and savings-wrecking) potential.
7. Google Search is getting even more personalized (if you want it to)
Like traditional Search, Google's new AI Mode will offer suggestions based on your previous searches, but you can also make it a lot more personalized. Google says you'll be able to connect it to some of its other services, most notably Gmail, to help it answer your queries with a more tailored, personal touch.
One example Google gave was asking AI Mode for "things to do in Nashville this weekend with friends". If you've plugged it into other Google services, it could use your previous restaurant bookings and searches to lean the results towards restaurants with outdoor seating.
There are obvious issues here – for many, this may be a privacy invasion too far, so they'll likely not opt into connecting it to other services. Also, these 'personal context' powers sound like they have the 'echo chamber' problem of assuming you always want to repeat your previous preferences.
Still, it could be another handy evolution of Search for some, and Google says you can always manage your personalization settings at any time.
Google clearly wants to inject artificial intelligence into more creative tools, as evidenced by the introduction of Flow at today's Google I/O 2025.
Flow is the search giant's new 'AI filmmaking tool', which uses Google's AI models – Veo, Imagen, and Gemini – to help creative types explore storytelling ideas in movies and videos without needing to go out and film clips and cinematic scenes, or sketch out a lot of storyboards by hand.
Effectively an extension of the experimental Google Labs VideoFX tool launched last year, Flow lets users add in text prompts in natural, everyday language to create scenes, such as "astronauts walk out of the museum on a bridge,” and the AI tech behind Flow will create such a scene.
Flow lets filmmakers bring their own assets into it, from which characters and other images can be created. Once a subject or scene is created, it can be integrated into clips and scenes in a fashion that’s consistent with the video or film as a whole.
There are other controls beyond the creation of assets and scenes: Flow offers direct manipulation of camera angles, perspectives, and motion, easy editing of scenes to home in on a detail or widen a shot to include more action – this appears to work as simply as a cropping tool – and the ability to manage all the 'ingredients' and prompts for Flow.
Flow will be available for subscribers of the Google AI Pro and Google AI Ultra plans in the US, with more countries slated to get access to the filmmaking AI soon.
AI-made movies?
Google Flow in action (Image credit: Google Flow / Google)
From seeing videos of Flow in action, it appears to be a powerful tool that brings an idea into visual form with surprising realism. Being driven by natural language prompts means budding filmmakers can create shots and scenes that would in the past have required dedicated sets, or at least some deft CGI work.
In effect, Flow could be one of those AI tools that opens up the world of cinema to a wider range of creatives, or at least gives amateurs more powerful creative tools to bring their ideas to life.
However, this does raise the question of whether Flow will be used to create ideas for storytelling that are then brought to the silver screen via physical sets, actors, and dedicated cinema CGI, or whether it will be used to create whole movies with AI, effectively letting directors be the sole producers of films and bypassing the need for actors, camera operators, and the wealth of crew that are integral to traditional movie making.
As such, AI-powered tools like Flow could breathe new life into the world of cinema that one might argue has got a little stale, at least on the big production commercial side, and at the same time disrupt the roles and work required in the movie-making industry.
Google just announced that its AI voice assistant, Gemini Live, is now available for free on iOS and Android.
Gemini Live has been available to paid subscribers for a while now, but you can now chat with AI, use your smartphone's camera to show it things, and even screen share without spending any money.
The major announcement happened at Google I/O, the company's flagship software event. This year, Google I/O has focused heavily on Gemini and the announcement of AI Mode rolling out to all US Google Search users.
Gemini Live is one of the best AI tools on the market, competing with ChatGPT Advanced Voice Mode. Where Gemini Live thrives is in its ability to interact with what you see on screen and in real life.
Before today, you needed an Android device to access Live's camera, but now that has all changed, and iPhone users can experience the best that Gemini has to offer.
Google says the rollout will begin today, with all iOS users being able to access Gemini Live and screen sharing over the following weeks.
More Gemini Live integration in your daily life
Free access and the iOS rollout weren't the only Gemini Live features announced at Google I/O. In fact, the new functionality coming to the voice assistant could be the headline addition.
Over the coming weeks, Google says Gemini Live will "integrate more deeply into your daily life." Whether that's by adding events to your Google Calendar, accessing Google Maps, or interacting with more of the Google ecosystem, Gemini Live is going to become an essential part of how AI interacts with your device.
While Google didn't say if this functionality will be available on iOS, it's safe to assume that, for now, increased system integration will be limited to Android.
Gemini Live's free rollout, along with its upgrades, is one of the best announcements of Google I/O, if not the best, and I can't wait to see how it improves over the next few months.
How to use Gemini Live (Image credit: Google)
Accessing Gemini Live is simple: you just need the Gemini app on iOS or Android.
AI video generation tools such as Sora and Pika can create alarmingly realistic bits of video, and with enough effort, you can tie those clips together to create a short film. One thing they can't do, though, is simultaneously generate audio. Google's new Veo 3 model can, and that could be a game changer.
Announced on Tuesday at Google I/O 2025, Veo 3 is the third generation of the powerful Gemini video generation model. With the right prompt, it can produce videos that include sound effects, background noises, and, yes, dialogue.
Google briefly demonstrated this capability for the video model. The clip was a CGI-grade animation of some animals talking in a forest. The sound and video were in perfect sync.
If the demo can be converted into real-world use, this represents a remarkable tipping point in the AI content generation space.
"We’re emerging from the silent era of video generation," said Google DeepMind CEO Demis Hassabis in a press call.
Lights, camera, audio
He isn't wrong. Thus far, no other AI video generation model can simultaneously deliver synchronized audio, or audio of any kind, to accompany video output.
It's still not clear whether Veo 3 – which, like its predecessor Veo 2, should be able to output 4K video – surpasses current video generation leader OpenAI's Sora in the video quality department. Google has, in the past, claimed that Veo 2 is adept at producing realistic and consistent movement.
Regardless, outputting what appears to be fully produced video clips (video and audio) may instantly make Veo a more attractive platform.
It's not just that Veo 3 can handle dialogue. In the world of film and TV, background noises and sound effects are often the work of Foley artists. Now, imagine if all you need to do is describe to Veo the sounds you want behind and attached to the action, and it outputs it all, including the video and dialogue. This is work that takes animators weeks or months to do.
In a release on the new model, Google suggests you tell the AI "a short story in your prompt, and the model gives you back a clip that brings it to life."
If Veo 3 can follow prompts and output minutes or, ultimately, hours of consistent video and audio, it won't be long before we're viewing the first animated feature generated entirely through Veo.
Veo is live today and available in the US as part of the new Ultra tier ($249.99 a month) in the Gemini App and also as part of the new Flow tool.
Google also announced a few updates to its Veo 2 video generation model, including the ability to generate video based on reference objects you provide, camera controls, outpainting to convert from portrait to landscape, and object add and erase.
At Google I/O 2025 Google finally gave us what we’ve all been waiting for (well, what I’ve been waiting for): a proper Android XR showcase.
The new Google operating system made for Android headsets and Android glasses has been teased as the next big rival to Meta’s Horizon OS – the software that powers the Meta Quest 3 and Quest 3S – and we finally have a better picture of how it stacks up.
Admittedly the showcase was a little short, but we do know several new details about Android XR, and here are four you need to know.
1. Android XR has Gemini at its core (Image credit: Future)
While I’d argue Google’s Android XR showcase wasn’t as in-depth as I wanted, it did show us what the operating system has running at its core: Google Gemini.
Google’s advanced AI is the OS’ defining feature (at least, that’s how Google is positioning it).
On the glasses, Gemini can recommend a place to eat ramen and then offer on-screen directions to get there; it can perform live translation; and on a headset it can use Google Maps' immersive view to virtually transport you to any destination you request.
Particularly on the glasses this completely hands-free approach – combined with cameras and a head-up display – looks to be Google Gemini in its most useful form. You can get the assistant’s help as quickly as you can ask for it, no fumbling to get your phone out required.
I want to see more but this certainly looks like a solid upgrade on the similar Meta AI feature the Ray-Ban Meta smart glasses offer.
2. Android XR is for more than Samsung (Image credit: Xreal)
Ahead of Google I/O we knew Samsung was going to be a key Android XR partner – alongside Qualcomm, which is providing the Snapdragon chipsets needed to power Android XR hardware.
But we now know several other companies are collaborating with Google.
Xreal has showcased Project Aura, which will bring Android XR to an upgraded version of the tethered glasses we're familiar with (like the Xreal One) – with Aura adding a camera and a Snapdragon processor.
Then Google also teased glasses from Gentle Monster and Warby Parker, implying it is taking Meta’s approach of partnering with fashion brands, rather than just traditional tech brands.
Plus, given that Gentle Monster and Warby Parker offer very different design aesthetics, this will be good news for people who want varied fashion choices for their new smart glasses accessories.
3. Project Moohan is still coming ‘later this year’ (Image credit: Google)
The Android XR headset Project Moohan is still set to launch in 2025, but Google and Samsung have yet to confirm a specific release date.
I was hoping we’d get something more concrete, but continued confirmation that Moohan will be landing in 2025 is better than it being delayed.
In fact, Google and its partners weren't keen to give us any firm dates. Xreal calling its Project Aura the second official set of Android XR glasses suggests it'll land sometime after Moohan but before anything else – however, we'll have to wait and see how that plays out.
4. Meta should be worried, but not terrified (Image credit: Google)
Google certainly dealt XR’s biggest player – Meta, with its hugely popular Quest headset hardware – a few blows and gave its rival something to be worried about.
However, this showcase is far from a finisher, especially not in the headset department.
Meta’s Connect 2025 showcase in September is expected to show us similar glasses tech and features, and depending on release dates Meta might beat Android XR to the punch.
That said, competition is only going to be a good thing for us consumers, as these rivals battle over price and features to entice us to one side or the other. Unlike previous battles in the XR space this certainly seems like a balanced fight and I’m excited to see what happens next.
Google and Samsung's Android XR collab has been a major focus, but at Google I/O 2025 a new (yet familiar) partner emerged to showcase the second official Android XR device: Xreal with Project Aura.
Xreal and its Xreal One glasses currently top our list for the best smart glasses thanks to their impressive audio and visual quality.
However, while they include AR elements – they make your connected device (a phone, laptop, or console, among other options) float in front of you like you’re in a private movie theatre, which is fantastic by the way – they aren’t yet as versatile as other smart glasses propositions we’re being promised by Google, Meta, Snap and others.
Xreal Project Aura – a pair of XR glasses officially referred to as an optical see-through (OST) XR device – should shift Xreal's range towards that of its rivals thanks to its advanced Qualcomm chipset, Xreal's visual system expertise, and Google's Android XR software, a combination that should (hopefully) form a more fully realized spatial computing device than we've seen from Xreal before.
Samsung's aren't the only Android XR glasses (Image credit: Google)
As exciting as this announcement is – and I'll explain why in a moment – we should keep our emotions in check until further details on Project Aura are revealed at the Augmented World Expo (AWE) in June, and in other announcements set to be made "later this year" (according to Xreal).
Simply because beyond its existence and its general design we know very little about Aura.
We can see it has built-in cameras, we've been promised Qualcomm processors, and it appears to use the same dual-eye display technology as Xreal's other glasses. Plus, it'll be tethered rather than fully wireless, though it should still offer all of the Android XR abilities Google has showcased.
But important questions like its cost and release date haven’t yet been detailed.
I’m hoping it’ll offer us a more cost-effective entry point to this new era of XR glasses, but we’ll have to wait and see before we know for certain if this is “a breakthrough moment for real-world XR” as Chi Xu, Co-founder and CEO of Xreal promises.
Still, even before knowing its specs and other key factors I’m leaning towards agreeing with Xreal’s CEO.
I love my Xreal One glasses (Image credit: Future / Hamish Hector)
Meta should be worried
So why is this Xreal Android XR reveal potentially so important in my eyes?
Because while Meta has promised its Horizon OS will appear on non-Meta headsets – from Asus, Lenovo, and Xbox – we've seen nothing of those headsets in the year-plus since that announcement. That is, beyond a whisper on the wind (read: a small leak) about Asus' Project Tarius.
Android XR on the other hand has, before launch, not only confirmed collaborations between Google and other companies (Xreal and Samsung) but shown those devices in action.
They aren’t just promises, they’re real.
A threat to the Meta Quest 3? (Image credit: Meta)
Now the key deciding factor will be if Android XR can prove itself as an operating system that rivals Horizon OS in terms of the breadth and quality of its XR apps. With Google, Samsung, Xreal, and more behind it, I’m feeling confident that it will.
If it lives up to my expectations, Android XR could seriously shake up Meta's XR dominance thanks to the varied XR hardware options under its umbrella right out of the gate – competition that should ultimately lead to better devices and prices for us consumers.
We’ll have to continue to watch how Android XR develops, but it looks like Google is off to a very strong start. For the first time in a while Meta might finally be on the back foot in the XR space, and the ball is in its court to respond.
You may have already seen Google's Project Starline tech, which reimagines video calls in full 3D. It was first teased over four years ago, and at Google I/O 2025 we got the news that it's rolling out in full with a new name: Google Beam.
Since its inception, the idea of Google Beam has been to make it feel like you're in the same room as someone when you're on a video call with them. Rather than using headsets or glasses though, it relies on cameras, mics, and AI technology.
"The combination of our AI video model and our light field display creates a profound sense of dimensionality and depth," says Google. "This is what allows you to make eye contact, read subtle cues, and build understanding and trust as if you were face-to-face."
Beam participants need to sit in a custom-made booth, with a large, curved screen that's able to generate a partly three-dimensional rendering of the person they're speaking to. The first business customers will get the equipment from HP later this year.
Real-time translation
Google Beam in action (Image credit: Google)
There's another element of Google Beam that's been announced today, and that's real-time translation. As you might expect, this is driven by AI technology, and makes it easier to converse with someone else in a different language.
In the demo Google has shown off, the translation is just a second or two behind the speech, and it's layered on top of the original audio, much like a translation dubbed over someone speaking in a video recording.
It's another impressive part of the Google Beam experience, and offers another benefit for organizations with teams and clients all across the world. According to Google, it can preserve voice, tone, and expression, while changing the language the audio is spoken in.
This part of the experience won't only be available in Google Beam, though: it's rolling out now inside Google Meet for consumers, though you'll need either the Google AI Pro or the Google AI Ultra plan to access it.
Google became a verb for search long before AI chatbots arrived to answer their first prompt, but now those two trends are merging as Google solidifies AI's position in search with the full rollout of AI Mode for all US Google Search users. Google made the announcement as part of Google I/O, which is underway in California.
Finding results from a generative model that often gives you everything you need on the Google result page is a fundamental shift in traditional web search paradigms.
For over 25 years now, we've searched on a term, phrase, or even a complete thought and been given pages and pages of links. The first page holds the links that matter most, in that they most closely align with your query. It's no secret that companies, including the one I work at, fight tooth and nail to create content that lands on that first page of results.
Things began to change in the realm of Google Search when Google introduced AI Overviews in 2023. As of this week, they're used by 1.5 billion monthly users, according to Google.
Where AI Overview was a light-touch approach to introducing generative AI to search, AI Mode goes deeper and further. The latest version of AI Mode, introduced at Google I/O 2025, adds more advanced reasoning and can handle even longer and more complex queries.
Suffice it to say, your Google Search experience may never be the same.
View from the top
Google CEO Sundar Pichai (Image credit: Bloomberg/Getty Images)
Google CEO Sundar Pichai, though, has a different view. In a conversation with reporters before Google I/O and in answer to a question about the rise of AI chatbots like Gemini and the role of search, Pichai said, "It's been a pretty exciting moment for search."
He said that engagement with AI Overviews and even the limited AI Modes tests has shown increased engagement, with people spending more time in search and inputting longer and longer queries.
No one asked if the rise of chatbots could mean the end of search as we know it, but perhaps Pichai inferred the subtext, adding, "It's very far from a zero-sum moment."
If anything, Pichai noted, "People, I think, are just getting more curious; people are using a lot of this a lot more."
While AI Overviews are often accused of having some factual issues, Google is promising that AI Mode, which uses more powerful models than AI Overviews, will be more accurate. "AI Mode uses more powerful models and uses more reasoning across – sort of doing more work – ...and so it reaches an even higher bar," said Google's Head of Search, Liz Reid.
As for where search is going, Pichai sees features like AI Mode "expanding use cases". He also thinks that agentic AI is "giving people glimpses of a proactive world."
I think, by this, Pichai means that AI-powered search will eventually learn your needs and do your bidding, even if your query or prompt doesn't fully describe your need.
What that means in practice is still up for debate, but for Google and Pichai, the advent of AI in search is all upside.
"I do think it's made the Web itself more exciting. People are engaging a lot more across the board, and so it's a very positive moment for us."