TechRadar News
Google Search just got its biggest-ever upgrade to lure you away from ChatGPT – here are 7 new things to try

Tue, 05/20/2025 - 12:47

Google Search is under pressure – not only are many of us replacing it with the likes of ChatGPT Search, Google's attempts to stave off competition with features like AI Overviews have also backfired due to some worrying inaccuracies.

That's why Google has just given Search its biggest overhaul in over 25 years at Google I/O 2025. The era of the 'ten blue links' is coming to a close, with Google now giving its AI Mode (previously stashed away in its Labs experiments) a wider rollout in the US.

AI Mode was far from the only Search news at this year's I/O – so if you've been wondering what the next 25 years of 'Googling' look like, here are all of the new Search features Google's just announced.

A word of warning: beyond AI Mode, many of the features will only be available to Labs testers in the US – so if you want to be among the first to try them "in the coming weeks", turn on the AI Mode experiment in Labs.

1. AI Mode in Search is rolling out to everyone in the US

(Image credit: Google)

Yes, Google has just taken the stabilizers off its AI Mode for Search – which was previously only available in Labs to early testers – and rolled it out to everyone in the US. There's no word yet on when it's coming to other regions.

Google says that "over the coming weeks" (which sounds worryingly vague) you'll see AI Mode appear as a new tab in Google Search on the web (and in the search bar in the Google app).

We've already tried out AI Mode and concluded that "it might be the end of Search as we know it", and Google says it's been refining it since then – the new version is apparently powered by a custom version of Gemini 2.5.

2. Google also has a new 'Deep Search' AI Mode

(Image credit: Google)

A lot of AI chatbots – including ChatGPT and Perplexity – now offer a Deep Research mode for longer research projects that require a bit more than a quick Google. Well, now Google has its own equivalent for Search called, yes, 'Deep Search'.

Available in Labs "in the coming months" (always the vaguest of release windows), Deep Search is a feature within AI Mode that's based on the same "query fan-out" technique as that broader mode, but according to Google takes it to the "next level".

In reality, that should mean an "expert-level, fully-cited report" (Google says) in only a few minutes, which sounds like a big time-saver – as long as the accuracy is a bit better than Google's AI Overviews.

3. Search Live lets you quiz Google with your camera

(Image credit: Google)

Google already lets you ask questions about the world with Google Lens, and demoed its Project Astra universal assistant at Google I/O 2024. Well, now it's folding Astra into Google Search so you can ask questions in real-time using your smartphone's camera.

'Search Live' is another Labs feature and will be marked by a 'Live' icon in Google's AI Mode or in Google Lens. Tap it and you'll be able to point your camera and have a back-and-forth chat with Google about what's in front of you, while getting links sent to you with more info.

The idea sounds good in theory, but we've yet to try it out beyond its prototype incarnation last year, and the multimodal AI project is cloud-based, so your mileage may vary depending on where you're using it. But we're excited to see how far it's come in the last year or so with this new Labs version in Search.

4. AI Overviews are going global

(Image credit: Future)

We're not exactly wild about AI Overviews, which are the little AI-generated paragraphs you often see at the top of your search results. They're sometimes inaccurate and have resulted in some infamous clangers, like recommending that people add glue to their pizzas. But Google is ploughing ahead with them and has announced that AI Overviews are getting a wider rollout.

The new expansion means the feature will be available in more than 200 countries and territories and more than 40 languages worldwide. In other words, this is the new normal for Google Search, so we'd better get used to it.

Google's Liz Reid (VP, Head of Search) acknowledged in a press briefing before Google I/O 2025 that AI Overviews have been a learning experience, but claims they've improved since those early incidents.

"Many of you may have seen that a set of issues came up last year, although they were very much education and quite rare, we also still took them very, very seriously and made a lot of improvements since then", she said.

5. Google Search will soon be your ticket-buying agent

(Image credit: Google)

Finding and buying tickets is still something of a painful experience in Google Search. Fortunately, Google is promising a new mode that's powered by Project Mariner, an AI agent that can surf the web just like a human and complete tasks.

Rather than a separate feature, this will apparently live within AI Mode and kick in when you ask questions like "Find two affordable tickets for this Saturday's Reds game in the lower level".

This will see it scurry off and analyze hundreds of ticket options with real-time pricing. It can also fill in forms, leaving you with the simple task of hitting the 'purchase' button (in theory, at least).

The only downside is that this is another of Google's Labs projects that will launch "in the coming months", so who knows when we'll actually see it in action.

6. Google Shopping is getting an AI makeover

(Image credit: Google)

Google gave its Shopping tab within Google Search a big refresh back in October 2024, and now many of those features are getting another boost thanks to some new integration with AI Mode.

The 'virtual try-on' feature (which now lets you upload a photo of yourself to see how new clothing might look on you, rather than models) is back again, but the biggest new feature is an AI-powered checkout feature that tracks prices for you, then buys things on your behalf using Google Pay when the price is right (with your confirmation, of course).

We're not sure this is going to help cure our gear-acquisition syndrome, but it does also have some time-saving (and savings-wrecking) potential.

7. Google Search is getting even more personalized (if you want it to)

Like traditional Search, Google's new AI Mode will offer suggestions based on your previous searches, but you can also make it a lot more personalized. Google says you'll be able to connect it to some of its other services, most notably Gmail, to help it answer your queries with a more tailored, personal touch.

One example Google gave was asking AI Mode for "things to do in Nashville this weekend with friends". If you've plugged it into other Google services, it could use your previous restaurant bookings and searches to lean the results towards restaurants with outdoor seating.

There are obvious issues here – for many, this may be a privacy invasion too far, so they'll likely not opt into connecting it to other services. Also, these 'personal context' powers sound like they have the 'echo chamber' problem of assuming you always want to repeat your previous preferences.

Still, it could be another handy evolution of Search for some, and Google says you can always manage your personalization settings at any time.


Want to be the next Spielberg? Google’s AI-powered Flow could bring your movie ideas to life

Tue, 05/20/2025 - 12:46
  • Google Flow is a new tool for filmmakers to tap into the power of generative AI
  • Flow uses multiple Google AI models to create cinematic scenes and characters from text prompts
  • This could open up more creative movie-making for people without Hollywood budgets

Google clearly wants to inject artificial intelligence into more creative tools, as evidenced by the introduction of Flow at today’s Google I/O 2025.

Flow is the search giant’s new ‘AI filmmaking tool’ that uses Google’s AI models, such as Veo, Imagen, and Gemini, to help creative types explore storytelling ideas in movies and videos without needing to go out and film cinematic scenes or sketch out lots of storyboards by hand.

Effectively an extension of the experimental Google Labs VideoFX tool launched last year, Flow lets users enter text prompts in natural, everyday language – such as "astronauts walk out of the museum on a bridge" – and the AI tech behind Flow will create that scene.

Flow lets filmmakers bring their own assets into it, from which characters and other images can be created. Once a subject or scene is created, it can be integrated into clips and scenes in a fashion that’s consistent with the video or film as a whole.

There are other controls beyond the creation of assets and scenes: Flow offers direct manipulation of camera angles, perspectives, and motion; easy editing of scenes to home in on features or widen a shot to include more action (this appears to work as easily as a cropping tool); and the ability to manage all the ‘ingredients’ and prompts for Flow.

Flow will be available for subscribers of the Google AI Pro and Google AI Ultra plans in the US, with more countries slated to get access to the filmmaking AI soon.

AI-made movies?

Google Flow in action (Image gallery credits: Google, Google Flow)

From seeing videos of Flow in action, it appears to be a powerful tool that brings an idea into visual form with surprising realism. Being powered by natural language prompts means budding filmmakers can create shots and scenes that would in the past have required dedicated sets or at least some deft CGI work.

In effect, Flow could be one of those AI tools that opens up the world of cinema to a wider range of creatives, or at least gives amateurs more powerful creative tools to bring their ideas to life.

However, this does raise the question of whether Flow will be used to create ideas for storytelling that are then brought to life on the silver screen via physical sets, actors, and dedicated cinema CGI – or whether it will be used to create whole movies with AI, effectively letting directors be the sole producers of films and bypassing the need for actors, camera operators, and the wealth of crew that are integral to traditional movie-making.

As such, AI-powered tools like Flow could breathe new life into a world of cinema that, one might argue, has got a little stale – at least on the big-production commercial side – while at the same time disrupting the roles and work required in the movie-making industry.


Gemini Live is now free for everyone on Android and iOS, and you can finally share your screen and camera on iPhone - here's how to try it

Tue, 05/20/2025 - 12:45
  • Google's Gemini Live is now free for all Android and iOS users
  • iOS users can now share their screen and camera with Gemini Live (previously only available on Android)
  • Expect more integration with Google apps in the coming weeks

Google just announced that its AI voice assistant, Gemini Live, is now available for free on iOS and Android.

Gemini Live has been available to paid subscribers for a while, but now you can chat with AI, use your smartphone's camera to show it things, and even screen share without spending any money.

The major announcement happened at Google I/O, the company's flagship software event. This year, Google I/O has focused heavily on Gemini and the announcement of AI Mode rolling out to all US Google Search users.


Gemini Live is one of the best AI tools on the market, competing with ChatGPT Advanced Voice Mode. Where Gemini Live thrives is in its ability to interact with what you see on screen and in real life.

Before today, you needed an Android device to access Live's camera, but now that has all changed, and iPhone users can experience the best that Gemini has to offer.

Google says the rollout will begin today, with all iOS users being able to access Gemini Live and screen sharing over the following weeks.

More Gemini Live integration in your daily life

Free access and the iOS rollout weren't the only Gemini Live features announced at Google I/O. In fact, the voice assistant's new functionality could be the headline addition.

Over the coming weeks, Google says Gemini Live will "integrate more deeply into your daily life." Whether that's by adding events to your Google Calendar, accessing Google Maps, or interacting with more of the Google ecosystem, Gemini Live is going to become an essential part of how AI interacts with your device.

While Google didn't say if this functionality will be available on iOS, it's safe to assume that, for now, increased system integration will be limited to Android.

Gemini Live's free rollout, along with its upgrades, is one of the best announcements of Google I/O – if not the best – and I can't wait to see how it improves over the next few months.

How to use Gemini Live

(Image credit: Google)

Accessing Gemini Live is simple: you just need the Gemini app on iOS or Android.

  • Open the Gemini app
  • Tap the Gemini Live icon (found at the far right of the text input box)
  • Start chatting with Gemini Live

Google's Veo 3 marks the end of AI video's 'silent era'

Tue, 05/20/2025 - 12:45
  • Google's video generation model got a major upgrade
  • Announced at Google I/O, Veo 3 can combine audio and video in its output
  • It's an Ultra and US-only feature for now

AI video generation tools such as Sora and Pika can create alarmingly realistic bits of video, and with enough effort, you can tie those clips together to create a short film. One thing they can't do, though, is simultaneously generate audio. Google's new Veo 3 model can, and that could be a game changer.

Announced on Tuesday at Google I/O 2025, Veo 3 is the third generation of the powerful Gemini video generation model. With the right prompt, it can produce videos that include sound effects, background noises, and, yes, dialogue.

Google briefly demonstrated this capability for the video model. The clip was a CGI-grade animation of some animals talking in a forest. The sound and video were in perfect sync.

If the demo can be converted into real-world use, this represents a remarkable tipping point in the AI content generation space.

"We’re emerging from the silent era of video generation," said Google DeepMind CEO Demis Hassabis in a press call.

Lights, camera, audio

He isn't wrong. Thus far, no other AI video generation model can simultaneously deliver synchronized audio, or audio of any kind, to accompany video output.

It's still not clear if Veo 3 – which, like its predecessor Veo 2, should be able to output 4K video – surpasses current video generation leader OpenAI's Sora in the video quality department. Google has, in the past, claimed that Veo 2 is adept at producing realistic and consistent movement.

Regardless, outputting what appears to be fully produced video clips (video and audio) may instantly make Veo a more attractive platform.

It's not just that Veo 3 can handle dialogue. In the world of film and TV, background noises and sound effects are often the work of Foley artists. Now, imagine if all you need to do is describe to Veo the sounds you want behind and attached to the action, and it outputs it all, including the video and dialogue. This is work that takes animators weeks or months to do.

In a release on the new model, Google suggests you tell the AI "a short story in your prompt, and the model gives you back a clip that brings it to life."
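For a sense of what that looks like from the developer side, here's a minimal sketch using Google's google-genai Python SDK, which exposes video generation as a long-running operation you poll. The Veo 3 model ID below is a placeholder assumption – Google hadn't published Veo 3 API details at the time of writing – and the call pattern mirrors the SDK's documented Veo 2 flow.

    import time
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    # Kick off generation; video models run as long-running operations.
    # NOTE: "veo-3.0-generate-preview" is a placeholder model ID – only
    # Veo 2's "veo-2.0-generate-001" was documented at the time of writing.
    operation = client.models.generate_videos(
        model="veo-3.0-generate-preview",
        prompt="A fox narrates a bedtime story to fireflies in a moonlit forest",
    )

    # Poll until the clip (and, per Veo 3, its audio) is ready.
    while not operation.done:
        time.sleep(10)
        operation = client.operations.get(operation)

    # Download and save the first generated clip.
    video = operation.response.generated_videos[0]
    client.files.download(file=video.video)
    video.video.save("fox_story.mp4")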

If Veo 3 can follow prompts and output minutes or, ultimately, hours of consistent video and audio, it won't be long before we're viewing the first animated feature generated entirely through Veo.

Veo 3 is live today and available in the US as part of the new AI Ultra tier ($249.99 a month) in the Gemini app, and also as part of the new Flow tool.

Google also announced a few updates to its Veo 2 video generation model, including the ability to generate video based on reference objects you provide, camera controls, outpainting to convert from portrait to landscape, and object add and erase.


Google finally gave us a closer look at Android XR – here are 4 new things we've learned

Tue, 05/20/2025 - 12:45
  • Android XR has been showcased by Google at I/O 2025
  • It relies heavily on AI to deliver many of its features like live translation
  • Several different glasses brands will deliver Android XR features

At Google I/O 2025, Google finally gave us what we’ve all been waiting for (well, what I’ve been waiting for): a proper Android XR showcase.

The new Google operating system made for Android headsets and Android glasses has been teased as the next big rival to Meta’s Horizon OS – the software that powers the Meta Quest 3 and Quest 3S – and we finally have a better picture of how it stacks up.

Admittedly the showcase was a little short, but we do know several new details about Android XR, and here are four you need to know.

1. Android XR has Gemini at its core

(Image credit: Future)

While I’d argue Google’s Android XR showcase wasn’t as in-depth as I wanted, it did show us what the operating system has running at its core: Google Gemini.

Google’s advanced AI is the OS’ defining feature (at least that’s how Google is positing it).

On-glasses Gemini can recommend a place to eat ramen and then offer on-screen directions to find it; it can perform live translation; and on a headset it can use Google Maps' immersive view to virtually transport you to any destination you request.

Particularly on the glasses, this completely hands-free approach – combined with cameras and a head-up display – looks to be Google Gemini in its most useful form. You can get the assistant’s help as quickly as you can ask for it, no fumbling to get your phone out required.

I want to see more, but this certainly looks like a solid upgrade on the similar Meta AI feature that the Ray-Ban Meta smart glasses offer.

2. Android XR is for more than Samsung

(Image credit: Xreal)

Ahead of Google I/O we knew Samsung was going to be a key Android XR partner – alongside Qualcomm, who’s providing all the necessary Snapdragon chipsets to power the Android XR hardware.

But we now know several other companies are collaborating with Google.

Xreal has showcased Project Aura, which will bring Android XR to an upgraded version of the tethered glasses we’re familiar with (like the Xreal One) – with Aura complete with a camera and a Snapdragon processor.

Then Google also teased glasses from Gentle Monster and Warby Parker, implying it is taking Meta’s approach of partnering with fashion brands, rather than just traditional tech brands.

Plus, given that Gentle Monster and Warby Parker offer very different design aesthetics, this will be good news for people who want varied fashion choices for their new smart glasses accessories.

3. Project Moohan is still coming ‘later this year’

(Image credit: Google)

The Android XR headset Project Moohan is still set to launch in 2025, but Google and Samsung have yet to confirm a specific release date.

I was hoping we’d get something more concrete, but continued confirmation that Moohan will be landing in 2025 is better than it being delayed.

In fact, Google and its partners weren’t keen to give us any firm dates. Xreal calling Project Aura the second official Android XR device suggests it’ll land sometime after Moohan, but before anything else – however, we’ll have to wait and see how that plays out.

4. Meta should be worried, but not terrified

(Image credit: Google)

Google certainly dealt XR’s biggest player – Meta, with its hugely popular Quest headset hardware – a few blows and gave its rival something to be worried about.

However, this showcase is far from a finisher, especially not in the headset department.

Meta’s Connect 2025 showcase in September is expected to show us similar glasses tech and features, and depending on release dates Meta might beat Android XR to the punch.

That said, competition is only going to be a good thing for us consumers, as these rivals battle over price and features to entice us to one side or the other. Unlike previous battles in the XR space, this certainly seems like a balanced fight, and I’m excited to see what happens next.


Xreal is making Android XR glasses and this could be the biggest XR announcement since the Meta Quest 2 – here’s why

Tue, 05/20/2025 - 12:45
  • Xreal is working on Android XR glasses
  • They're codenamed Project Aura (the same name Google's rumored Glass 2 apparently had)
  • The glasses are a collab between Xreal, Google and Qualcomm

Google and Samsung's Android XR collab has been a major focus, but at Google I/O 2025 a new (yet familiar) partner emerged to showcase the second official Android XR device: Xreal with Project Aura.

Xreal and its Xreal One glasses currently top our list for the best smart glasses thanks to their impressive audio and visual quality.

However, while they include AR elements – they make your connected device (a phone, laptop, or console, among other options) float in front of you like you’re in a private movie theatre, which is fantastic, by the way – they aren’t yet as versatile as the other smart glasses propositions we’re being promised by Google, Meta, Snap, and others.

Xreal Project Aura – a pair of XR glasses officially referred to as an optical see-through (OST) XR device – should shift Xreal’s range towards that of its rivals, thanks to its advanced Qualcomm chipset, Xreal’s visual system expertise, and Google’s Android XR software. That combination should (hopefully) form a more fully realized spatial computing device than we’ve seen from Xreal before.

Samsung isn't the only one making Android XR glasses (Image credit: Google)

As exciting as this announcement is – I’ll explain why in a moment – we should keep our emotions in check until further details on Project Aura are revealed at the Augmented World Expo (AWE) in June, and in other announcements set to be made “later this year” (according to Xreal).

Simply because beyond its existence and its general design we know very little about Aura.

We can see it has built-in cameras, we’ve been promised Qualcomm processors, and it appears to use the same dual-eye display technology exhibited by Xreal’s other glasses. Plus, it’ll be tethered rather than fully wireless, though it should still offer all of the Android XR abilities Google has showcased.

But important questions like its cost and release date haven’t yet been detailed.

I’m hoping it’ll offer us a more cost-effective entry point to this new era of XR glasses, but we’ll have to wait and see before we know for certain whether this is “a breakthrough moment for real-world XR”, as Chi Xu, co-founder and CEO of Xreal, promises.

Still, even before knowing its specs and other key factors I’m leaning towards agreeing with Xreal’s CEO.

I love my Xreal One glasses (Image credit: Future / Hamish Hector)

Meta should be worried

So why is this Xreal Android XR reveal potentially so important in my eyes?

Because while Meta has promised that its Horizon OS will appear on non-Meta headsets – from Asus, Lenovo, and Xbox – we’ve seen nothing of these other headsets in over a year since that announcement. That is, beyond a whisper on the wind (read: a small leak) about Asus’ Project Tarius.

Android XR on the other hand has, before launch, not only confirmed collaborations between Google and other companies (Xreal and Samsung) but shown those devices in action.

They aren’t just promises, they’re real.

A threat to the Meta Quest 3? (Image credit: Meta)

Now the key deciding factor will be if Android XR can prove itself as an operating system that rivals Horizon OS in terms of the breadth and quality of its XR apps. With Google, Samsung, Xreal, and more behind it, I’m feeling confident that it will.

If it lives up to my expectations, Android XR could seriously shake up Meta’s XR dominance thanks to the varied XR hardware options under its umbrella out of the gate – competition that should result in better devices and prices for us consumers.

We’ll have to continue to watch how Android XR develops, but it looks like Google is off to a very strong start. For the first time in a while Meta might finally be on the back foot in the XR space, and the ball is in its court to respond.


Google Beam could change your video calls forever with glasses-free 3D and near real-time translation

Tue, 05/20/2025 - 12:45
  • Project Starline launched in 2021 and is now Google Beam
  • It's rolling out this year, adding 3D to video calls
  • There's also a real-time translation component to the tech

You may have already seen Google's Project Starline tech, which reimagines video calls in full 3D. It was first teased over four years ago, and at Google I/O 2025 we got the news that it's rolling out in full with a new name: Google Beam.

Since its inception, the idea of Google Beam has been to make it feel like you're in the same room as someone when you're on a video call with them. Rather than using headsets or glasses though, it relies on cameras, mics, and AI technology.


"The combination of our AI video model and our light field display creates a profound sense of dimensionality and depth," says Google. "This is what allows you to make eye contact, read subtle cues, and build understanding and trust as if you were face-to-face."

Beam participants need to sit in a custom-made booth, with a large, curved screen that's able to generate a partly three-dimensional rendering of the person they're speaking to. The first business customers will get the equipment from HP later this year.

Real-time translation

Google Beam in action (Image credit: Google)

There's another element of Google Beam that's been announced today, and that's real-time translation. As you might expect, this is driven by AI technology, and makes it easier to converse with someone else in a different language.

As per the demo Google has shown off, the translation is just a second or two behind the speech, and it works much like a translation voiced over the top of someone speaking in a video recording.

It's another impressive part of the Google Beam experience, and offers another benefit for organizations with teams and clients all across the world. According to Google, it can preserve voice, tone, and expression, while changing the language the audio is spoken in.

This part of the experience won't only be available in Google Beam, though: it's rolling out inside Google Meet now for consumers, though you'll need either the Google AI Pro or the Google AI Ultra plan to access it.


Google CEO: AI is not a 'zero-sum moment' for search

Tue, 05/20/2025 - 12:45

Google became a verb for search long before AI chatbots arrived to answer their first prompt, but now those two trends are merging, with Google solidifying AI's position in search through the full rollout of AI Mode for all US Google Search users. Google made the announcement as part of Google I/O, which is underway in California.

Finding results from a generative model that often gives you everything you need right on the Google results page is a fundamental shift away from the traditional web search paradigm.

For over 25 years now, we've traditionally searched on a term, phrase, or even a complete thought and found pages and pages of links. The first page contains the links that matter most, in that they most closely align with your query. It's no secret that companies, including the one I work at, fight tooth and nail to create content that lives on the first page of those results.

Things began to change in the realm of Google Search when Google introduced AI Overviews in 2023. As of this week, they're used by 1.5 billion monthly users, according to Google.

Where AI Overview was a light-touch approach to introducing generative AI to search, AI Mode goes deeper and further. The latest version of AI Mode, introduced at Google I/O 2025, adds more advanced reasoning and can handle even longer and more complex queries.

Suffice it to say, your Google Search experience may never be the same.

View from the top

Google CEO Sundar Pichai (Image credit: Bloomberg/Getty Images)

Google CEO Sundar Pichai, though, has a different view. In a conversation with reporters before Google I/O and in answer to a question about the rise of AI chatbots like Gemini and the role of search, Pichai said, "It's been a pretty exciting moment for search."

He said that engagement with AI Overviews and even the limited AI Modes tests has shown increased engagement, with people spending more time in search and inputting longer and longer queries.

No one asked if the rise of chatbots could mean the end of search as we know it, but perhaps Pichai inferred the subtext, adding, "It's very far from a zero-sum moment."

If anything, Pichai noted, "People, I think, are just getting more curious; people are using a lot of this a lot more."

While AI Overviews are often accused of having some factual issues, Google is promising that AI Mode, which uses more powerful models than AI Overviews, will be more accurate. "AI Mode uses more powerful models and uses more reasoning across – sort of doing more work – ...and so it reaches an even higher bar," said Google Search Head Liz Reid.

As for where search is going, Pichai sees features like AI Mode "expanding use cases". He also thinks that agentic AI is "giving people glimpses of a proactive world."

I think, by this, Pichai means that AI-powered search will eventually learn your needs and do your bidding, even if your query or prompt doesn't fully describe your need.

What that means in practice is still up for debate, but for Google and Pichai, the advent of AI in search is all upside.

"I do think it's made the Web itself more exciting. People are engaging a lot more across the board, and so it's a very positive moment for us."


Gemini’s AI images have been updated to Imagen 4 with a ‘huge step forward in quality’ – here’s what you need to know

Tue, 05/20/2025 - 12:45
  • Imagen 4 is now available to all Gemini users
  • 2K images with more detail and better typography
  • Google claims you can now use it to create greetings cards

Google's AI image generation just levelled up, with the new Imagen 4 bringing with it a bunch of big upgrades, including higher resolution and better text handling.

The upgrade was announced at Google I/O 2025 today, and should noticeably improve Gemini’s image capabilities, which were already rivalling those of ChatGPT.

Taking over from the previous version 3, Imagen 4 has "remarkable clarity in fine details like intricate fabrics, water droplets and animal fur, and excels in both photorealistic and abstract styles", according to Google. You can see the new level of detail in the preview images above and below.

Imagen 4 is also the first version of Google’s AI image generator that can go up to 2K resolution, meaning you’ll be able to make larger images for presentations and pictures that will look even better when printed out.
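Google didn't share an Imagen 4 API identifier alongside the announcement, but for a sense of how this works in practice, here's a minimal sketch using the google-genai Python SDK with the documented Imagen 3 model ID – presumably Imagen 4 will slot into the same call once its identifier is published.

    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")

    # NOTE: "imagen-3.0-generate-002" is the documented Imagen 3 model ID;
    # Imagen 4's API identifier wasn't public at the time of writing.
    response = client.models.generate_images(
        model="imagen-3.0-generate-002",
        prompt="Macro photo of water droplets on a spider web at dawn",
        config=types.GenerateImagesConfig(
            number_of_images=1,
            aspect_ratio="16:9",
        ),
    )

    # Write the returned image bytes straight to disk.
    with open("droplets.png", "wb") as f:
        f.write(response.generated_images[0].image.image_bytes)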

The detail on the water droplets in this image generated by Imagen 4 is quite impressive. (Image credit: Google)

A real challenge for AI image generators in the past (apart from creating realistic fingers) has been representing text in a way that makes sense and is readable.

While Imagen 3 did make significant inroads into presenting typography in a better way, Imagen 4 promises to take text to the next level.

Google claims Imagen 4 will be “significantly better at spelling and typography, making it easier to create your own greeting cards, posters and even comics”.

Usage limits

When it comes to the usage limits on Imagen 4, we don’t expect the situation to be radically different from that with Imagen 3, but we'll update this post if we hear anything different.

Currently, if you are using Imagen 3 through the Gemini chatbot, daily limits vary depending on whether you’re a free Gemini user or a Gemini Advanced subscriber.

Free users can expect around 10-20 image generations per day, depending on how heavily the service is being used. Gemini Advanced subscribers can expect higher limits of up to 100-150 daily image generations.

As with Imagen 3, there are content restrictions on Imagen 4, especially around generating images of real individuals. However, Imagen 4 has no problems generating images of generic people.

Available today across Google apps

Imagen 4 isn’t only available in Gemini, either; from today you’ll be able to use it across Whisk, Vertex AI, Slides, Vids, Docs and more in Workspace.

And there’s more to come, too. Google says that it will “soon” be launching a super-fast variant of Imagen 4 that’s up to 10x faster than Imagen 3 at generating images.


Google Gemini 2.5 just got a new 'Deep Think' mode – and 6 other upgrades

Tue, 05/20/2025 - 12:45
  • Google Gemini 2.5 Pro is getting a new Deep Think model
  • Deep Think allows Gemini to consider multiple reasoning paths before responding
  • Deep Think will improve Gemini's accuracy on complex math and code

Google is adding some extra brainpower to Gemini with a new Deep Think Mode. The company unveiled the latest option for Google Gemini 2.5 Pro at Google I/O 2025, showing off just what its AI can do with extra depth.

Deep Think basically augments Gemini's AI 'mind' with additional brains. Gemini in Deep Think mode won't just spit out an answer to a query as fast as possible. Instead, it runs multiple possible lines of reasoning in parallel before deciding how to respond. It’s like the AI equivalent of looking both ways, or rereading the instructions before building a piece of furniture.

And if Google's tests are anything to go by, Deep Think's brainpower is working. It’s performing at a top-tier level on the 2025 U.S. math olympiad, coming out on top in the LiveCodeBench competitive programming test, and scoring an amazingly high 84% on the popular MMMU, a sort of decathlon of multimodal reasoning tasks.

Deep Think isn’t widely available just yet. Google is rolling it out to trusted testers only for now. But, presumably, once all the kinks are ironed out, everyone will have access to the deepest of Gemini's thoughts.

Gemini shines on

Deep Think fits right into the rest of Gemini 2.5’s growing lineup and the new features arriving for its various models in the API used by developers to embed Gemini in their software.

For instance, Gemini 2.5 Pro now supports native audio output. That means it can talk back to you. The speech has an “affective dialogue” feature, which detects emotional shifts in your tone and adjusts accordingly. If you sound stressed, Gemini might stop talking like a patient customer service agent and respond more like an empathetic and thoughtful friend (or at least how the AI interprets such a response). And it will be better at knowing when to talk at all, thanks to the new Proactive Audio feature, which filters out background noise so Gemini only chimes in when it’s sure you’re talking to it.

Paired with new security safeguards and the upcoming Project Mariner computer-use features, Gemini 2.5 is trying very hard to be the AI you trust not just with your calendar or code, but with your book narration or entire operating system.

Another element expanding across Gemini 2.5 is what Google calls a 'thinking budget.' Previously unique to Gemini 2.5 Flash, the thinking budget lets developers decide just how deeply the model should think before responding. It's a good way to ensure you get a full answer without spending too much. Otherwise, Deep Think could give you just a taste of its reasoning, or give you the whole thing and make it too expensive for any follow-ups.
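To make that concrete, here's a minimal sketch of how a developer caps the thinking budget via the google-genai Python SDK, based on the parameter documented for Gemini 2.5 Flash – the model ID and token figure are illustrative.

    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemini-2.5-flash-preview-05-20",  # preview ID current at the time of writing
        contents="How many prime numbers are there between 100 and 150?",
        config=types.GenerateContentConfig(
            # The budget caps "thinking" tokens: 0 disables thinking on
            # Flash, while larger values allow deeper (pricier) reasoning.
            thinking_config=types.ThinkingConfig(thinking_budget=1024),
        ),
    )

    print(response.text)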

In case it's not clear what those thoughts involve, Gemini 2.5 Pro and Flash will offer 'thought summaries' for developers: a document showing exactly what the AI was doing as it applied information through its reasoning process, so you can actually look inside the AI's brain.

All of this signals a pivot from models that just talk fast to ones that can reason more deeply, if more slowly. Deep Think is part of that shift toward deliberate, layered reasoning. It’s not just trying to predict the next word anymore; it's applying that logic to ideas and to the very process of coming up with answers to your questions. Google seems keen to make Gemini not only able to fetch answers, but to understand the shape of the question itself.

Of course, AI reasoning still exists in a space where a perfectly logical answer might come with a random side of nonsense, no matter how impressive the benchmark scores. But you can start to see the shape of what’s coming, where the promise of an actual 'co-pilot' AI comes to fruition.


Hideo Kojima says Physint, his next game after Death Stranding 2: On the Beach, is 5-6 years away from release

Tue, 05/20/2025 - 11:31
  • Hideo Kojima has said he'd like to make a film after Death Stranding 2: On the Beach and Physint are complete
  • He says Physint is at least five or six years away from being released
  • The game director is also working on a horror game for Microsoft called OD

Death Stranding 2: On the Beach director Hideo Kojima has said that his next game, Physint, is still at least five to six years away from release.

In a new interview with French magazine Le Film Français (via VGC), ahead of the launch of Death Stranding 2, Kojima was asked whether he would ever consider directing a film in the future.

The game director said he would, and that he "received many offers after leaving Konami."

Besides the Death Stranding sequel, Kojima is currently working on his action espionage game Physint, which he said will take another five to six years to finish before he can consider moving into filmmaking.

“Besides Death Stranding 2, there is Physint in development," Kojima said. "That will take me another five or six years. Maybe after that, I could finally decide to tackle a film. I grew up with cinema. Directing would be a kind of homage to it. Besides, I’m getting older, and I would prefer to do it while still young.”

Physint is a brand new "original IP" that was announced during the PlayStation State of Play in January 2024 and will be Kojima Productions' third major game.

Kojima is also developing OD, his horror project for Microsoft that was revealed back in 2023. The director didn't mention anything new about OD during his interview, but it's said to be a "totally new style of game" being developed alongside Xbox Game Studios, and will star actors Sophia Lillis, Udo Kier, and Hunter Schafer.

For now, Kojima fans can look forward to Death Stranding 2: On the Beach, which is set to launch on June 26, 2025, for PS5.


Europa League Final 2025 LIVE: How to watch Man Utd vs Tottenham for FREE

Tue, 05/20/2025 - 11:28

The 2025 Europa League Final is here - Tottenham face off against Man Utd in an all-English final as both teams look to put dreadful domestic campaigns behind them.

The final will not only see one team lift the trophy, but also secure Champions League football for 2025-26 - a sweet reward for UEFA's second-tier competition.

FREE coverage has been provided thanks to TNT Sports via Discovery Plus in the UK and Ireland.

Ready to catch all the action? We'll keep you up-to-date with all the latest from Bilbao including highlights, replays and live updates.

(Image credit: Photo by Alex Pantling - UEFA/UEFA via Getty Images)

Tottenham and Man Utd face off tomorrow night in one of the most highly anticipated Europa League Finals in many a year.

The finalists have only received 15,000 tickets, but if you can't make it to Bilbao you can keep up with the action across a multitude of TV channels and streams.

TNT Sports have made it FREE via Discovery Plus in the UK and Ireland, while those in the US can keep up with the action using Paramount Plus.

Ange Postecoglou denies being a 'clown'.

The Australian has addressed the press 24 hours out from their crunch clash against Ruben Amorim's side tomorrow.

We'll show you how to catch all the action wherever you are right here.

Europa League Final: FREE in the UK

Did you know the game is being broadcast for FREE on Discovery Plus in the UK and Ireland?


Windows 10’s latest update packs a nasty bug, and while your system might be safe, it’s vital you check now

Tue, 05/20/2025 - 09:37
  • Windows 10’s May update carries a bug that could be a painful experience
  • Microsoft has rushed out an emergency fix already
  • Be sure to apply that fix before you install the May update – but if you’ve already encountered this bug, there’s still a way out

Windows 10 users need to be aware of a fresh bug in the latest update for the OS, even though it’s a glitch that’s going to be much more prevalent on business laptops than consumer machines.

That’s because if your Windows 10 PC does encounter the problem, it can be quite a nasty one to rescue your system from – and you can avoid any potentially traumatic technical episode by simply installing an emergency fix Microsoft has just rushed out.

Windows Latest reported the issue with the May update for Windows 10, which causes an affected PC to fail to install the upgrade, and then run an automatic repair – a process that can happen several times, confusingly.

Adding further to the confusion is that if you have BitLocker or Device Encryption turned on (so the data on your drive is encrypted), you’ll end up at the recovery screen. That recovery process asks for your key ID, and if you don’t have that info to hand, then you’re in something of a pickle, shall we say.

Let’s cover those all-important caveats first, though, the main one being that to be affected, your PC must be running an Intel vPro processor (10th-gen or newer). This is because the bug relates to Intel Trusted Execution Technology (TXT for short) which is part of the vPro array of security measures.

As the name suggests, vPro is a brand of chips mostly used for professional (business) notebooks, but they can be found in consumer laptops, too. As Microsoft notes: “Consumer devices typically do not use Intel vPro processors and are less likely to be impacted by this issue.”

It’s worth checking whether your PC has such an Intel vPro chip inside – and if it has, and you haven’t already installed the May update for Windows 10 22H2, whatever you do, push pause on that.

Rather than grabbing the May cumulative update, to avoid the bug in question, make sure you install Microsoft’s emergency patch which was deployed yesterday.

This is KB5061768, which you can only install manually – it won’t be delivered by Windows Update. Get it from Microsoft’s update catalog here, and download the ‘Windows 10 version 1903 and later’ variant which is correct for your PC. That’s likely the 64-bit (or x64) version – check your processor type in the Device Specifications section of System > About in the Settings app. If you don’t have a 64-bit CPU and OS, you want the x86 version; either way, ignore the Arm variant.

(Image credit: MAYA LAB / Shutterstock)

Breaking down the problem – and what to do if you’re already hit, and locked out of your PC

What’s actually happening with this glitch? There’s some problem with the May update for Windows 10 which is causing a process (lsass.exe, a security-related service) to be terminated unexpectedly. This is prompting the automatic repair process to run to try and fix things, though as noted above, your Windows 10 PC may make several repeated failed attempts to install the update before it gives up and rolls back to the previous (April) update (hopefully).

That’s messy, but things are worse for those using Device Encryption or BitLocker, who could end up stuck at the recovery screen if they don’t have their recovery key to hand.

So, what happens if you’ve missed the boat to install this emergency fix from Microsoft, as you’ve already installed the May update for Windows 10, and now you can’t get into your system (past the recovery screen) to download and apply said fix?

Well, in this case, Microsoft advises that to start Windows 10 successfully, you’ll need to turn off Intel Trusted Execution Technology and another setting, Intel VT for Direct I/O, in your PC’s BIOS. However, that apparently requires entering your BitLocker recovery key (again, problematic if you don’t have it on hand).

If you’re stuck in this particular dead-end, according to Windows Latest, it’s possible to simply turn off Intel Trusted Execution Technology (TXT) in your BIOS, without touching the other setting (Intel VT), and then you can successfully restart your PC to get back to the desktop.

The first step here is to get into the BIOS, and the method to do this varies depending on your PC (check the manuals supplied with your machine). The key to access the BIOS can be one of a number of possibilities, but it’s often F2, F10, or F12, which you press repeatedly as the system just starts to boot up.

Once in the BIOS, you need to find the Intel TXT (or Trusted Execution Technology) setting. This is likely in Security > Virtualization, or System Security Settings, or some label pertaining to Security or System Configuration. It’ll most likely be a security-related title, so check carefully through any such option screens looking for Intel TXT. When you locate this, turn it off, but as mentioned, you can leave Intel VT for Direct I/O alone.

Now choose the option to save changes to the BIOS and reboot your PC, and you should be back in Windows 10, where you can now install Microsoft’s patch (KB5061768) from the update catalog. Once that’s done, you can go back into your BIOS and switch Intel TXT back on.

All things considered, to avoid any potential messing around like this, it’s a far better idea to install the fix before you grab the May cumulative update for Windows 10.

This is not the first time Microsoft has visited a bug like this on Windows 10 users (or indeed Windows 11 PCs). It’s also worth remembering that if you’re running Windows 11, and you upgrade to the latest version, 24H2, using a clean install, this applies the Device Encryption feature automatically. Note that an in-place upgrade to Windows 11 24H2 won’t do this, only a clean install of Windows 11 24H2. Furthermore, it has to be an installation linked to a Microsoft account, too, as that’s where the encryption recovery key info is saved (which is why you must be very careful about deleting a Microsoft account, as the key vanishes with it).

Device Encryption is basically a ‘lite’ version of BitLocker, providing encryption for Windows 11 Home PCs, but it only covers the data on the main system drive.


Ransomware attacks can't be eliminated, but collaboration can increase resilience

Tue, 05/20/2025 - 09:10

Ransomware remains one of the most disruptive and costly cyber threats facing businesses and public sector organizations. In June 2024, a ransomware attack on Synnovis, an NHS laboratory services provider, resulted in £32.7 million in damages – over seven times its annual profits. This incident caused widespread disruption to medical procedures across London hospitals, further reinforcing the real-world consequences of such attacks.

This is just one example of the many high-profile incidents that have occurred over the years, despite successful efforts by the UK Government and their allies to use various tools to disrupt and counter the operations of ransomware gangs.

One tool under consideration by the UK Government is extending a ban on ransom payments beyond central government to all public sector bodies and Critical National Infrastructure (CNI) operators.

The aim is clear: reducing the financial incentives that sustain ransomware operations. While disrupting the revenue stream for cybercriminals is a logical step, it raises a critical question: will this make the public sector and CNI more resilient?

The pitfalls of paying ransom

While paying a ransom may seem an appealing way to quickly recover your operations, it is a risky gamble. There is no guarantee that cybercriminals will restore access to systems, refrain from selling your stolen data, or not simply re-exploit the organization. Furthermore, organizations risk making payments to a sanctioned entity that might have obfuscated its affiliation.

If public sector organizations are stripped of the option to pay, they need to be equipped with the resources to defend against and recover from attacks. That might require additional funding to bolster security and resilience programs, timely access to specialist expertise, and the use of real-world threat intelligence to guide decisions. The NHS, for example, presents a particularly complex challenge - could a blanket ban on payments be maintained in cases where a ransomware attack might impact public safety?

Additionally, if ransom payments become increasingly banned, they may be excluded from cyber insurance coverage. Organizations could face steeper premiums as insurers adjust for potentially increased recovery costs. Forensic investigations, system rebuilds, and operational downtime might exceed the cost of a ransom demand.

The supply chain dimension of ransomware attacks

Comprehensive supply chain security should be a critical part of an organization's resilience strategy. Even if an organization has strong cybersecurity defenses, it is still vulnerable if its suppliers do not.

The government is weighing up whether to extend ransom payment prohibitions to critical suppliers of public sector bodies and CNI. If suppliers fall victim to ransomware, how confident can organizations be that those suppliers can recover quickly without paying?

A ransomware attack on a critical supplier can trigger a domino effect. Many businesses lack visibility into these hidden dependencies, only realizing their exposure when a disruption occurs. A single compromised supplier could paralyze multiple organizations downstream, causing widespread outages and significant business challenges.

Without clear visibility of supply chain risks, businesses can only prepare for a limited range of scenarios, and are unable to identify and prepare for risks resulting from dependencies on suppliers at the fourth-party level and beyond, i.e. subcontractors and suppliers’ suppliers.

Industry-wide collaboration can increase resilience

Regardless of whether ransom payments get banned, the key to enhancing operational resilience to ransomware attacks lies in proactive, collaborative defense. When businesses share information about suppliers, they may spot risks that a single company might miss on its own. By exchanging timely insights, organizations can detect and respond to emerging threats before they escalate into serious incidents.

Mapping out these connections helps reveal concentration risks, where an attack could cause widespread damage. Organizations may then initiate discussions with targeted suppliers about their ability to recover from a ransomware attack without the option to pay a ransom.

Additionally, taking a broad view across the industry enables organizations to make informed decisions about their overall supplier base. This may include whether to diversify their set of suppliers to reduce concentration risks, or to introduce additional controls to reduce exposure to ransomware attacks.

Organizations can better prepare for additional risk scenarios that are only illuminated after consolidating supply chain information with their peers and seeing a comprehensive and holistic view of their supply chain. While many businesses recognize that a supplier might be the limiting factor in their overall security, it is imperative for them to understand that this potential limiting factor may be beyond their current visibility.

Banning ransom payments may remove some of the financial incentives for cybercriminals, but it won’t make ransomware disappear. However, organizations are right to scrutinize their suppliers’ ability to resume operations without paying a ransom. Therefore, the real challenge lies in building organizational resilience – and that requires a shift in mindset.

Businesses must move beyond siloed thinking and treat cybersecurity as a shared responsibility. Only by working collaboratively with peers, suppliers, and regulators, and by broadening visibility across the supply chain to identify and address potential risks, can we reduce the impact of ransomware and make it a less viable business model for criminals.

We've featured the best malware removal software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


GitHub's new Copilot AI wants to help you code and cut down on your tech debt

Tue, 05/20/2025 - 09:00
  • GitHub's latest Copilot agent is embedded straight into the platform
  • It'll boot a secure dev environment and clone your repo before cracking on
  • If you need to make further changes, just leave a comment in the thread

At Microsoft's annual developer conference, Build 2025, GitHub launched a new and updated version of its Copilot AI assistant, designed to streamline the integration of computer-aided coding even further.

"GitHub Copilot now includes an asynchronous coding agent, embedded directly in GitHub and accessible from VS Code," the company wrote.

GitHub CEO Thomas Dohmke explained how the agent gets to work in the background when you assign a GitHub issue to Copilot or prompt it in VS Code, adding that it enhances productivity without putting organizations' security at risk.
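For a rough sense of what that kick-off could look like from a script rather than the GitHub UI, here's a minimal sketch using GitHub's standard REST endpoint for adding assignees to an issue. The agent's assignee login below is a placeholder assumption – check GitHub's Copilot documentation for the name your organization actually sees – and the repository details are hypothetical.

    import os
    import requests

    # Hypothetical repository coordinates, for illustration only.
    OWNER, REPO, ISSUE_NUMBER = "my-org", "my-repo", 42

    # POST /repos/{owner}/{repo}/issues/{issue_number}/assignees is GitHub's
    # documented assignment endpoint; "copilot-swe-agent" is a placeholder
    # login for the coding agent.
    resp = requests.post(
        f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{ISSUE_NUMBER}/assignees",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"assignees": ["copilot-swe-agent"]},
    )
    resp.raise_for_status()
    print("Issue assigned – the agent will open a draft pull request when ready.")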

GitHub's Copilot agent sits quietly in the background, ready to spring into action

"Having Copilot on your team doesn’t mean weakening your security posture – existing policies like branch protections still apply in exactly the way you’d expect," Dohmke explained.

The new tool works by booting a secure dev environment via GitHub Actions, cloning the repo, analyzing the codebase and pushing to a draft pull request. Users can observe session logs for greater visibility, validation and progress, with the Copilot agent promising to help across feature implementation, bug fixes, test extensions, refactoring and documentation improvements.

Dohmke also noted that users can give the coding agent access to broader context outside of GitHub by using Model Context Protocol (MCP).

The Copilot agent acts much like a human colleague in that it will tag you for review; you can then leave a further comment asking it to make more changes, which it processes automatically.

Emphasizing the enterprise-grade security measures, GitHub noted: "The agent’s internet access is tightly limited to a trusted list of destinations that you can customize." GitHub Actions workflows also need developer approval.

Copilot Enterprise and Copilot Pro+ will be the first account types to get access to GitHub's new powerful agent, with each model request the agent makes costing one premium request from June 4, 2025.

GPT-4.1, GPT-4o, Claude 3.5 Sonnet, Claude 3.7 Sonnet and Gemini 2.5 Pro each account for one premium request; more powerful and complex models, however, have considerably higher multipliers. For example, one question using o1 costs 10 premium requests, and GPT-4.5 has a 50x multiplier. On the flip side, Gemini 2.0 Flash has a 0.25x multiplier, meaning four questions cost one premium request.
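To make that billing arithmetic concrete, here's a minimal sketch of a cost calculator that encodes only the multipliers quoted above; the dictionary, model keys and function name are invented for illustration.

```python
# Hypothetical helper illustrating the premium request multipliers
# quoted above; the model keys and naming are assumptions for this sketch.
PREMIUM_MULTIPLIERS = {
    "gpt-4.1": 1.0,
    "gpt-4o": 1.0,
    "claude-3.5-sonnet": 1.0,
    "claude-3.7-sonnet": 1.0,
    "gemini-2.5-pro": 1.0,
    "o1": 10.0,                 # one question costs 10 premium requests
    "gpt-4.5": 50.0,            # 50x multiplier
    "gemini-2.0-flash": 0.25,   # four questions cost one premium request
}

def premium_cost(model: str, questions: int = 1) -> float:
    """Premium requests consumed by `questions` queries to `model`."""
    return PREMIUM_MULTIPLIERS[model] * questions

print(premium_cost("gemini-2.0-flash", 4))  # 1.0
print(premium_cost("o1"))                   # 10.0
```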

Categories: Technology

Quordle hints and answers for Wednesday, May 21 (game #1213)

Tue, 05/20/2025 - 09:00
Looking for a different day?

A new Quordle puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Tuesday's puzzle instead then click here: Quordle hints and answers for Tuesday, May 20 (game #1212).

Quordle was one of the original Wordle alternatives and is still going strong now more than 1,100 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.

Enjoy playing word games? You can also check out my NYT Connections today and NYT Strands today pages for hints and answers for those puzzles, while Marc's Wordle today column covers the original viral word game.

SPOILER WARNING: Information about Quordle today is below, so don't read on if you don't want to know the answers.

Quordle today (game #1213) - hint #1 - vowels

How many different vowels are in Quordle today?

The number of different vowels in Quordle today is 4*.

* Note that by vowel we mean the five standard vowels (A, E, I, O, U), not Y (which is sometimes counted as a vowel too).

Quordle today (game #1213) - hint #2 - repeated letters

Do any of today's Quordle answers contain repeated letters?

The number of Quordle answers containing a repeated letter today is 0.

Quordle today (game #1213) - hint #3 - uncommon letters

Do the letters Q, Z, X or J appear in Quordle today?

• No. None of Q, Z, X or J appear among today's Quordle answers.

Quordle today (game #1213) - hint #4 - starting letters (1)

Do any of today's Quordle answers start with the same letter?

The number of today's Quordle answers starting with the same letter is 2.

If you just want to know the answers at this stage, simply scroll down. If you're not ready yet then here's one more clue to make things a lot easier:

Quordle today (game #1213) - hint #5 - starting letters (2)

What letters do today's Quordle answers start with?

• N

• C

• D

• D

Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.

Quordle today (game #1213) - the answers

(Image credit: New York Times)

The answers to today's Quordle, game #1213, are…

  • NOVEL
  • CHOSE
  • DIRTY
  • DONUT

If I had chosen COULD as a start word instead of WOULD I would/could have finished today’s Quordle a little more quickly, but that’s my only gripe.

NOVEL was the only word I struggled to find, but with three letters in the correct positions it didn’t take long to uncover it. How was it for you?

The Daily Sequence was far more challenging – it took me seven tries to get the first word.

How did you do today? Let me know in the comments below.

Daily Sequence today (game #1213) - the answers

(Image credit: New York Times)

The answers to today's Quordle Daily Sequence, game #1213, are…

  • HOWDY
  • STOCK
  • SILLY
  • SHOWY
Quordle answers: The past 20
  • Quordle #1212, Tuesday, 20 May: DECOY, SHAKE, MAPLE, PURER
  • Quordle #1211, Monday, 19 May: LINK, HANDY, DITCH, WAIVE
  • Quordle #1210, Sunday, 18 May: QUACK, ROACH, PURGE, DOWNY
  • Quordle #1209, Saturday, 17 May: STRIP, RANGE, UNITE, GEESE
  • Quordle #1208, Friday, 16 May: SHEEP, SNUCK, DRIFT, BREAK
  • Quordle #1207, Thursday, 15 May: PAINT, CROUP, PEDAL, FLUKE
  • Quordle #1206, Wednesday, 14 May: FAVOR, METER, PICKY, MAKER
  • Quordle #1205, Tuesday, 13 May: SCENT, AGAPE, POLAR, YEARN
  • Quordle #1204, Monday, 12 May: ROYAL, ARGUE, BUNCH, READY
  • Quordle #1203, Sunday, 11 May: QUASH, MUNCH, ALTER, UNDUE
  • Quordle #1202, Saturday, 10 May: RELIC, BADGE, CHAMP, SATIN
  • Quordle #1201, Friday, 9 May: MINUS, CRIME, NOSEY, SLAIN
  • Quordle #1200, Thursday, 8 May: ELUDE, GREET, POPPY, ELITE
  • Quordle #1199, Wednesday, 7 May: QUOTH, TRUNK, BESET, NAIVE
  • Quordle #1198, Tuesday, 6 May: UNITE, SOGGY, FILET, PORCH
  • Quordle #1197, Monday, 5 May: WREAK, COWER, STEAD, QUEUE
  • Quordle #1196, Sunday, 4 May: PINCH, SMOKE, SCARY, CANNY
  • Quordle #1195, Saturday, 3 May: PLUSH, VERGE, WROTE, CONDO
  • Quordle #1194, Friday, 2 May: CAUSE, RISEN, MACAW, SMELT
  • Quordle #1193, Thursday, 1 May: IDIOM, EXILE, SPOOF, DRAPE
Categories: Technology

NYT Connections hints and answers for Wednesday, May 21 (game #710)

Tue, 05/20/2025 - 09:00
Looking for a different day?

A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Tuesday's puzzle instead then click here: NYT Connections hints and answers for Tuesday, May 20 (game #709).

Good morning! Let's play Connections, the NYT's clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.

What should you do once you've finished? Why, play some more word games of course. I've also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc's Wordle today page covers the original viral word game.

SPOILER WARNING: Information about NYT Connections today is below, so don't read on if you don't want to know the answers.

NYT Connections today (game #710) - today's words

(Image credit: New York Times)

Today's NYT Connections words are…

  • TRASH
  • PATCH
  • LAPTOP
  • BAR
  • REFUSE
  • MUSIC
  • COMPACT
  • DESKTOP
  • CLAM
  • BLOCK
  • TABLET
  • PICTURES
  • CREAM
  • WAFFLE IRON
  • DENY
  • SPRAY
NYT Connections today (game #710) - hint #1 - group hints

What are some clues for today's NYT Connections groups?

  • YELLOW: No way in
  • GREEN: Computer storage
  • BLUE: Pharmacy products
  • PURPLE: Type of hinge

Need more clues?

We're firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today's NYT Connections puzzles…

NYT Connections today (game #710) - hint #2 - group answers

What are the answers for today's NYT Connections groups?

  • YELLOW: PROHIBIT, AS ENTRY 
  • GREEN: FOLDERS ON A MAC 
  • BLUE: MEDICINE FORMATS 
  • PURPLE: THINGS THAT OPEN LIKE A CLAM 

Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.

NYT Connections today (game #710) - the answers

(Image credit: New York Times)

The answers to today's Connections, game #710, are…

  • YELLOW: PROHIBIT, AS ENTRY – BAR, BLOCK, DENY, REFUSE
  • GREEN: FOLDERS ON A MAC – DESKTOP, MUSIC, PICTURES, TRASH
  • BLUE: MEDICINE FORMATS – CREAM, PATCH, SPRAY, TABLET
  • PURPLE: THINGS THAT OPEN LIKE A CLAM – CLAM, COMPACT, LAPTOP, WAFFLE IRON
  • My rating: Easy
  • My score: Perfect

I found this Connections to be the easiest for a while – possibly because I own a MacBook that opens like a clam, and I'm forever blocking people, and I take a lot of medicine.

Including CLAM in the category THINGS THAT OPEN LIKE A CLAM seems like a bit of a cheat and not a very Connections thing to do, but I’m struggling to think what could take its place other than describing very particular brands of backpack that open that way rather than the traditional duffel bag style.

Still, it helped me get a purple group very early, which made me feel clever, so zero complaints from me.

I’m guessing that some PC users may have found FOLDERS ON A MAC puzzling for the simple reason that they find anything to do with a Mac puzzling.

As a user of both operating systems I can reveal that, a “Recycle Bin” instead of TRASH aside, they are much the same. Especially if you are just using them to play Connections on!

How did you do today? Let me know in the comments below.

Yesterday's NYT Connections answers (Tuesday, May 20, game #709)
  • YELLOW: ACCOUNT BOOK – LEDGER, LOG, RECORD, REGISTER
  • GREEN: SEEN IN A BARN – BALE, HORSE, PITCHFORK, TROUGH
  • BLUE: DETECTIVES OF KID-LIT – BROWN, DREW, HARDY, HOLMES
  • PURPLE: WORDS BEFORE "BED" – CANOPY, DAY, MURPHY, WATER
What is NYT Connections?

NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: yellow is easy, green a little harder, blue often quite tough and purple usually very difficult.

On the plus side, you don't technically need to solve the final group, as you'll be able to get it by process of elimination. What's more, you can make up to four mistakes, which gives you a little bit of breathing room.

It's a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up. For instance, watch out for homophones and other wordplay that could disguise the answers.

It's playable for free via the NYT Games site on desktop or mobile.

Categories: Technology

NYT Strands hints and answers for Wednesday, May 21 (game #444)

Tue, 05/20/2025 - 09:00
Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Tuesday's puzzle instead then click here: NYT Strands hints and answers for Tuesday, May 20 (game #443).

Strands is the NYT's latest word game after the likes of Wordle, Spelling Bee and Connections – and it's great fun. It can be difficult, though, so read on for my Strands hints.

Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc's Wordle today page for the original viral word game.

SPOILER WARNING: Information about NYT Strands today is below, so don't read on if you don't want to know the answers.

NYT Strands today (game #444) - hint #1 - today's theme

What is the theme of today's NYT Strands?

Today's NYT Strands theme is… Three's a crowd

NYT Strands today (game #444) - hint #2 - clue words

Play any of these words to unlock the in-game hints system.

  • PATCH
  • DOME
  • RENT
  • COLT
  • CHIRP
  • TINS
NYT Strands today (game #444) - hint #3 - spangram letters

How many letters are in today's spangram?

Today's spangram has 13 letters.

NYT Strands today (game #444) - hint #4 - spangram position

Which two sides of the board does today's spangram touch?

First side: left, 3rd row

Last side: right, 1st row

Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.

NYT Strands today (game #444) - the answers

(Image credit: New York Times)

The answers to today's Strands, game #444, are…

  • MATCH
  • PAIR
  • PARTNERS
  • TWINS
  • TWOSOME
  • COUPLE
  • SPANGRAM: DOUBLE TROUBLE
  • My rating: Easy
  • My score: Perfect

Sometimes it can take a while to see the spangram in its entirety. I’d tapped out double, doubles, and doublers before I saw DOUBLE TROUBLE.

Today’s theme is, of course, based around the phrase “two’s company, three’s a crowd”, but at first I was uncertain what we were looking for – so I began by looking for words that would give me a hint.

After seeing the word PATCH I looked for other words with the same A-T-C-H ending and got MATCH, quickly followed by PAIR and PARTNERS.

Incidentally, I asked Google who the most famous TWINS in the world are and it responded with Mary-Kate and Ashley Olsen. My favorite British twins are Xand and Chris van Tulleken, two celebrity British doctors who I struggle to tell apart and whose names I struggle to spell, but who are both wonderful medical mythbusters and podcasters. Not as famous as the Olsens and unlikely to start a boho chic fashion empire, but equally interesting.

How did you do today? Let me know in the comments below.

Yesterday's NYT Strands answers (Tuesday, May 20, game #443)
  • DESSERT
  • SOUP
  • SALAD
  • CHEESE
  • APPETIZERS
  • ENTREE
  • SPANGRAM: FINE DINING
What is NYT Strands?

Strands is the NYT's not-so-new-any-more word game, following Wordle and Connections. It's now a fully fledged member of the NYT's games stable, has been running for over a year, and can be played on the NYT Games site on desktop or mobile.

I've got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you're struggling to beat it each day.

Categories: Technology

I tried Marshall’s first soundbar, which rivals the Sonos Arc Ultra with an amp-inspired design and huge Dolby Atmos sound

Tue, 05/20/2025 - 09:00
  • The Marshall Heston 120 soundbar launches on June 3, 2025
  • And it will set you back an almighty $999 / £899
  • Dolby Atmos, DTS:X and HDMI passthrough

Marshall, known for its amp-making heritage and rock ’n’ roll-inspired speakers, is taking its first steps into an all-new product category: soundbars.

The audio brand’s very first soundbar, the Marshall Heston 120, is coming to your living rooms from June 3, 2025 and will be available for an eye-watering $999 / £899 (about AU$1599). Marshall’s Dolby Atmos-enabled soundbar is over 100cm long – suitable for the best 55-inch TVs and up – and promises a “colossal audio experience” with both “immersive and spacious sound”.

However, it doesn’t harness a separate sub or rear speakers to supply this, with Marshall instead opting for an all-in-one design. As a result, it feels like a natural competitor to the excellent-sounding Sonos Arc Ultra, which holds the title of ‘best all-in-one soundbar’ in our guide to the best soundbars available today.

Getting hands on with the Heston 120

(Image credit: Future)

I was lucky enough to be among the very first to hear the Marshall Heston 120 at Marshall’s headquarters in Stockholm. First of all, I was struck by its luxury, retro design – something I’ve always loved about products like the Marshall Monitor III ANC and the Marshall Emberton III.

Its faux leather outer casing combined with sleek golden details makes it stand out in a market full of chunky black plastic bars.

There’s a lot of attention to detail with design, too. For instance, Marshall has installed three tactile dials for controlling volume, EQ and source. These use haptic feedback for a satisfying user experience, and are made of knurled metal – another nod to Marshall’s amp-related roots. There are also buttons for different sound modes such as Music, Movie, Night, or Voice.

But what you’re probably most keen to find out is: how does the Heston 120 sound? Well, I only got a brief demo in a space that almost mimicked a living room. But from what I heard, this thing is pretty impressive.

Marshall showed off the Heston 120’s capabilities across three formats: stereo music; Dolby Atmos music; and Dolby Atmos movies. Ed Camphor, Audio Technology and Tuning Lead at Marshall Group, told me that “our focus was very much on getting a good level of polish on every format”, and that certainly seemed to be the case.

(Image credit: Future)

For instance, when listening to stereo music, I was instantly smacked with punchy, impactful bass – the kind that so many soundbars struggle to replicate, particularly without the help of a dedicated sub.

Dolby Atmos music impressed me too – when tuning into bury a friend by Billie Eilish, vocal pans were tracked accurately with rumbling, deep bass and haunting screams piercing through.

Finally, we watched a portion of Star Wars: Episode I - The Phantom Menace on Disney Plus. The directionality of soaring spaceships in one scene was delivered with precision, and the soundbar recreated big sound effects such as ships overtaking and crashing cleanly, in a true-to-life manner. Unfortunately, Jar Jar Binks’ dialogue was crystal-clear all the way through the scene.

Of course, these are only my initial impressions from a demo, so if you want my full and unfiltered thoughts, you’ll have to wait for my full review. That’s coming soon…

Into the nitty gritty…

(Image credit: Future)

So, in terms of tech specs, the Marshall Heston 120 makes use of 11 active drivers, which includes height channels to capture the verticality needed for ‘true’ Dolby Atmos and side channels for truly expansive audio. Altogether, you’re getting a maximum power output of 150W in a 5.1.2 configuration. Of course, there’s Dolby Atmos compatibility for movies and music alike, but the Heston 120 also supports DTS:X content as well, which is an advantage it has over the Sonos Arc Ultra (Sonos continues to avoid DTS support).

There are so many ways to play through the Heston 120 too. There are HDMI eARC and HDMI passthrough ports (another plus it has over the Arc Ultra, which only has one HDMI port), RCA stereo and mono connections, as well as both Bluetooth 5.3 and Wi-Fi 6 compatibility.

You can play music over Apple AirPlay 2, and Marshall has also integrated a range of streaming services, including Spotify Connect, Tidal Connect and Airable. These can also be bound to preset buttons for easy access. There’s even Auracast.

One more nice little nugget of info is that Marshall will revamp its companion app in tandem with the launch of the Heston 120 soundbar. This unlocks detailed EQ options, remote control of volume, source and sound modes, as well as room calibration options to get the best sound for your living space.

The app is so fleshed out, in fact, that the Heston 120 will not come with a separate remote – all you need is your phone and you’ll be ready to go.

Marshall may be launching the Heston 120 as a standalone soundbar, but it has confirmed that later down the line, you’ll be able to snap up the Heston Sub 200 – a separate subwoofer – to really feel that low-end eruption.

On top of that, a smaller soundbar, the Heston 60, will be available to those who are working with a little less room. Both will release later in 2025 and we’ll be sure to keep you updated with more details as they come.

The Marshall Heston 120 soundbar is available for pre-order now and will go on sale from June 3, 2025 via Marshall’s own website. It will later become available at select retailers from September 16, 2025.

(Image credit: Future)

You might also like
  • Want to get to grips with the Marshall Heston 120’s competition? Then take a look at our list of the best Dolby Atmos soundbars
  • Want a flashy new soundbar but working on a tight budget? Fear not – we’ve got you covered with our guide to the best cheap soundbars
Categories: Technology

The basis for successful AI: data quality and structure

Tue, 05/20/2025 - 08:56

As artificial intelligence becomes more integral to businesses across all industries, small and medium-sized companies have been slow to adopt it. In 2024, only 26% of these businesses used the technology, despite 76% recognizing its value.

However, as AI's benefits become more pronounced, these businesses have much to gain from integrating it into their operations. As a critical tool, AI can help them build stronger relationships with clients, develop innovative solutions that allow them to compete better with larger institutions, and increase efficiency, freeing them to focus on business growth.

The transformative potential of AI for small- and medium-sized businesses

In the coming years, small and medium-sized businesses must incorporate AI to remain competitive in an ever-changing business landscape. The good news is that AI can enable smaller organizations to break through competitively and provide more personalized offerings to clients across industries.

The impact across industries is telling. In the accounting and finance industry, the shift to AI can empower businesses to move from traditional number-crunching services to personalized advisory relationships. Within sales and marketing, AI can go beyond providing predictive insights and can offer real-time personalization to improve sales conversion rates.

AI can provide employees with seamless service and connectivity in the IT industry, where building a digital workplace is the standard. Furthermore, within customer service, AI-powered agents and chatbots help maintain a consistent brand voice across all client engagements while automating inquiries and communications to provide answers at a previously impossible pace.

The bottom line: no matter the industry, AI is improving business outcomes, and small- to medium-sized businesses have much to gain from the technology.

Where to start: clean data is crucial for successful AI integration

It is clear that AI has many benefits, but an AI algorithm is only as good as the data it learns from. Clean, well-structured data is essential for AI models to function accurately and efficiently; without it, AI systems can produce biased, misleading or outright incorrect results. Data must be accurate, complete, consistent, unique, valid and timely.
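As a rough illustration of what checking those dimensions can look like in practice, here's a minimal sketch using pandas; the dataset, column names and checks are invented for this example.

```python
import pandas as pd

# Hypothetical customer records; columns are invented for illustration.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@x.com", None, "b@y.com", "b@y.com"],
    "signup_date": ["2024-01-05", "2024-02-10", "2024-02-10", "2025-13-01"],
})

# Completeness: share of non-missing values per column.
completeness = df.notna().mean()

# Uniqueness: duplicate IDs point to repeated or inconsistent records.
duplicate_ids = int(df["customer_id"].duplicated().sum())

# Validity: dates must parse against the expected format.
valid_dates = pd.to_datetime(
    df["signup_date"], format="%Y-%m-%d", errors="coerce"
).notna()

print(completeness)
print(f"duplicate IDs: {duplicate_ids}")
print(f"invalid dates: {int((~valid_dates).sum())}")
```

Profiling like this won't fix anything by itself, but it turns vague worries about "bad data" into measurable numbers that can be tracked over time.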

One of the most significant risks of poor data quality is bias. If an AI model is trained on incomplete, inconsistent or skewed data, it will replicate and even amplify those biases. Overrepresenting data from one source while unnecessarily reducing the representation of another can have serious consequences, from discriminatory hiring practices to inaccurate medical diagnoses.

In short, AI models rely on patterns within datasets to make predictions and decisions. The outputs will be unreliable if the data contains errors—such as duplicates, missing values or incorrect labels.

Furthermore, inconsistent and inaccurate data could slow down processing times, increase business costs, and require extensive human intervention to correct mistakes. On the other hand, when data is clean, AI models can train faster and operate more effectively, saving both time and resources. Whether it’s customer interactions, financial transactions or healthcare records, people need to know that AI-driven decisions are based on reliable information.

Poor data quality erodes confidence, while clean data strengthens the credibility of AI systems. Good, clean data is the foundation of successful AI. Without it, even the most sophisticated models will fail to deliver meaningful results. Ensuring high data quality should be a top priority for any organization looking to use AI.

Steps to take to improve data quality

For small and medium-sized businesses to reap the benefits of AI, they must use modern data management tools to safeguard data quality, implementing high data quality standards, sound data structuring and data governance policies.

The first step is to clean and structure the data into a format that AI algorithms can efficiently process and analyze to extract meaningful insights and make accurate predictions. Data is gathered from various sources, including databases, files and application programming interfaces. Once collected, data cleaning is performed to remove inconsistencies, errors and irrelevant information. The data is then converted into a format suitable for AI algorithms, such as numerical values, vectors or graphs.
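A minimal sketch of that gather-clean-convert flow might look like the following; the file name, columns and encoding choices are assumptions made for illustration.

```python
import pandas as pd

# Gather: load raw records (the file and its columns are hypothetical).
raw = pd.read_csv("customers.csv")

# Clean: drop exact duplicates, remove rows missing key fields,
# and normalize obvious inconsistencies in free-text values.
clean = (
    raw.drop_duplicates()
       .dropna(subset=["customer_id", "country"])
       .assign(country=lambda d: d["country"].str.strip().str.upper())
)

# Convert: turn categorical text into numerical features an AI model can
# consume (one-hot encoding here; vectors or graphs are alternatives).
features = pd.get_dummies(clean, columns=["country"])
```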

Data structuring techniques vary based on the type and purpose of the data. For example, relational databases store data in tables with rows and columns, making them ideal for structured data. In contrast, NoSQL databases offer more flexibility by storing data in various formats, making them suitable for unstructured or semi-structured data.
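To make the contrast concrete, here's a small sketch that stores the same record both ways – as a row in a relational (SQLite) table and as the kind of flexible JSON document a NoSQL store would hold. The schema and field names are invented for the example.

```python
import json
import sqlite3

# Relational: fixed rows and columns suit structured data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, country TEXT)"
)
conn.execute("INSERT INTO customers VALUES (?, ?, ?)", (1, "Acme Ltd", "UK"))

# Document: nested, optional fields suit semi-structured data; adding a
# new contact type to this record requires no schema change.
doc = json.dumps({
    "id": 1,
    "name": "Acme Ltd",
    "contacts": [{"type": "email", "value": "hello@acme.example"}],
})
```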

Finally, data storage ensures that the structured data is efficiently organized and accessible for AI processing. Each step is critical for optimizing AI performance and delivering accurate, meaningful insights.

Ensuring data governance

Organizations need a robust data governance framework to maintain high data quality. This internal governance structure—like a cross-functional committee or task force—sets policies, processes and accountability measures.

First, this framework should assign clear roles and responsibilities for managing data, which will help ensure accountability and safeguard critical information. Next, businesses must enforce data controls and standardize formatting and data structures across systems to promote consistency.

Once organizations have established their framework, they should maintain real-time updates and scheduled data refreshes, keeping data relevant. Additionally, compliance with validation rules and predefined formats must be maintained.
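As one small example of what enforcing validation rules and predefined formats can look like, the sketch below rejects records whose fields don't match an expected pattern; the field names and patterns are assumptions for illustration.

```python
import re

# Hypothetical predefined formats for two fields.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # ISO-style YYYY-MM-DD

def validate(record: dict) -> list[str]:
    """Return the list of rule violations for a single record."""
    errors = []
    if not EMAIL_RE.match(record.get("email", "")):
        errors.append("email does not match the required format")
    if not DATE_RE.match(record.get("updated", "")):
        errors.append("updated must be a YYYY-MM-DD date")
    return errors

print(validate({"email": "a@x.com", "updated": "2025-05-20"}))  # []
print(validate({"email": "bad", "updated": "20/05/2025"}))      # two errors
```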

Finally, businesses of all sizes must provide user-friendly interfaces, clear documentation and efficient retrieval systems to ensure data is accessible and valuable. Comprehensive data coverage across all relevant systems and processes is necessary.

AI has the potential to transform small and medium-sized businesses significantly. However, the success of these AI initiatives depends heavily on the quality and structure of the data they utilize. By improving data quality through robust standards, effective structuring, comprehensive governance policies and modern management tools, these businesses can fully leverage AI to gain a competitive edge and drive innovation.

We've featured the best small business software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology
