AI companies like Google, OpenAI, and Anthropic want you to believe we’re on the cusp of Artificial General Intelligence (AGI)—a world where AI tools can outthink humans, handle complex professional tasks without breaking a sweat, and chart a new frontier of autonomous intelligence. Google just rehired the founder of Character.AI to accelerate its quest for AGI, OpenAI recently released its first “reasoning” model, and Anthropic’s CEO Dario Amodei says AGI could be achieved as early as 2026.
But here’s the uncomfortable truth: in the quest for AGI in high-stakes fields like medicine, law, veterinary advice, and financial planning, AI isn’t just “not there yet,” it may never get there.
The Hard Facts on AI’s Shortcomings
This year, Purdue researchers presented a study showing ChatGPT got programming questions wrong 52% of the time. In other equally high-stakes categories, GenAI does not fare much better.
When people’s health, wealth, and well-being hang in the balance, the current high failure rates of GenAI platforms are unacceptable. The hard truth is that this accuracy issue will be extremely challenging to overcome.
A recent Georgetown study suggests it might cost a staggering $1 trillion to improve AI’s quality by just 10%. Even then, it would remain worlds away from the reliability that matters in life-and-death scenarios. The “last mile” of accuracy — in which AI becomes undeniably safer than a human expert — will be far harder, more expensive, and more time-consuming to achieve than the public has been led to believe.
AI’s inaccuracy doesn’t just have theoretical or academic consequences. A 14-year-old boy recently sought guidance from an AI chatbot and, instead of directing him toward help, mental health resources, or even common decency, the AI urged him to take his own life. Tragically, he did. His family is now suing—and they’ll likely win—because the AI’s output wasn’t just a “hallucination” or cute error. It was catastrophic, and it came from a system that was wrong with utter conviction. Like the reckless ‘Cliff Clavin’ (who wagered his entire Jeopardy! winnings on the TV show ‘Cheers’), AI brims with confidence while spouting the completely wrong answer.
The Mechanical Turk 2.0—With a Twist
Today’s AI hype recalls the infamous 18th-century Mechanical Turk: a supposed chess-playing automaton that actually had a human hidden inside. Modern AI models also hide a dirty secret—they rely heavily on human input.
From annotating and cleaning training data to moderating the content of outputs, tens of millions of humans are still enmeshed in almost every step of advancing GenAI, but the big foundational model companies can’t afford to admit this. Doing so would be acknowledging how far we are from true AGI. Instead, these platforms are locked into a “fake it till you make it” strategy, raising billions to buy more GPUs on the flimsy promise that brute force will magically deliver AGI.
It’s a pyramid scheme of hype: persuade the public that AGI is imminent, secure massive funding, build more giant data centers that burn more energy, and hope that, somehow, more compute will bridge the gap that honest science says may never be crossed.
This is painfully reminiscent of the buzz around Alexa, Cortana, Bixby, and Google Assistant just a decade ago. Users were told voice assistants would take over the world within months. Yet today, many of these devices gather dust, mostly relegated to setting kitchen timers or giving the day’s weather. The grand revolution never happened, and it’s a cautionary tale for today’s even grander AGI promises.
Shielding Themselves from Liability
Why wouldn’t major AI platforms just admit the truth about their accuracy? Because doing so would open the floodgates of liability.
Acknowledging fundamental flaws in AI’s reasoning would provide a smoking gun in court, as in the tragic case of the 14-year-old boy. With trillions of dollars at stake, no executive wants to hand a plaintiff’s lawyer the ultimate piece of evidence: “We knew it was dangerously flawed, and we shipped it anyway.”
Instead, companies double down on marketing spin, calling these deadly mistakes “hallucinations,” as though that’s an acceptable trade-off. If a doctor told a child to kill himself, should we call that a “hallucination?” Or, should we call it what it is — an unforgivable failure that deserves full legal consequence and permanent revocation of advice-giving privileges?
AI’s adoption plateau
People learned quickly that Alexa and the other voice assistants could not reliably answer their questions, so they just stopped using them for all but the most basic tasks. AI platforms will inevitably hit an adoption wall, endangering their current users while scaring away others who might rely on or try their platforms.
Think about the ups and downs of self-driving cars; despite carmakers’ huge autonomy promises – Tesla has committed to driverless robotaxis by 2027 – Goldman Sachs recently lowered its expectations for the use of even partially autonomous vehicles. Until autonomous cars meet a much higher standard, many humans will withhold complete trust.
Similarly, many users won’t put their full trust in AI even if it one day equals human intelligence; for them, it must be vastly more capable than even the smartest human. Other users will be drawn in by AI’s ability to answer simple questions and burned when they make high-stakes inquiries. For either group, AI’s shortcomings won’t make it a sought-after tool.
A Necessary Pivot: Incorporate Human Judgment
These flawed AI platforms can’t be used for critical tasks until they either achieve the mythical AGI status or incorporate reliable human judgment.
Given the trillion-dollar cost projections, environmental toll of massive data centers, and mounting human casualties, the choice is clear: put human expertise at the forefront. Let’s stop pretending that AGI is right around the corner. That false narrative is deceiving some people and literally killing others.
Instead, use AI to empower humans and create new jobs where human judgment moderates machine output. Make the experts visible rather than hiding them behind a smokescreen of corporate bravado. Until and unless AI attains near-perfect reliability, human professionals are indispensable. It’s time we stop the hype, face the truth, and build a future where AI serves humanity—instead of endangering it.
Amid the Cold War, the possibility of a nuclear attack was deeply feared, yet at the same time, weirdly unimaginable. The stark terror of nuclear disaster persisted for years, highlighted in the 1984 BBC drama film “Threads”.
The film explored the hypothetical event of a nuclear bomb being dropped on a British city, and the societal breakdown that followed. People were horrified by the film, and it showcased everyone’s deepest and darkest fears around nuclear fallout.
Fast-forward nearly 40 years, and while nuclear fear still abounds, cybersecurity catastrophe is the new background dread – and in July 2024 we received our first major warning sign.
The CrowdStrike outage highlighted the widespread chaos that could ensue if millions of computers crashed simultaneously – reminding many people of the fear instilled by the Y2K bug.
Now imagine this chaos, but instead of a software update gone wrong, it’s a cybercriminal targeting critical systems within a power station, resulting in a city losing power for a week. Or perhaps a vulnerability in a piece of fintech software triggering a 2008-style financial meltdown.
Whilst such an event may be difficult to envisage, the interconnectedness of modern systems makes it a real possibility. Achieving operational resilience must be the goal, and this means prioritizing keeping business-critical functions running in the event of a serious incident. But to do so, organizations first need to understand their minimum viable operation (MVO).
What is MVO?
MVO refers to the absolute minimum set of systems a business needs to remain operational or continue delivering services. This includes mapping out detailed rebuild protocols and establishing recovery measures to minimize downtime.
Many organizations have come to realize that reducing the probability of a cyberattack to zero is simply impossible. Regardless of how much money organizations spend on security, it doesn’t make their systems or data less attractive to cybercriminals.
Whilst money can’t reduce the probability, it can reduce the impact of an attack when spent correctly. Instead of focusing solely on breach prevention, organizations are increasingly shifting their investments to prioritize breach containment and impact mitigation, ensuring they can maintain their MVO.
In the power station example mentioned earlier, the organization's MVO would include the SCADA and ICS systems that control energy creation, monitoring, and distribution. By identifying their MVO, the power station can build a cyber resilience strategy that protects these critical systems and keeps the power on when the inevitable breach occurs.
This approach is not an admission that cybercriminals have beaten us, but an acceptance of the reality that it’s impossible to guarantee immunity from breaches. Instead, it’s about limiting the impact when they do occur. There’s no shame in being breached; however, a lack of preparedness is inexcusable, especially for businesses in critical sectors.
Putting the MVO approach into practice
So where should you start? The first step in understanding your MVO is identifying the systems critical to maintaining operations, and this is unique to each business. For example, the systems considered part of an organization's MVO will be completely different in retail compared to energy.
Once these have been identified, you then need to identify the risks surrounding or linked to these systems. What are they communicating with and how? Consider risk vectors, the supply chain, and any third parties connecting to your MVO systems.
Like most organizations, it’s likely you rely on a significant number of third parties to operate – just look at the vast number of suppliers and contractors keeping the NHS running, and the impact of the attack on pathology supplier Synnovis. It’s critical that you understand which third-party systems are connected to your networks and limit and control what they have access to. Best practice is to enforce a policy based on least privilege to limit connectivity to the bare minimum required.
This is also where having an “assume breach” mentality is essential. Assume breach shifts the focus from solely trying to prevent unauthorized access to ensuring that, once inside, attackers' movements are severely restricted and their impact is minimized. This not only helps you to strategically manage and mitigate risks, but also to safeguard MVO assets and critical operations.
How Zero Trust supports an MVO approach
One of the best ways to adopt an assume breach mindset and protect MVO assets is by embracing Zero Trust.
Zero Trust is a security strategy based on the principle of "never trust, always verify." It enforces stringent least-privilege principles at all access points, minimizing the risk of unauthorized access. This approach significantly reduces the impact of attacks and aligns with an MVO approach by identifying critical assets, their usage, and data flows within the network.
Micro-segmentation technologies like Zero Trust Segmentation (ZTS) are foundational to Zero Trust as they divide networks into isolated segments with dedicated controls. With micro-segmentation in place, you can restrict user access, monitor traffic, and prevent lateral movement in case of unauthorized access, isolating and safeguarding your critical assets.
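To make that default-deny idea concrete, below is a minimal, hypothetical sketch in Python (the system names and flows are invented, and real ZTS products enforce this at the network and host level rather than in application code) of how a least-privilege allow list permits only explicitly approved connections to MVO assets and blocks everything else.

```python
# Minimal, hypothetical sketch of the default-deny, least-privilege logic that
# micro-segmentation enforces. System names and flows are invented; real ZTS
# products apply these rules at the network/host level, not in application code.

ALLOWED_FLOWS = {
    # (source system, destination MVO asset): allowed protocol/port
    ("historian-server", "scada-primary"): "tcp/502",    # plant data polling
    ("ops-workstation-01", "ics-gateway"): "tcp/443",    # HMI access
    ("patch-relay", "scada-primary"): "tcp/8443",        # signed updates only
}

def is_flow_allowed(source: str, destination: str, port: str) -> bool:
    """Return True only if this exact flow is on the allow list (default deny)."""
    return ALLOWED_FLOWS.get((source, destination)) == port

def evaluate(source: str, destination: str, port: str) -> str:
    if is_flow_allowed(source, destination, port):
        return f"ALLOW {source} -> {destination} on {port}"
    # Anything not explicitly allowed is blocked, which is what limits an
    # attacker's lateral movement once they are inside the network.
    return f"DENY  {source} -> {destination} on {port} (not on allow list)"

if __name__ == "__main__":
    print(evaluate("ops-workstation-01", "ics-gateway", "tcp/443"))    # permitted
    print(evaluate("compromised-laptop", "scada-primary", "tcp/502"))  # blocked
```

The design choice here is the same one the article describes: nothing reaches an MVO asset unless it has been explicitly approved, so a compromised machine elsewhere on the network simply has nowhere to go.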
Not all cyberattacks need to result in suspension of operations
The UK government has warned about the economic disaster that could unfold if a cyberattack on critical infrastructure was successful. However, the reality is that the impact could be catastrophic for any enterprise or business that fails to safeguard its critical operations.
In Richard Horne’s debut speech as the NCSC CEO, he spoke about the increasing hostility faced by the UK, with attackers wanting to cause maximum disruption and destruction. And while a cyberattack might not immediately seem as scary as the nuclear attack in “Threads,” its disastrous impact on society is as significant as that of a weapon of mass destruction.
Therefore, securing the assets that keep society and businesses running is essential. Not all cyberattacks need to end in business or operational failure. By prioritizing an MVO approach with Zero Trust and micro-segmentation at its core, you can ensure your organization avoids catastrophic fallout from attacks.
As artificial intelligence tools rapidly evolve, businesses face growing challenges in managing AI models, balancing costs, and ensuring reliable performance.
Nexos.ai, a new unified AI orchestration platform from the founders of business VPN vendor Nord, is designed to help enterprises deploy AI at scale by addressing these challenges, providing access to over 200 AI models to simplify their integration into enterprise workflows.
The company has secured $8 million in funding from investors, including Olivier Pomel, CEO of Datadog; Sebastian Siemiatkowski, CEO of Klarna; Ilkka Paananen, CEO of Supercell; and Avishai Abrahami, CEO of Wix.com.
Nexos.ai launch
Tomas Okmanas and Eimantas Sabaliauskas, co-founders of Nord Security and now Nexos.ai, faced challenges in integrating AI across various companies even after spending over $100,000 on large language models (LLMs) in some cases.
Feedback from businesses also revealed a lack of infrastructure capable of supporting scalable, high-quality, and cost-effective AI applications. Nexos.ai includes models from providers such as OpenAI, Google, and Meta to assist enterprises in managing their AI operations.
“Companies know that AI is an operational and competitive necessity, but they’re drowning in the challenges of managing multiple models, controlling costs and ensuring accurate and reliable performance,” Okmanas said.
“At the same time, AI models are becoming increasingly autonomous and capable of handling complex tasks with minimal human intervention. We’ve built nexos.ai to be the enterprise-grade platform that makes working with AI as intuitive as working with human teams – providing the infrastructure and oversight to make sure these models perform at their best while remaining cost-effective and secure."
Scheduled for release in early 2025, the platform is already being tested by international companies for use cases such as automated customer support.
Most of the attention at Samsung Galaxy Unpacked was obviously being devoted to the new phones – those in the Samsung Galaxy S25 range – but there was something quite important that slipped under the radar, and that’s the adoption of Content Credentials.
In 2024, the adoption of a standard for marking the creation of imagery and digital content was a hot topic, particularly due to the rise of generative AI and the ensuing plague of art theft used to train large language models. Tech companies began adopting their own metadata markers and watermarks to signify AI alteration, but a standard for identifying the legitimacy of an image has often been skipped.
One of the front runners for such a standard is Content Credentials, backed by the Content Authenticity Initiative (CAI). The tool is developed by Adobe, and the Initiative counts Microsoft, Getty Images, and Nvidia among its members, to name a few.
With this announcement, Samsung has joined the Coalition for Content Provenance and Authenticity (C2PA), which unifies the work of the CAI and its Content Credentials standard with Project Origin, another organization combatting misinformation but anchored in a news ecosystem that can verify the authenticity of content.
“We are excited to share that Samsung will implement #ContentCredentials for AI-generated images on the #GalaxyS25!” the C2PA wrote on LinkedIn. “Samsung has committed to a consequential step in bringing transparency to the digital ecosystem.”
If you suspect that an image has been altered with AI, then you can drop it into a tool built by Adobe to check its authenticity.
Think of Content Credentials as a ledger that contains content information; what device it has been captured on, what program (or AI tool) it has been altered with, even what settings were activated when the original image was created.
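As a purely illustrative sketch (this is not the actual C2PA manifest format; the field names below are invented to show the idea), the kind of information such a ledger records might look something like this in Python:

```python
# Purely illustrative: a simplified view of the kind of provenance information a
# Content Credentials-style ledger records. The field names are invented for
# clarity and do not reflect the real C2PA manifest format.

content_credentials = {
    "capture_device": "Samsung Galaxy S25 Ultra",           # what it was captured on
    "original_settings": {"iso": 100, "exposure": "1/250s"},
    "edit_history": [
        {"tool": "Galaxy AI Generative Edit", "action": "object removed"},
    ],
    "ai_generated_or_altered": True,                          # what drives the 'CR' label
}

# A verification tool would read this metadata (and check its cryptographic
# signature) to tell you whether, and how, an image has been altered.
for key, value in content_credentials.items():
    print(f"{key}: {value}")
```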
With this standard in tow, AI-generated and AI-altered images produced on Samsung Galaxy S handsets will receive a metadata-based label, basically noting that AI has tampered with what you’re seeing. The ‘CR’ watermark will also be added to the image. While the S25 family is the very first set of phones to carry the metadata marking on images, it follows camera companies Nikon and Leica who have also signed up to the standard.
The standard is, speaking broadly, a win for creatives looking to protect their work, but the obvious problem with any standard is a lack of enthusiasm. If not enough companies producing AI tools adopt standards that allow AI-altered content to be easily flagged, then such a system is worthless.
With more than 4,000 members under the wing of the Content Authenticity Initiative, here's hoping tools to effectively flag the use of AI keep pace with the increasing capabilities of such tools.
Canon has unveiled its latest ultra-wide angle zoom lens for its full-frame mirrorless cameras, the RF 16-28mm F2.8 IS STM, and I got a proper feel for it during a hands-on session hosted by Canon ahead of its launch.
It features a bright maximum F2.8 aperture across its entire 16-28mm range, and is a much more compact and affordable option for enthusiasts than Canon’s pro RF 15-35mm F2.8L IS USM lens. Consider the 16-28mm a sensible match for Canon’s beginner and mid-range full-frame cameras instead, such as the EOS R8.
Design-wise, the 16-28mm is a perfect match with the RF 28-70mm F2.8 IS STM lens - the pair share the same control layout and are almost identical in size, even if the 28-70mm lens is around 10 percent heavier.
The new lens is seemingly part of a move by Canon to deliver more accessible fast aperture zooms that fit better with Canon's smaller mirrorless bodies – the 16-28mm weighs just 15.7oz / 445g and costs £1,249 – that's much less than the comparable pro L-series lens.
(Image gallery, credit: Tim Coleman – alongside the RF 28-70mm F2.8 lens, the two lenses are clearly designed to pair up; attached to the EOS R8; the maximum F2.8 aperture is available whatever focal length you set the lens to.)
The right fit for enthusiasts
Despite its lower price tag, the 16-28mm still feels reassuringly solid – the rugged lens is made in Japan and features a secure metal lens mount. You get a customizable control ring, an autofocus/manual focus switch, plus an optical stabilizer switch, and that's the extent of the external controls.
When paired with a Canon camera that features in-body image stabilization, such as the EOS R6 Mark II, you get up to 8 stops of stabilization; the cheaper EOS R8 isn't blessed with that feature, in which case the lens alone offers 5.5 stops of stabilization.
I tested the 16-28mm lens with an EOS R8 and the pair is a perfect match, as is the EOS R6 Mark II which is only a little bit bigger.
(Image gallery, credit: Tim Coleman – the 16-28mm is made in Japan; the physical controls include a control ring, zoom ring, AF/MF switch plus optical stabilizer switch; the lens packs away smaller with the zoom ring rotated to the off position; at 16mm the lens is physically at its longest; zoom to 28mm and the lens barrel retracts a little.)
I didn't get too many opportunities to take pictures with the new lens during my brief hands-on, but I have taken enough sample images captured in raw and JPEG format to get a good enough idea of the lens' optical qualities and deficiencies.
For example, at the extreme wide angle 16mm setting and with the lens aperture wide open at F2.8, raw files demonstrate severe curvilinear distortion and vignetting. Look at the corresponding JPEG, which was captured simultaneously, and you can see just how much lens correction is being applied to get you clean JPEGs out of the camera (check out the gallery of sample images below).
(Image gallery, credit: Tim Coleman – pairs of unprocessed raw files and their processed JPEG versions shot at 16mm and 28mm at F2.8, showing the severe vignetting and barrel distortion in the raw files and how much correction the camera applies to produce clean, sharp JPEGs, plus a 28mm f/8 shot at the lens' optically optimal settings.)
Those lens distortions really are quite severe, but when you look at the JPEG output, all is forgiven – even with such heavy processing taking place to correct curvilinear distortion and vignetting, detail is consistently sharp from the center to the very edges and corners of the frame, while light fall off in the corners is mostly dealt with.
I'll go out on a limb and suggest the target audience for this lens will be less concerned with these lens distortions, so long as it's possible to get the end results you like, and my first impressions are that you can certainly do that – I've grabbed some sharp selfies and urban landscapes, with decent control over depth of field, plus enjoyed the extra wide perspective that makes vlogging a whole lot easier.
A worthy addition to the Canon RF-mount family?
I expect most photographers and filmmakers will mainly use the extreme ends of the 16-28mm lens' zoom range: 16mm and 28mm. The former is particularly handy for video work thanks to its ultra wide perspective, while it's a versatile range for landscape and architecture photography.
That zoom range is hardly extensive, however, and I'm not sure if it's a lens that particularly excites me, even if it does make a sensible pairing with the RF 28-70mm F2.8 for enthusiasts.
It is much cheaper than a comparable L-series lens, but I'd hardly call a £1,249 lens cheap. Also, why not just pick up the RF 16mm F2.8 STM and the RF 28mm F2.8 prime lenses instead? These are Canon's smallest lenses for full-frame cameras and the pair combined costs half the price of the 16-28mm F2.8.
As capable as the 16-28mm appears to be on my first impressions – it's a super sharp lens with versatile maximum aperture – I'm simply not convinced how much extra it brings to the RF-mount table, and if there's enough of a case for it for most people.
The Claude AI chatbot will receive major upgrades in the months ahead, including the ability to listen and respond by voice alone. Anthropic CEO Dario Amodei explained the plans to the Wall Street Journal at the World Economic Forum in Davos, including the voice mode and an upcoming memory feature.
Essentially, Claude is about to get a personality boost, allowing it to talk back and remember who you are. The two-way voice mode promises to let users speak to Claude and hear it respond, creating a more natural, hands-free conversation. Whether this makes Claude a more accessible version of itself or will let it mimic a human on the phone is questionable, though.
Either way, Anthropic seems to be aiming for a hybrid between a traditional chatbot and voice assistants like Alexa or Siri, though presumably with all the benefits of its more advanced AI.
Claude’s upcoming memory feature will allow the chatbot to recall past interactions. For example, you could share your favorite book, and Claude will remember it the next time you chat. You could even discuss your passion for knitting sweaters and Claude will pick up the thread in your next conversation. While this memory function could lead to more personalized exchanges, it also raises questions about what happens when Claude mixes those memories with an occasional hallucination.
Claude demandStill, there's no lack of interest in what Claude can do. Amodei mentioned that Anthropic has been overwhelmed by the surge in demand for AI over the past year. Amodei explained that the company’s compute capacity has been stretched to its limits in recent months.
Anthropic’s push for Claude’s upgrades is part of its effort to stay competitive in a market dominated by OpenAI and tech giants like Google. With OpenAI and Google integrating ChatGPT and Gemini into everything they can think of, Anthropic needs to find a way to stand out. By adding voice and memory to Claude’s repertoire, Anthropic hopes to stand out as an alternative that might lure away fans of ChatGPT and Gemini.
A voice-enabled, memory-enhanced AI chatbot like Claude may also serve as a leader, or at least a competitor, among the trend of making AI chatbots seem more human. The aim seems to be to blur the line between a tool and a companion. And if you want people to use Claude to that extent, a voice and a memory are going to be essential.
Sure, Shazam and the Google Assistant, or even Gemini, can help you identify a song that’s playing in a coffee shop or while you’re out and about. But what about that tune you have stuck in your head that you’re desperate to put a name to?
Suffice it to say, that’s not a problem I have for anything by Springsteen, but it does happen for other songs, and Samsung’s latest and greatest – the Galaxy S25, S25 Plus, and S25 Ultra – might just be able to cure this. It’s courtesy of the latest expansion of Google’s Circle to Search on devices.
Circle to Search launched on the Galaxy S24 last year and then expanded to other devices like Google’s own family of Pixel phones; you can long-press at the bottom of the screen and then circle something to figure out what it is or find out more.
For instance, it could be a fun hat within a TikTok or Instagram Reel video, a snazzy button down, or even more info on a concert happening or a location like San Jose – where Samsung’s Galaxy Unpacked took place.
Now, though, when you long-press the home button – or engage the assistant in another way – you’ll see a music note icon.
From there, you can just start singing, and Google will tell you it is listening. My colleague, TechRadar’s Editor-at-Large Lance Ulanoff, and I then hummed two tracks – “Hot To Go” by Chappell Roan, which took the Galaxy S25 Ultra a few tries to identify properly – and then it got “Fly Me To The Moon” (a classic) on its first try.
While Lance did have to hum a good bit, it did in fact figure out what that song inside our head was, and this could make the latest facet of Circle to Search a pretty handy function. It will, of course, also do the job of Shazam and listen to whatever is playing when you select it via the microphone built into your device as well.
Further, you can use it to circle a video on screen and figure out what was playing – as you can see in the hands-on embed below, it was able to do this for a TikTok. That ultimately doesn’t seem quite as helpful given a video on TikTok – or an Instagram Reel – will note the audio it is using. But this could be particularly useful for a long YouTube video that uses a variety of background music or if you’re streaming a title and can’t figure out the song.
Google’s latest tool expansion for Circle to Search will be available from day one on the Galaxy S25, Galaxy S25 Plus, and Galaxy S25 Ultra, but it’s worth pointing out that the search giant – turned AI giant – has been teasing this feature for a bit, and some even found it hiding in existing code. After our demo of it on the S25 Ultra, we had a hunch it would arrive elsewhere and it should be arriving on other devices with Circle to Search.
As for when it will arrive on the Galaxy S24, Z Flip 6, or Z Fold 6, that remains to be seen, and we’re also wondering that same question for Samsung’s other new Galaxy AI features. And if you’re keen to learn more about the Galaxy S25 family, check out our hands-on and our Galaxy S25 live blog for the event.
Samsung finally made the Galaxy S25, Galaxy S25 Plus, and Galaxy S25 Ultra official at its first Galaxy Unpacked event of 2025, and along with the new hardware, a number of new features for Galaxy AI were unveiled.
While the Samsung Galaxy S25 preorder deals are impressive, you might be reading this very news story on a Galaxy S24 Ultra, Galaxy Z Flip 6, or even a Galaxy Z Fold 6, and thinking that these are still pretty new phones – and wondering if some of these new features might be arriving on your device in a future update. Well, we already know that One UI 7 with call transcriptions will be arriving on the S24 lineup.
As for other AI-powered features such as Samsung's Personal Data Engine, Now Brief, and improvements to generative image features, it’s not yet clear which devices these features might eventually land on.
Personal Data Engine is basically a dedicated core on the device for handling AI tasks and building out a personal large language model (LLM) for the phone's owner, to help the AI serve up better suggestions and implement them. Now Brief is an app that changes through the day to show pertinent information.
Speaking to TechRadar, a Samsung spokesperson told us the company is “assessing which features” can come to which devices.
In full, Samsung states: “Nothing to share right now, but Samsung is committed to providing the best possible Galaxy experience to all our users, and we are assessing which features will be available on which devices.”
Clearly, the focus is on the S25 range, and it seems that a lot of these new Galaxy AI features were tailor-made for the new lineup thanks to the Qualcomm Snapdragon 8 Elite for Galaxy chip, which has a specific processor unit dedicated to AI tasks. That processor is paired with 12GB of RAM across the lineup – no more 8GB for the 'standard' model.
Samsung's really aiming to integrate Galaxy AI throughout the entire phone, allowing it to learn how you use it and the other apps on it. Ideally, the Now Brief app will work with Galaxy AI at its core, and with the dedicated part of the processor acting as a personal LLM, to serve up the right suggestions and cards to you. It might flag that you have a busy day with a look at your calendar, remind you that it's someone's birthday and suggest creating a digital card, or even offer a suggestion about your commute home. In a demo, I also saw cards for the weather and even news stories that might interest you, but as with most AI features it'll take some time for these features to learn your habits and routines.
The Galaxy S25 lineup follows the idea of 'agentic AI' that we’ve been hearing about, and will likely see more of in 2025. It remains to be seen how much of this relies on that new processor, or if Samsung can figure out a way to trickle this down further.
Even so, the ability to ask Gemini to complete chain requests – for instance, asking when the next New York Jets game is, adding it to your calendar, and sharing that invite with a friend – seems like it could arrive on other devices, and should be easy to roll out to them with Gemini – Google has even confirmed that. Integrating Gemini with, say, Samsung Notes and other third-party apps will likely take a bit longer, but could likely be introduced via an update.
The same thought process could apply to the improvements to image generation, and to Samsung’s native tool for removing people from the background of photos, both of which were unveiled for the Galaxy S25 family. Samsung so far has a good track record of rolling its AI features out to older phones, so we’ll keep our fingers crossed and hope that some of these new Galaxy AI features trickle down.
Samsung has announced a new update for Knox Suite, its enterprise security and management solution for Galaxy business smartphones.
The updates introduce a tiered plan system, designed to cater to businesses of various sizes across multiple industries, from small businesses working through a basic cybersecurity checklist to large enterprises.
This shift marks a departure from the previous single-plan model, aiming to broaden Knox Suite's appeal beyond the enterprise.
New Knox plans
The revised Knox Suite now comprises three distinct plans: Base, Essentials, and Enterprise, each of which is tailored to address varying levels of security and management needs.
The Base Plan is available at no additional cost and offers essential features like Knox Mobile Enrollment and Knox Platform for Enterprise, simplifying device setup and providing foundational security.
For companies seeking more robust management capabilities, the Essentials plan provides unified device management and real-time troubleshooting via Knox Manage and Knox Remote Support.
The Enterprise Plan, designed for organizations with extensive device fleets, adds features such as OS version control, intelligent insights, and advanced security tools like Knox E-FOTA and Asset Intelligence.
According to Samsung, businesses can scale their Knox Suite usage according to operational needs. For example, a significant advantage of Knox Suite is its compatibility with existing Enterprise Mobility Management (EMM) systems, allowing businesses already using EMM platforms to incorporate Knox Suite without disrupting their current workflows.
Samsung also partners with leading EMM providers to enhance accessibility and integration for enterprises with diverse IT setups.
“Enterprises of varying sizes and industries have diverse device management needs, but ultimately are looking toward the same end goal – enabling secure, productive mobile workspaces,” said Samsung's EVP and Head of B2B Team and Mobile eXperience Business, Jerry Park.
"Through these new scalable solutions, Knox Suite is now optimized for all types of operational use cases, empowering businesses to comprehensively and intelligently manage enterprise ecosystems."
Quordle was one of the original Wordle alternatives and is still going strong now more than 1,000 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.
Enjoy playing word games? You can also check out my NYT Connections today and NYT Strands today pages for hints and answers for those puzzles, while Marc's Wordle today column covers the original viral word game.
SPOILER WARNING: Information about Quordle today is below, so don't read on if you don't want to know the answers.
Quordle today (game #1095) - hint #1 - Vowels
How many different vowels are in Quordle today?
• The number of different vowels in Quordle today is 3*.
* Note that by vowel we mean the five standard vowels (A, E, I, O, U), not Y (which is sometimes counted as a vowel too).
Quordle today (game #1095) - hint #2 - repeated letters
Do any of today's Quordle answers contain repeated letters?
• The number of Quordle answers containing a repeated letter today is 3.
Quordle today (game #1095) - hint #3 - uncommon letters
Do the letters Q, Z, X or J appear in Quordle today?
• No. None of Q, Z, X or J appear among today's Quordle answers.
Quordle today (game #1095) - hint #4 - starting letters (1)
Do any of today's Quordle puzzles start with the same letter?
• The number of today's Quordle answers starting with the same letter is 0.
If you just want to know the answers at this stage, simply scroll down. If you're not ready yet then here's one more clue to make things a lot easier:
Quordle today (game #1095) - hint #5 - starting letters (2)
What letters do today's Quordle answers start with?
• R
• S
• W
• B
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
Quordle today (game #1095) - the answers
The answers to today's Quordle, game #1095, are…
A rare day without an E.
I was thrilled when I got RUGBY in three goes – mainly because after deciding to go with a word that began RU I couldn’t think of any others – but then I really laboured over the next word.
Words beginning in S and ending in Y are pretty common – or at least it feels like it. Getting the second letter narrowed it down a little, but it still took me three goes before I guessed SASSY – another Quordle deja-vu word that I’m sure was used recently.
Daily Sequence today (game #1095) - the answers
The answers to today's Quordle Daily Sequence, game #1095, are…
Strands is the NYT's latest word game after the likes of Wordle, Spelling Bee and Connections – and it's great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc's Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don't read on if you don't want to know the answers.
NYT Strands today (game #326) - hint #1 - today's theme
What is the theme of today's NYT Strands?
• Today's NYT Strands theme is… Udderly delicious
NYT Strands today (game #326) - hint #2 - clue words
Play any of these words to unlock the in-game hints system.
• Cow classics
NYT Strands today (game #326) - hint #4 - spangram position
What are two sides of the board that today's spangram touches?
First side: bottom, 5th column
Last side: top, 3rd column
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Strands today (game #326) - the answers
The answers to today's Strands, game #326, are…
Being lactose intolerant and also, despite this condition, a turophile, I found today’s Strands enjoyable but, much like my beloved cheese, hard to stomach.
I put my diminutive stature down to a dislike of creamy creations, as height and milk protein have been shown to be linked. Researchers have cited an obsession with DAIRY PRODUCTS as the reason why people from the Netherlands are better at reaching things on high shelves than any other nation in the world. In a year, the average Dutch person consumes over 25% more CHEESE and other milk-based products than their American or British counterparts, and this has resulted in a growth spurt over the past century, taking the Dutch from the shortest people in Europe to the tallest – the average Dutchman is more than 6ft tall and the average Dutch woman about 5ft 7in.
Anyway, a lovely easy Strands with a tasty subject matter.
Yesterday's NYT Strands answers (Wednesday, 22 January, game #325)
Strands is the NYT's new word game, following Wordle and Connections. It's now out of beta so is a fully fledged member of the NYT's games stable and can be played on the NYT Games site on desktop or mobile.
I've got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you're struggling to beat it each day.
Good morning! Let's play Connections, the NYT's clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need clues.
What should you do once you've finished? Why, play some more word games of course. I've also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc's Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don't read on if you don't want to know the answers.
NYT Connections today (game #592) - today's words
Today's NYT Connections words are…
What are some clues for today's NYT Connections groups?
Need more clues?
We're firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today's NYT Connections puzzles…
NYT Connections today (game #592) - hint #2 - group answers
What are the answers for today's NYT Connections groups?
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Connections today (game #592) - the answers
The answers to today's Connections, game #592, are…
Oh my gosh I found today’s Connections difficult.
Maybe if the RHYMES OF U.S. PRESIDENT NAMES had included Chump I would have got there, but this wasn’t the only group I was mentally grappling with.
On my third attempt I managed to link BOMBER, FEDORA, SATCHEL, and WHIP, but it wasn’t because I thought they had anything to do with PARTS OF AN INDIANA JONES COSTUME – if I’m honest, I’d forgotten his bag preference.
Cluelessly, I thought they were accessories named after a person, based on the incorrect assumption that Fedora was someone famous in the 1920s. In fact, the history of the fedora is much more interesting and culminates in a 2016 article that described the fedora hat as the world’s “most-hated fashion accessory”. Yes, this is the same year that a certain red cap rose to prominence.
Yesterday's NYT Connections answers (Wednesday, 22 January, game #591)
NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: green is easy, yellow a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don't technically need to solve the final one, as you'll be able to answer that one by a process of elimination. What's more, you can make up to four mistakes, which gives you a little bit of breathing room.
It's a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up with tricks. For instance, watch out for homophones and other wordplay that could disguise the answers.
It's playable for free via the NYT Games site on desktop or mobile.
GMK, an emerging Chinese brand in the mini PC market, has announced (originally in Chinese) the upcoming launch of a new product powered by the AMD Ryzen AI Max+ 395.
The company claims this will be the world’s first mini PC featuring the Ryzen AI Max+ 395 chip. It also plans to offer versions with non-Plus Ryzen AI Max APUs.
According to ITHome (originally in Chinese), the device is part of GMK's “ALL IN AI” strategy and is expected to debut in the first or second quarter of 2025.
AMD’s Ryzen AI Max+ 395 chip
The AMD Ryzen AI Max+ 395 processor boasts 16 Zen 5 cores, 32 threads, and a 5.1 GHz peak clock speed. Additionally, it integrates 40 RDNA 3.5 compute units, delivering solid graphics performance via the Radeon 8060S iGPU.
According to benchmarks, the Ryzen AI Max+ 395 outpaces the Intel Lunar Lake Core Ultra 9 288V in CPU tasks by threefold and surpasses NVIDIA’s GeForce RTX 4090 in AI performance tests.
With a configurable TDP of 45-120W, the processor balances efficiency and performance, positioning itself as a competitive choice for AI workloads, gaming, and mobile workstations.
This platform adopts LPDDR5x memory, achieving a bandwidth of up to 256GB/s. It also integrates a 50TOPS “XDNA 2” NPU, providing impressive AI performance tailored towards Windows 11 AI+ PCs.
The Max+ 395 specs suggest that the new GMK mini PC will likely surpass the performance of the current Evo X1 model, which features a Ryzen Strix Point HX 370 APU and is priced at $919.
During Samsung's Galaxy Unpacked 2025 event, its SmartThings division unveiled new AI technology that could be set to supercharge the smart-home experience – provided that you have a Samsung-based ecosystem, that is.
The new tools will fall under the banner of Samsung's Home AI, and include 'ambient sensing', a feature that gathers insights from connected devices around your home and adapts to your everyday life to make your smart home more efficient.
We don't have a confirmed release date yet, other than a broad 2025-2026 rollout window, which means there's plenty of time to kit your smart home out with SmartThings-enabled hardware; just bear in mind that it's likely most features will be exclusive to Samsung's devices, at least in the short term.
Here are the answers to all your burning questions…
What is ambient sensing?
Chief among these new developments is ambient sensing, whereby SmartThings devices will be able to leverage advanced sensor technology such as motion and sound detection to monitor your daily activities and create the perfect environment for every moment.
Many of Samsung's devices feature such sensors, from the new Bespoke JetBot Combo AI robot vacuum to Samsung's large appliances and the Samsung Music Frame, meaning you just might already have a few devices in your home that will benefit from the new ambient sensor technology.
What will Samsung's ambient sensing do?
Samsung provided a few examples of what its ambient sensing technology will be capable of.
So what might this look like in practice? For example, while you're working out, Samsung says SmartThings will be able to detect which kind of exercise you’re doing, offering guidance on your form and giving recommendations for how to up your gains by changing the length of exercise.
If you've just hopped out of the shower, the sound and motion made as you dry your hair could trigger your robot vacuum to collect any hair you shed in the process. Ambient sensing could also set a more relaxing mood as you approach your favorite reading chair, switching on the nearby lamp and adjusting the room's temperature.
Or, if you've got a particularly fluffy friend at home that emits wafts of fur as it jumps up on furniture, SmartThings could even recognize this and activate your air purifier to remove allergens from the air.
Indeed, it's a development I discussed with a number of executives at CES 2025, though I couldn't quite get a sense for how soon these features might manifest; now I know, and I'm delighted that it's set to happen so much sooner than I'd anticipated.
Generative AI Map View
The fun doesn't stop there; SmartThings is also set to upgrade its Home AI arsenal with generative AI technology, namely by adding further personalization to your Map View.
Now, Samsung says you'll be able to use your phone camera to capture images of furnishings around your home to make Map View more accurate to your styling.
That in turn means you'll have a better user experience when it comes to navigating around and interacting with your smart home, as Map View will know where your furniture is, and be capable of leveraging the new ambient sensing technology based on proximity.
Will SmartThings keep my home data secure?
The short answer is, Samsung says, yes.
The longer answer is that Samsung will store all information locally on your network, offering privacy by keeping the data within Samsung's appliances and devices instead of being dependent on the cloud. That means, Samsung says, that your data won't be accessible to third parties without your consent.
Samsung is, frankly, light years ahead of its smart home competition, owing to its combination of wide-ranging product categories across home and lifestyle devices, its worldwide popularity, and its various partnerships with the likes of Google for its AI tools as well as its collaboration with the Connectivity Standards Alliance on Matter.
Samsung’s first Galaxy Unpacked event was packed, and keeping with the brand's tradition, it went through all of its news in a zippy fashion. The Galaxy S25, S25 Plus, and S25 Ultra were all made official, alongside deeper partnerships with Google for new Gemini tricks, a bevy of new Galaxy AI features, major improvements to content creation, and a tease of what the company is cooking up with Google for its Android XR headset.
It was a lot, and while you can read through our live blog of the event – including on-the-ground moments captured by the TechRadar team – here we’re sharing the nine most significant things we learned from the January 22, 2025, Galaxy Unpacked.
And it all starts with, you guessed it, AI.
1. Galaxy AI is getting even smarter and more personalized
Just like the Galaxy S24 family, the S25 is all about Galaxy AI, and for 2025, Samsung is doubling down on the performance of these features and their breadth. It starts with the Qualcomm Snapdragon 8 Elite for Galaxy chipset, which is paired with 12GB of RAM and a dedicated core for AI tasks dubbed the Personal Data Engine.
The idea here is that inside the S25, S25 Plus, and S25 Ultra is a core that can be dedicated to handling AI tasks and eventually create a sort of personalized LLM for you: one that can learn your habits and the other devices you have, and serve helpful AI – in the form of Bixby, Gemini, or the new Now Brief functionality – to help you get things done faster or complete them for you without you needing to do much.
Samsung wants its devices to do more for you – not just the latest Galaxy phone, but other devices within the ecosystem too, such as a Galaxy Ring, Watch, or even a connected appliance. Ideally, it could turn off your TV for you when your watch tells your phone that you’re asleep, or it could make a recommendation to turn on a sleep mode to let you stop doom scrolling on TikTok and put the phone down.
2. The Galaxy S25 Ultra aims to deliver the complete package
The headline hardware announcement from Galaxy Unpacked was the Samsung Galaxy S25 Ultra, aka Samsung’s biggest, baddest new flagship smartphone.
At first glance, it doesn’t look too dissimilar to its predecessor, but there are some important design differences worth mentioning. For starters, the S25 Ultra has much bolder camera rings, which now look more like they do on the Galaxy Z Fold 6, and are consistent across the entire Galaxy S25 lineup. The new phone has a slightly bigger display than the S24 Ultra too; it now measures 6.9 inches, up from 6.8 inches on last year’s model, which is an increase made possible by a 15% thinner bezel.
The S25 Ultra is also thinner than its predecessor more generally, and it weighs 15g less, but the biggest difference is at the corners, which are now rounded rather than sharp (iPhone fans, rejoice).
Under the hood, Samsung’s latest flagship boasts a For Galaxy version of Qualcomm’s Snapdragon 8 Elite chipset, which is more powerful than the S24 Ultra’s Snapdragon 8 Gen 3 chipset and should deliver even better gaming and AI performance. Speaking of which, the S25 Ultra gets a larger vapor-cooling chamber than its predecessor, and you’ll also get instant access to some new Galaxy AI features like Now Brief and Audio Eraser.
For our first impressions of this new best Android phone contender, check out our hands-on Samsung Galaxy S25 Ultra review. Your move, Google and Apple!
3. The Galaxy S25 and S25 Plus step things up in terms of value
Compared to the Ultra, this year’s new standard models aren’t all that exciting, but they are objectively better than their predecessors and come with a host of future-facing upgrades.
Design-wise, you’re looking at the same fancy new camera rings as on the Galaxy S25 Ultra, and both the Galaxy S25 and Galaxy S25 Plus are 7% thinner than last year’s models.
The big news for these two phones is the RAM capacity: it’s now 12GB instead of 8GB, which brings both models in line with the S25 Ultra, and all three new devices also share the same Snapdragon 8 Elite chipset. There’s no Qualcomm/Exynos split this year, which will come as good news for European buyers.
Other hardware upgrades for the S25 include a larger vapor-cooling chamber, which should facilitate better gaming performance alongside that 8 Elite chipset, and on the software front you’ll get instant access to several new Galaxy AI features.
For an early look at both devices, check out our hands-on Samsung Galaxy S25 review and hands-on Samsung Galaxy S25 Plus review.
4. The Galaxy S25 Edge is official, and it’s crazy thin
While the phone rumor mill has been talking about an iPhone 17 Air for quite some time, Samsung beat the Cupertino-based tech giant to the punch. Just like it teased the Galaxy Ring at the end of the January 2024 Unpacked, Samsung closed out this year's Unpacked with a glimpse of an ultra-thin smartphone.
The Galaxy S25 Edge teaser showed various components stacked together in a shockingly slim chassis, for a phone that seemingly promises the Galaxy AI powers of the rest of the S25 lineup in an ultra-light build. We got to see it from afar at Galaxy Unpacked, and yes, it’s crazy thin, but it still has room for a main camera bump and seems to boast matt titanium sides.
Of course, nothing more than a quick look and a name was made official, but the minute Samsung gives us more information on the Galaxy S25 Edge, we’ll be sure to update you.
5. We got another look at Samsung and Google’s Project Moohan headset
Google and Samsung formally unveiled the Project Moohan Android XR mixed-reality headset in December of 2024, but it wouldn’t have been an Unpacked without a tease, right? It was only a brief mention, but Samsung did indeed show off a fresh look at the forthcoming headset.
The two brands are partnering not only on the Android XR platform but also on the headset itself, which is poised to deliver a complete range of XR experiences with eye- and hand-tracking. Samsung again confirmed the headset is in the works, though nothing more concrete was shared except that it will integrate with the existing Samsung ecosystem.
Separately, speaking to Bloomberg, Samsung’s TM Roh confirmed that the brand is also working on glasses with Google, and that the two companies want to ship them as soon as they’re ready. It's safe to say AR, XR, and smart glasses are still heating up.
6. You’ll get 6 months of Gemini Advanced with an S25, S25 Plus, or S25 Ultra
Considering Samsung highlighted a number of new Gemini features during Galaxy Unpacked, it’s only right that folks ordering the Galaxy S25, S25 Plus, or S25 Ultra are getting a freebie. With the purchase of Samsung’s latest flagship, you’ll get six months of Gemini Advanced at no additional cost, which should let you use all the phone's AI capabilities to the fullest without worrying about limits.
The deal also stretches the value of the Galaxy S25 lineup; Gemini Advanced is $19.99 a month in the United States, so a six-month subscription is just short of $120 in value.
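If you want to sanity-check that figure, here’s a minimal back-of-the-envelope sketch; the only input is the $19.99-a-month US price quoted above:

```python
# Rough value of the Gemini Advanced subscription bundled with an S25-series phone.
monthly_price_usd = 19.99   # Gemini Advanced monthly price in the United States
bundled_months = 6          # free months included with the purchase

bundle_value = monthly_price_usd * bundled_months
print(f"Bundled subscription value: ${bundle_value:.2f}")  # -> $119.94, just short of $120
```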
7. The S25 series phones are getting the iPhone and Pixel’s best camera features
The race for the title of best camera phone is going to be tight again in 2025, with Samsung revealing that its S25 clan will get some powerful features we’ve mostly seen from Apple and Google before now.
That includes the ability to shoot log video (which is ideal for color grading) and Samsung’s take on Google’s Best Take feature for Pixels, which it’s calling Best Face. The latter is handy if your group shots usually contain someone with unfortunate blinking timing.
If you prefer to tweak and color-grade your still photos, there’s also an equivalent of Apple’s Photographic Styles. This lets you select a picture and create a filter based on its look, before fine-tuning its white balance, saturation, and grain.
Interestingly, a demo of Gemini Live showed a presenter getting some photo editing tips from an AI assistant by talking to them about their dog photo. Snaps of your furry friend will never have poor composition again.
8. Samsung’s SmartThings ecosystem is getting new AI tools
While it wasn’t a huge portion of the keynote, SmartThings had its moment in the sun with the official announcement of new ambient sensing technology and Generative AI Map View tools to help you personalize your smart home, all under the banner of Home AI.
Ambient sensing is arguably the most exciting feature, marking the first ecosystem-wide sensor-based technology that will allow your smart home devices not only to detect where you are and what you’re doing, but also to optimize your environment accordingly.
Doing some press-ups? Well, your refrigerator might just be watching you, ready to give personalized tips on how to improve your form, or suggest adjustments to the duration of your workout.
While the second update might sound less exciting, it’s actually part of how ambient sensing can be made even more effective. Samsung’s new Gen AI Map View will allow you to photograph and upload your real furnishings into Map View, meaning your Home AI will not only know where the furniture is, but also what the furniture is. This is already somewhat possible with the Bespoke JetBot Combo AI robot vacuum, but Gen AI Map View will open the gates for even more personalization and detail.
Given that Samsung is already discussing its vision of bringing devices like the Samsung Galaxy Ring and even SmartTag 2 into the SmartThings fold, it’s not hard to imagine just how intelligent your Samsung smart home might be about to become.
Both ambient sensing and Gen AI Map View are set to roll out throughout 2025 and 2026.
9. There might be a Samsung tri-fold phone in the future
Before closing out the keynote with the Galaxy S25 Edge, Samsung showed off what looked like a roadmap that included a tri-fold phone.
While Samsung didn't share anything further, the roadmap likely shows where the company is heading with its foldable smartphone lineup. We already have the Flip and the Fold, but the category will need a new form factor to push things further, and it seems a tri-fold is that next step – one Samsung might ship sooner than we expected.
Nvidia’s RTX 5070 Ti hasn’t been given an official release date beyond February, but a European retailer has revealed when it thinks the GPU will go on sale – namely February 20.
Add your own salt now, but the retailer is Proshop over in Finland (which recently aired purported details on third-party RTX 5080 pricing, too), and it has that on-sale date for all of the many third-party RTX 5070 Ti graphics cards that it'll be selling.
With the RTX 5090 and 5080 hitting shelves on January 30, that would theoretically mean a three-week gap between these higher-end graphics cards, and the mid-range Blackwell offering, going on sale.
I was hoping for a smaller gap between these launches, as it’s the mid-range I have my eye on for my PC upgrade early this year. Although as ever, we must be skeptical about any retailer leak such as this, as Proshop could have wrong or outdated info, or might just be guessing and have shoved in a placeholder date. And to be clear, Team Green has so far only told us the GPU will arrive in February.
Analysis: One date present, one date missing
What’s interesting to note is that while the RTX 5070 Ti has had this date of February 20 attached to the GPU, the vanilla RTX 5070 hasn’t. Proshop hasn’t pinned a date on this lower-tier flavor yet.
Does that mean anything? Well, maybe not (and we can’t even assume the date means anything for the RTX 5070 Ti either). However, dropping into indulgence mode here, I guess it’s possible to read it as a hint that the RTX 5070 could be further out. If that GPU was arriving before February 20, or on that day as well, it seems likely Proshop would’ve displayed that too. If it’s later and still to be confirmed, the retailer would just leave it blank, as it has done.
It's also worth bearing in mind that we’ve just reported on a rumor that fits with this line of speculation. Namely that the RTX 5070 Ti is apparently set to arrive mid-to-late February, which February 20 matches up with nicely – and furthermore, that the RTX 5070 might not go on sale until early March.
Granted, I feel the latter rumor remains very tenuous, and I’d strongly caution against going too far with this idea right now. But it’s not unthinkable that the RTX 5070 might turn up later than the RTX 5070 Ti, and there’s been a rumor in the past that this is the plan.
It’s quite possible that Nvidia hasn’t made a definitive call yet, which is (of course) why we weren’t treated to any specific dates at CES 2025 beyond just February.
Whatever the case, I hope the RTX 5070 makes the cut for late February, as promised, rather than sliding to March (and Nvidia will surely want this too – as the latter scenario means a direct clash with AMD’s RX 9070 GPUs, rather than getting out ahead of them).
Via VideoCardz
Buffalo has introduced a new USB flash drive, the RUF3-KEV (via PC Watch, originally in Japanese), designed to provide built-in protection against malware and virus infections.
The USB 3.2 Gen 1 drive comes with in-built endpoint protection, the "DiXiM Security Endpoint," a security service that continuously monitors files saved or updated on the USB drive for any signs of infection.
This is in addition to a real-time antivirus feature which automatically isolates and removes infected files when detected, and a "heuristic function" that identifies potentially malicious programs by analyzing their behavior.
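Buffalo hasn’t published how the DiXiM heuristics actually work, but behavior-based detection of this kind generally comes down to scoring observable traits of a file rather than matching known virus signatures. Here’s a minimal, purely hypothetical sketch of that idea in Python; none of the trait names or weights come from Buffalo:

```python
# Hypothetical sketch of signature-free, behavior/trait-based scoring.
# This is NOT the DiXiM Security Endpoint's real logic; it only illustrates the idea
# of flagging files by suspicious characteristics instead of known virus signatures.

SUSPICIOUS_TRAITS = {
    "double_extension": 3,      # e.g. "invoice.pdf.exe"
    "autorun_reference": 4,     # file tries to register itself to run automatically
    "packed_executable": 2,     # heavily compressed/obfuscated binary
}

def heuristic_score(traits: set[str]) -> int:
    """Sum the weights of every suspicious trait observed for a file."""
    return sum(SUSPICIOUS_TRAITS.get(t, 0) for t in traits)

def is_probably_malicious(traits: set[str], threshold: int = 5) -> bool:
    """Flag the file for isolation if its combined score crosses the threshold."""
    return heuristic_score(traits) >= threshold

# Example: a double-extension file that also registers an autorun entry gets flagged.
print(is_probably_malicious({"double_extension", "autorun_reference"}))  # True
```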
Buffalo RUF3-KEV security mechanism and pricing
Though less eye-catching, the RUF3-KEV also supports password authentication, preventing unauthorized access.
Given how easily such a compact drive could be damaged (it measures just 19.8 x 10 x 68 mm and weighs approximately 11 grams), the RUF3-KEV forgoes a cap in favor of an "auto-return mechanism" that automatically retracts the connector when the drive is removed from the computer, protecting it from dust and physical damage.
The drive series will have three models of modest capacity: 64GB, 32GB, and 16GB. Pricing has currently only been announced in yen, with the models costing 10,000 yen, 8,300 yen, and 6,600 yen respectively.
Samsung has just unveiled its new Galaxy S25 series smartphones at its Galaxy Unpacked event, alongside a slew of brand-new AI features coming to its devices, such as the handy Now Brief. You can check out our coverage here at TechRadar.com, including our hands-on thoughts on the new Samsung Galaxy S25, Samsung Galaxy S25 Plus, and Samsung Galaxy S25 Ultra, and find out more about everything announced via our Galaxy Unpacked event liveblog.
But if you want us to truly unpack everything Samsung just revealed, as well as what we think this event means for Samsung as a whole in 2025, then you’ll need to watch our brand-new Samsung Unpacked January 2025 special episode of the TechRadar podcast.
In it, Josie Watson and I are joined by phone expert Axel Metz, fitness tech guru Matt Evans, and as always the wonderfully wise Lance Ulanoff to break down everything we saw so you can get to grips with the latest tech news.
We take a deep dive into the new phones and AI features, give you our thoughts on Samsung’s continued efforts to build an interconnected internet of things ecosystem – which goes beyond anything Apple is currently capable of – and discuss what Samsung needs for Project Moohan and its XR efforts to succeed where others have failed.
You can catch our latest podcast episode via our YouTube channel – or the embedded video above – and you can also check it out on Spotify and Apple Podcasts. You can find all our other episodes there too, including our CES 2025 special.
Samsung took a few moments – literal seconds – out of its Samsung Galaxy S25 launch event to talk about Project Moohan (its upcoming VR headset) and Android XR, and how the platform will leverage multimodal AI to bring awesome (but currently nebulous) upgrades to XR systems. Thankfully it had more to say in a separate interview with TM Roh, the president of Samsung’s Mobile Experience division, including one detail which makes me believe Samsung’s tech won’t crash and burn like the Apple Vision Pro.
Out of the gate we have some bad news courtesy of the interview conducted by Bloomberg (behind a paywall): we still don’t have a release date for Samsung’s headset or AR glasses. Roh did reaffirm the consumer version of Moohan is coming this year, though he didn’t reveal precisely when, or how much it’ll cost at launch.
Roh also added that Samsung is working on AR glasses – though again, he refused to elaborate on when they might launch, just that they would arrive eventually once they reach the quality and readiness Samsung wants (which Roh hopes is “as soon as possible”).
However, the good news is that Samsung and its partner Google seem to have understood their core focus shouldn’t just be hardware, but software too.
An important lesson learned
Roh reportedly said that one key part of launching the XR devices will be having enough exclusive, original, and worthwhile content ready at launch. To achieve this goal, Samsung and Google are apparently working with third parties to develop XR software for Android.
Thank goodness.
Samsung is learning from Apple's mistakes
I’m not the only one to say this, but a huge issue with the Apple Vision Pro’s launch wasn’t intrinsically that it cost $3,500 / £3,499 / AU$5,999; it was that it didn’t justify costing $3,500 / £3,499 / AU$5,999. Sure, it boasted incredible specs, but fundamentally it couldn’t do anything you couldn’t just do with a Mac or iPad and a Meta Quest 3 – pairings that would cost you significantly less. And it could do less than either of those pairings in some ways, because the Quest platform is brimming with exclusive software.
Apple had a couple of impressive exclusives, like its Disney Plus 3D content, but nowhere near enough to compete with the market at the price it attempted to demand. That’s why, a year on from its release, it just hasn’t had the staying power many hoped it would.
TM Roh’s comments at least show Samsung is aware of the importance of software, though given how badly people have been burned previously by other brands, I’m hesitant to take the comments at face value – not until we can see and try the software he’s teasing. Don’t get me wrong, I’m desperate for Samsung to succeed so Meta can face some proper competition – right now, the closest thing we have to a Quest-killer is the rumored Asus Tarius headset (which uses the Quest’s operating system because it’s a collab between Asus and Meta) – but until Samsung and Google show us the goods I’ll remain cautiously optimistic.
For now, we’ll have to make do with Samsung talking the talk, and wait and see if it can walk the walk when it shows us what Project Moohan has in store for us later in 2025.
GL.iNet has unveiled the Comet (GL-RM1), an open source remote KVM (keyboard, video, mouse) device running a Linux distribution based on the open-source project Buildroot.
Designed for users who require remote access to PCs and servers, the Comet connects via HDMI and USB for KVM functionality, while its Ethernet port integrates with your network for remote access.
GL.iNet's Comet product page is currently only offering a mailing list subscription, but it's notable that the company is describing the device as a failsafe alternative to remote desktop software (power blips aside).
Remote control at a new level
With the Comet's BIOS/UEFI-level compatibility, users can perform tasks like OS installations and troubleshooting without requiring the target system to be operational.
It's powered by a quad-core 1.5 GHz CPU and a 2.0 TOPS neural processing unit (NPU) which supports lightweight AI applications.
One of the key features of the Comet is its built-in, permanently free VPN service, providing secure remote access. GL.iNet also equips the Comet with multiple remote boot options, including Wake-on-LAN (WOL), a mechanical button, and an ATX control board.
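To give a sense of what Wake-on-LAN involves under the hood: the standard "magic packet" is simply six 0xFF bytes followed by the target machine's MAC address repeated 16 times, broadcast over UDP. Below is a minimal sketch of a generic WOL sender; it illustrates the protocol in general, not GL.iNet's own firmware:

```python
# Minimal Wake-on-LAN sender: builds and broadcasts a standard "magic packet".
import socket

def send_wol(mac: str, broadcast_ip: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes long")
    # Magic packet = 6 x 0xFF, then the MAC address repeated 16 times.
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast_ip, port))

# Example (hypothetical MAC address):
# send_wol("AA:BB:CC:DD:EE:FF")
```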