
TechRadar News

All the latest content from the TechRadar team

Directive 8020's creative director says the game features 'two critical aspects' that change the formula of the Dark Pictures games, including real-time threats and the ability to rewind critical story choices

Thu, 06/12/2025 - 06:06
  • Directive 8020 will feature "real-time threats" that will ramp up players' fear
  • Creative director, Will Doyle, says the game's Turning Point narrative system will allow players to explore the story's branching outcomes
  • Doyle explains that these two changes to the Dark Pictures formula are "critical" to how players interact with the story

Supermassive Games has introduced two "critical" changes to Directive 8020 that will affect how players interact with the game's branching narrative.

Speaking in an interview with TechRadar Gaming during Summer Game Fest, Directive 8020's creative director, Will Doyle, explained that the team has switched up the survival horror aspect for the next Dark Pictures anthology entry by implementing "real-time threats".

"There are two big things that have fundamentally changed what this game is," Doyle said. "One of them is the real-time threats, the effect that that's had on changing the control system, and the fear that it puts into you when you're playing it really, really gets your heart pounding."

The Until Dawn creators have also changed how players make important story choices by introducing something called the Turning Point system. This mechanic allows players to rewind at certain points in the narrative, which Doyle says will allow players to explore different outcomes, unlike in any other Dark Pictures entry.

"The other is the turning point system and the ability to explore our branching story. Because for me, that's kind of two critical aspects of what we've done," he said.

"We're a narrative game, and you've got a new system that lets you explore the narrative in a fun way, and we are a game that lets you move around and have action in it, and we've pushed that forward as well."

Directive 8020 is the fifth entry in the planned eight-part Dark Pictures series and arrives on October 2 for PlayStation 5, Xbox Series X, Xbox Series S, and PC.

In the game, a colony ship called the Cassiopeia has crash-landed on the planet Tau Ceti f in the middle of an expedition to save Earth. The crew soon discovers they aren't alone and must survive being hunted by an alien organism capable of mimicking its prey.

You might also like...
Categories: Technology

Major Interpol operation takes thousands of infostealer sites offline, dozens arrested

Thu, 06/12/2025 - 06:03
  • Interpol and international partners conducted Operation Secure
  • In four months, the police arrested dozens of people and seized vital data
  • Thousands of IP addresses hosting infostealers were taken down, as well

Dozens of people have been arrested, and thousands of IP addresses seized, in an Interpol-led international law enforcement operation aimed at disrupting a network of infostealers and other malware.

The international law enforcement agency said Operation Secure took place between January and April 2025, and saw police agencies in 26 countries work together to locate servers, map physical networks, and move in to disrupt cybercriminal campaigns.

As a result, 32 people were arrested: 18 in Vietnam, 12 in Sri Lanka, and two in Nauru. Among them was the individual suspected of running the entire operation, who was found with around $11,500 in cash, several SIM cards, and business registration documents which, according to Interpol, point to a scheme “to open and sell corporate accounts”.

Servers, IP addresses, and gigs of data

In Hong Kong, the police analyzed more than 1,700 pieces of intelligence shared by Interpol, which helped them identify 117 command-and-control servers hosted on 89 internet service providers.

These servers were used as central hubs for different cybercriminal campaigns, including phishing, fraud, and social media scams.

Aside from the arrests, the police also seized 41 servers and obtained more than 100 GB of data.

Furthermore, the operation took down more than 20,000 malicious IP addresses linked to information stealers and identified 216,000 victims and potential victims, some of whom have already been notified.

Several private cybersecurity companies – Group-IB, Kaspersky, and Trend Micro – also participated in the campaign by providing valuable intelligence.

“Interpol continues to support practical, collaborative action against global cyber threats,” commented Neal Jetton, the agency’s Director of Cybercrime.

“Operation Secure has once again shown the power of intelligence sharing in disrupting malicious infrastructure and preventing large-scale harm to both individuals and businesses.”

You might also like
Categories: Technology

Sam Altman doesn’t think you should be worried about ChatGPT’s energy usage - reveals exactly how much power each prompt uses

Thu, 06/12/2025 - 05:52
  • Sam Altman says a ChatGPT prompt uses "0.34 watt-hours" of electricity, roughly what an oven uses in one second
  • He also says a single ChatGPT prompt uses "0.000085 gallons of water; roughly one-fifteenth of a teaspoon"
  • While that's not a lot in isolation, ChatGPT has over 400 million weekly users, with multiple prompts per day

OpenAI CEO Sam Altman has revealed ChatGPT's energy usage for a single prompt, and while it's lower than you might expect, at global scale it could still have a significant impact on the planet.

Writing on his blog, Altman said, "The average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one-fifteenth of a teaspoon."

While that might not sound like a lot for an isolated prompt, ChatGPT has approximately 400 million active weekly users, and that number is growing at a rapid rate. Bear in mind there's a growing number of AI tools and chatbots on the market, including Google Gemini and Anthropic's Claude, so overall AI energy usage will be even higher.
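
To put those figures in context, here's a quick back-of-the-envelope calculation using Altman's per-prompt number; the prompts-per-user rate below is a hypothetical assumption for illustration, not an OpenAI statistic.

```python
# Rough aggregate estimate based on Altman's 0.34 Wh-per-prompt figure.
# PROMPTS_PER_USER_PER_WEEK is an assumed value for illustration only.
WH_PER_PROMPT = 0.34              # watt-hours per prompt (Altman's figure)
WEEKLY_USERS = 400_000_000        # approximate weekly active users
PROMPTS_PER_USER_PER_WEEK = 10    # hypothetical usage rate

weekly_wh = WH_PER_PROMPT * WEEKLY_USERS * PROMPTS_PER_USER_PER_WEEK
annual_gwh = weekly_wh * 52 / 1_000_000_000   # watt-hours -> gigawatt-hours

print(f"Weekly: {weekly_wh / 1_000_000:,.0f} MWh")   # ~1,360 MWh
print(f"Annual: {annual_gwh:,.0f} GWh")              # ~71 GWh
```

Even on those modest assumptions, the total lands in the tens of gigawatt-hours per year, which is why the tiny per-prompt figure only tells part of the story.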

Last month, we reported on a study from MIT Technology Review which found that a five-second AI video uses as much energy as a microwave running for an hour or more. While Altman's ChatGPT prompt energy usage reveal is nowhere near as high as that, there are still concerns considering how much people interact with AI.

We rely on AI, so is this energy consumption a concern?

Concern about ChatGPT's energy consumption is constant, and criticism is becoming increasingly vocal as AI usage continues to rise. While Altman's blog post will put some minds at ease, considering the relatively low energy and water usage in isolation, it could also spark more uproar.

Earlier this week, a mass ChatGPT outage left millions of people unable to interact with the chatbot. Over the 10-hour-plus downtime, I received emails from thousands of readers who gave me a new perspective on AI.

While I'd be lying if I said AI's energy consumption doesn't concern me, it would be unfair to overlook the positives of the technology and how it is improving the lives of millions.

The climate crisis affects all of us, but unfortunately it's the working class that ultimately pays the price. ChatGPT's energy consumption at a mass scale may be a severe problem in the future, but then again, so are the private jets flying 10-minute flights.

The AI climate concerns are not black and white, and those who criticise the impact of the technology on the planet are equally vocal about the impact of other technologies. That said, we're only at the beginning of the AI revolution, and energy consumption will continue to rise. At what point should we be worried?

You might also like
Categories: Technology

Making the case for a unified threat intelligence model

Thu, 06/12/2025 - 05:22

If the AI Action Summit in Paris earlier this year is the sign of things to come, establishing a coordinated approach to regulation and governance will be no easy task in the short or long term. To an extent, this is understandable – these processes rarely operate at pace, particularly when stakeholders are still trying to understand the impact of important trends, such as the emergence of advanced AI.

The problem this creates, however, is that without consensus, organizations must work against the backdrop of a more complex and unpredictable threat landscape, with the tools used by threat actors more advanced and accessible than ever before. In fact, the pace of change around AI tools is so rapid that it’s difficult to properly predict exactly what new cyber risks will emerge.

Not all one-way traffic

Thankfully, it’s not all one-way traffic. On one hand, threat actors are using generative tools to automate phishing campaigns, identify system vulnerabilities and write malicious code. On the other, forward-thinking organizations are using the same techniques to stay one step ahead. Nevertheless, security teams are in a difficult position – expected to respond to a growing range of threats with the same or fewer resources, while also managing a greater range of more serious risks.

The result is a situation where many organizations are constantly playing catch-up. Threat intelligence may be available, but without the right tools and frameworks in place to both distil and make use of it, much of that insight stays hidden or goes underutilized.

This plays out in different ways. In some organizations, core security functions – from threat intelligence and automation to incident response – remain siloed, with limited coordination or shared visibility. In others, strategies are developed in isolation, missing the opportunity to tap into the wealth of experience and insight already available across the broader security community.

The result? Individual businesses are left to fend off highly organized, fast-moving threat groups that thrive on shared intelligence and agile tactics, and are often several steps ahead.

The power of collaboration

To address these essential issues, organizations are relying more heavily on security collaboration and collective defense, with Information Sharing and Analysis Centers (ISACs) among the most established and effective approaches.

Operating across sectors, these groups are designed to collect, analyze and distribute actionable threat intelligence, while also equipping members with the tools and resources needed to strengthen resilience. Today, the National Council of ISACs, for example, includes nearly 30 sector-specific organizations – a clear sign of how far this model has evolved.

The industry clearly sees the value. According to recent research, more than 90% of respondents say collaboration and information sharing are either very important or critical to their cybersecurity strategy. The problem? Seven in ten (70%) believe they could be doing more, with nearly one in five admitting they could share significantly more intelligence than they currently do.

Worryingly, more than half of those surveyed (53%) said their organization doesn’t engage with an ISAC at all. Perhaps even more concerning, 28% weren’t even aware that ISACs exist, underlining how much ground there is still to cover in building a truly collaborative cyber defense ecosystem.

But for any approach to collective defense to succeed, organizations need to establish workflows that allow for the rapid, structured exchange of indicators of compromise (IoCs), tactics, techniques and procedures (TTPs), and real-world incident reports. Communities that get this right create a multiplier effect – the more each participant shares, the stronger the whole becomes.
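
To make that concrete, a shared indicator of compromise typically travels between members as a small structured record that downstream tools can ingest automatically. The sketch below is a loosely STIX-inspired illustration; the field names and the share_with_community helper are assumptions for the example, not any ISAC's actual schema or API.

```python
# Illustrative, STIX-inspired indicator record for community sharing.
# Field names and the sharing helper are hypothetical, for illustration only.
from datetime import datetime, timezone

indicator = {
    "type": "indicator",
    "pattern": "[ipv4-addr:value = '203.0.113.42']",   # documentation-range IP
    "labels": ["malicious-activity", "infostealer-c2"],
    "confidence": 80,                                    # analyst confidence, 0-100
    "valid_from": datetime.now(timezone.utc).isoformat(),
    "source": "example-isac-member",
}

def share_with_community(record: dict) -> None:
    """Stand-in for pushing a record to a sharing platform (for example over TAXII)."""
    print(f"Sharing {record['type']} matching {record['pattern']} "
          f"from {record['source']}")

share_with_community(indicator)
```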

Proactive security

All of this supports a shift from reactive to proactive security. Today’s teams must be able to identify risks before they escalate and take preventive action in near real time.

But that’s easier said than done. Security operations are often flooded with data from multiple sources, making it hard to separate the signal from the noise. That’s why threat intelligence platforms (TIPs) are becoming vital in helping ingest and operationalize threat data and in the process, reducing manual overhead and improving decision-making.

The best TIPs also enable automated intelligence sharing with external communities. In doing so, they act as a nerve center, connecting internal teams with trusted partners outside the organization.

This can transform the sophistication and speed that can be applied to threat intelligence, empowering security teams to reduce their reliance on manual processes while boosting efficiency, saving time, and improving accuracy.

TIPs can also broaden the types of data used in the threat intelligence process, integrating structured and unstructured sources and delivering the results as standardized output.
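
As a minimal sketch of what that looks like in practice, the snippet below maps differently shaped feed entries onto one internal record format and deduplicates them; the field names are assumptions for illustration rather than any particular TIP's schema.

```python
# Minimal sketch: normalize indicators from differently shaped feeds into one
# common record format, then deduplicate so repeat reports don't inflate alerts.
def normalize(entry: dict, source: str) -> dict:
    """Map a raw feed entry onto a common internal schema (illustrative fields)."""
    return {
        "value": entry.get("ioc") or entry.get("indicator") or entry.get("value"),
        "kind": entry.get("type", "unknown"),
        "source": source,
    }

raw_feeds = {
    "feed_a": [{"ioc": "198.51.100.7", "type": "ipv4"}],
    "feed_b": [{"indicator": "badsite.example", "type": "domain"}],
    "feed_c": [{"ioc": "198.51.100.7", "type": "ipv4"}],   # duplicate report
}

normalized = [normalize(e, src) for src, entries in raw_feeds.items() for e in entries]
unique = list({rec["value"]: rec for rec in normalized}.values())
print(unique)   # two unique indicators from three raw entries
```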

Looking at the overall security landscape at present, the challenges posed by AI-powered cybercrime are already prompting regulators to push for higher standards. Across multiple industries, new rules demand that organizations go beyond point solutions and build resilience into their day-to-day security strategies.

In practical terms, that means engaging with trusted partners and building response frameworks that are ready for action. If and when international standards emerge, organizations that embrace the collective defense approach will be strongly placed to ensure their networks and data remain safe.

Train up with the best online cybersecurity courses.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

4 crucial Superman movie details you might have missed in the final trailer for James Gunn's next DC comic book film

Thu, 06/12/2025 - 05:01
  • DC Studios has released one final trailer for Superman
  • The James Gunn-directed superhero movie releases on July 11
  • Its latest teaser contains some crucial details about its plot and the odd Easter egg

James Gunn's Superman movie is officially less than a month away (at the time of publication, anyway). And, as the countdown towards its July 11 release gathers pace, DC Studios has debuted one final trailer for the David Corenswet-led comic book movie.

The trailer, which also confirms tickets are now on sale for one of 2025's most anticipated new movies, doesn't just contain footage we've seen in the DC Universe (DCU) film's other teasers.

Indeed, there are numerous new clips that hint at the direction of its plot and contain the odd DC Comics Easter egg. So, here are four crucial details and/or references to previous Superman projects you might have missed upon first viewing. Potentially big spoilers follow, so proceed at your own risk.

1. A glimpse at Mister Handsome

Who's the mysterious individual standing behind Lex Luthor? (Image credit: DC Studios/Warner Bros. Pictures)

If your initial reaction to this individual is "...who?", you're not alone. Mister Handsome isn't an established DC Comics character – in fact, he appears to be a brand-new creation for this DCU Chapter One movie.

Okay, so who is he? We don't actually know, but that hasn't stopped eagle-eyed fans from thinking they've spotted him in Superman's latest trailer.

Look behind Nicholas Hoult's Lex Luthor in the above image, and you'll see what appears to be a pale-skinned individual standing on some form of podium. Viewers think this human-looking creature is the aforementioned Mister Handsome.

Superman's behind-the-scenes vignette gave us a better look at Mister Handsome (Image credit: DC Studios/Warner Bros. Pictures)

There's further evidence to suggest this is the case. In Superman's behind-the-scenes featurette, we catch another glimpse of a being who not only resembles the character in Superman's final trailer, but also has a visibly different posture and appearance to most humans.

Additionally, speaking as part of a recent Fandango interview, Hoult and Gunn revealed that Hoult's son, who occasionally visited the Superman set, had a fondness for Mister Handsome. Elaborating on who this character is, Gunn teased: "Mister Handsome is Lex’s creature that he created in a petri dish who drives around on this flying platform that is the ugliest, grossest creature in the world."

This doesn't confirm that the pale-skinned individual we briefly see is Mister Handsome, but Gunn's description lines up with what little we've seen of them. Throw in the incredibly ironic name for someone who's not attractive to look at, and this has to be Mister Handsome.

2. Ultraman unmasked

Things are heating up between Ultraman and Superman (Image credit: DC Studios/Warner Bros. Pictures)

There's been plenty of speculation about Ultraman's actual identity. I've extensively covered the biggest fan theory about this secondary villain, so I won't do so again here. If you want more details on said hypothesis, read my pieces on Ultraman's supposed ties to another villain called the Hammer of Boravia, my breakdown of Superman's second trailer, and the biggest Ultraman fan theory seemingly being confirmed by some Funko Pop figures.

Superman's third trailer doesn't deliver much in the way of new Ultraman footage, but there are a couple of easy-to-miss shots of this mysterious individual without his mask. The first shows Ultraman blasting the Man of Steel with his own heat vision ability. Later, we see Supes whacking an unmasked Ultraman with a large metallic object.

Neither clip is slow or long enough to give us a good look at Ultraman's face. Nonetheless, we know he'll lose his face covering at some point, which suggests we'll find out who he really is. My money is still on him being a Superman clone.

3. Green Lantern's 'Big Blue' call back

A superpowered humanoid alien is enough to make anyone green with envy (Image credit: DC Studios/Warner Bros. Pictures)

At the trailer's 1:20 mark, we see Nathan Fillion's Guy Gardner/Green Lantern squaring up to Kal-El. Clearly, something's happened between the pair and it seems Gardner is trying to goad Supes to attack him. You don't mock someone by saying "make a move, Big Blue" if you're not trying to antagonize them.

"Big Blue" might sound like a form of derision on Gardner's part, but it's actually a call back to one of Superman's oldest nicknames. Indeed, it's a reference to 'The Big Blue Boy Scout' moniker that the Son of Krypton also went by in the 1950s. It's also an alias that's been shortened to 'Big Blue' in countless other comic books.

This, then, appears to be one of many homages that'll be paid to the Man of Tomorrow's literary roots in his latest big-screen adventure.

4. Who is baby Joey in Superman?

One Kryptonian and a baby (Image credit: DC Studios/Warner Bros. Pictures)

The final big reveal in Superman's latest trailer concerns a character called Joey. He's the alien baby we see Clark Kent's superpowered alter-ego holding/rescuing as some form of cosmic explosion erupts around them.

We already know who Joey's father is, too. Superman's Funko Pop figurine collection has already ruined that surprise, with Joey being the progeny of Anthony Carrigan's Metamorpho. For the uninitiated: essentially, he's a metahuman who can create any type of element out of thin air.

The prevailing fan theory – one strengthened by footage shown in Superman's second trailer – is that Luthor is exploiting Metamorpho by using Joey as leverage. And, given Metamorpho's unique powers, he's tailor-made to craft kryptonite, aka the only substance in the known galaxy that can harm/weaken someone like Superman. It seems, then, that Metamorpho will be coerced into creating kryptonite that Luthor can use against Supes, in return for Luthor not harming Joey.

You might also like
Categories: Technology

Pixel 6a phones keep catching fire, so Google is going to put limits on their batteries

Thu, 06/12/2025 - 04:48
  • Some Pixel 6a phones have been overheating while charging
  • Google says it will address the issue with a software update
  • The update will limit battery capacity and charging speed

You may have spotted several reports of Google Pixel 6a phones catching fire and burning up in recent weeks, and Google is now taking steps to stop any future incidents by limiting the battery capacity and charging speed on the 2022 handsets.

In a statement to Android Authority, a Google spokesperson said that a "subset" of Pixel 6a phones will soon get a "mandatory" software update, reducing battery capacity and charging performance once 400 cycles have been reached.

This should "reduce the risk of potential battery overheating" according to Google, though it will leave you with a phone that charges up more slowly and doesn't last as long between charges – not great for a handset that's only been out three years.

Google says users with affected phones will be contacted next month with details of what they need to do. Meanwhile, the Android Authority team has spotted a warning about a potential battery overheating issue on the Pixel 6a in the latest Android 16 beta.

Keep an eye on your phone

Evidence of a burned out Pixel 6a (Image credit: Ariella / Android Authority)

It's difficult to gauge just how widespread this problem is, but Android Authority has already recorded two separate incidents, and there are pictures as proof. It's scary to think that your phone could suddenly burst into flames while it's left charging.

Given that Google only mentions a "subset" of Pixel 6a phones, it's possible that only certain handsets are affected. We may not see this update roll out for every single Pixel 6a, but right now it's not clear either way.

If you own a Google Pixel 6a, it's a good idea to keep an eye on it while it's charging: look out for any kind of deformation, and check that the handset isn't getting too hot to the touch. Very short battery life can also indicate a battery health problem.

Swelling and overheating can potentially happen to any lithium-ion battery, if it gets physically damaged or somehow malfunctions, but it's an issue that keeps happening with Pixel phones – most recently with the Google Pixel 7a.

You might also like
Categories: Technology

The Google Pixel 10 might have MagSafe-like charging – and do it better than the Samsung Galaxy S25

Thu, 06/12/2025 - 04:46
  • The Google Pixel 10 series could have built-in magnets for magnetic wireless charging
  • A new line of 'Pixelsnap' accessories might launch to take advantage of this feature
  • The Pixel 10 series could also have improved speakers

Android phones have technically been capable of supporting MagSafe-like charging for a while now, but so far, no big-name handsets have fully embraced the technology. The Samsung Galaxy S25 series, for instance, offers a half-measure solution that requires a magnetic case – but the Google Pixel 10 series might actually have magnets built in.

This rumor comes from Android Authority, and if true, would mean you’d be able to magnetically attach wireless chargers and other accessories to the Google Pixel 10, with no special cases required.

Android Authority claims to have seen “credible marketing materials intended for retailers” that show Google is working on magnetic accessories for the Pixel 10. These accessories would use the Qi2 standard (which enables magnetic wireless charging), and they include a ‘Pixelsnap Charger’, a ‘Pixelsnap Charger with Stand’, and a ‘Pixelsnap Ring Stand’.

The first two are self-explanatory, while the ‘Pixelsnap Ring Stand’ might be a stand for your phone that doesn’t include a charger.

Given that Google has previously been rumored to be working on a ‘Hub Mode’ for its phones, these stands would make sense, as Hub Mode would work like a smart display, where a stand would come in handy.

Faster charging and better speakers

The Google Pixel 9 (Image credit: Philip Berne / Future)

In any case, Android Authority also found evidence of an upcoming Google wireless charger in a trade database, and its specs allow, in theory, for 60W wireless charging – though since current Pixel models can’t even charge that fast with wires, it’s likely that Google would restrict the speeds.

The database also mentions that this upcoming charger will come in ‘Rock Candy’ and ‘Mist’ shades, which Android Authority speculates will translate to black and off-white, respectively.

And as well as a charging upgrade, the Google Pixel 10 series could also get improved speakers, with Android Headlines claiming that they will be the best speakers ever on a Pixel phone, albeit ones lacking Dolby Atmos.

As ever with leaks, we’d take all of this with a pinch of salt for now, but it’s surprising that Android device makers aren’t yet embracing magnetic charging, so we’re hopeful that the Google Pixel 10 series will. The phones will probably launch in August, so we should find out whether they do then.

You might also like
Categories: Technology

Nvidia expands AI infrastructure reach in major push across Europe

Thu, 06/12/2025 - 04:45
  • Tens of thousands of Nvidia chips will be deployed across many European countries
  • Nvidia is partnering with key telecom firms across the continent
  • AI centers will further research and offer training opportunities

Nvidia has announced significant new plans to support European customers as the region looks to bolster sovereignty requirements amid ongoing global trade concerns.

The chipmaker has revealed France, Italy, Spain and the UK are all deploying Nvidia Blackwell systems to build their own sovereign AI infrastructure, and it has also pledged to build an AI factory in Germany for industrial manufacturing applications, together with further AI technology centers in Germany, Sweden, Italy, Spain, the UK and Finland.

Nvidia's expansion across Europe comes amid partnerships with a number of European firms, including Mistral AI, Orange, Swisscom, Telefónica and Telenor.

Nvidia European expansion

Though confirmed in an Nvidia press release, many of the infrastructure expansions rely on local partnerships.

Mistral AI will deploy 18,000 Grace Blackwell systems in France; Nebius and Nscale will roll out 14,000 Blackwell GPUs via new data centers in the UK; Germany will launch the world's first industrial AI cloud with 10,000 Blackwell GPUs; and Italy is to develop the "Large Colosseum" reasoning model on Grace Blackwell Superchips.

Further partnerships with European telecommunication companies will see Orange, Fastweb, Telenor, Swisscom and Telefónica launch their own AI models and tools.

"Every industrial revolution begins with infrastructure. AI is the essential infrastructure of our time, just as electricity and the internet once were," CEO Jensen Huang explained.

UK Tech Secretary Peter Kyle expanded on Huang's notion: "Just as coal and electricity once defined our past, AI is defining our future."

A series of AI centers are also to be established across many European countries, including Germany, Sweden, Italy, Spain, the UK and Finland, with the aim of accelerating AI research and offering local upskilling options.

"With bold leadership from Europe’s governments and industries, AI will drive transformative innovation and prosperity for generations to come," Huang added.

The news follows Huang's recent appearance at London Tech Week 2025, where he joined UK Prime Minister Sir Keir Starmer on stage to announce new expansions in the country, and hail the impact of AI investment.

You might also like
Categories: Technology

I get annoying spam calls all the time, so I can’t wait for the Call Screening feature in iOS 26

Thu, 06/12/2025 - 04:41
  • iOS 26 adds the new Call Screening feature to iPhone
  • The new feature automatically asks callers for a name and reason for calling, and provides a real-time transcript
  • New data suggests more than 1 billion hoax calls could be intercepted

A new feature in iOS 26 could block more than a billion spam calls each year, according to new data.

Apple is adding a new Call Screening feature to iPhone with iOS 26, and with Call Screening enabled, your iPhone will ask for a name and reason for calling before sending the call through, building on the Live Voicemail feature added with iOS 17.

Similarly to Live Voicemail, Call Screening provides a real-time transcription of the caller’s response to those initial questions, and then gives the user the choice to pick up or ignore the call.

The feature will also be available in the Wi-Fi calling-enabled Phone apps coming with iPadOS 26 and macOS Tahoe 26.

A billion blocked calls

The iPhone 16 family will all get Call Screening, whether it's powered by Apple Intelligence or not. (Image credit: Future)

Analysis by second-hand phone marketplace Compare and Recycle suggests that this could block more than a billion scam calls each year in the UK alone.

Though the report provided by Compare and Recycle is UK-specific, it stands to reason that this figure could increase quite substantially with other countries factored in.

The report estimates that the average person in the UK gets four spam calls per month. The report also estimates that just shy of 24 million people in the UK will get access to Call Screening, working out to more than 1.1 billion intercepted calls each year.
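
The headline figure is just those two estimates multiplied out, as the quick check below shows (the inputs are Compare and Recycle's numbers, reproduced here for illustration).

```python
# Reproducing the Compare and Recycle estimate quoted above.
uk_users_with_call_screening = 24_000_000   # "just shy of 24 million"
spam_calls_per_person_per_month = 4         # estimated average

calls_per_year = uk_users_with_call_screening * spam_calls_per_person_per_month * 12
print(f"{calls_per_year:,} intercepted calls per year")   # 1,152,000,000
```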

Personally, I can’t stand spam calls, and there have been days – and even whole months – where I seem to get much more than the estimated four calls.

Additionally, I get plenty of calls that drop as soon as I pick up, or as soon as I say something. It’s never nice to imagine that my number’s been scraped or marked as ‘active’ in a database for scammers.

To be honest, the issue is so prevalent that I sometimes don’t pick up the phone at all, instead waiting for a voicemail or follow-up email to know exactly who's trying to reach me.

The addition of Call Screening could see me pop the SIM card out of my trusty Oppo Find X8 Pro and back into one of the best iPhones, for the sake of using my phone as an actual phone again.

With that said, the official Apple press release for iOS 26 doesn't make it clear whether Call Screening is an Apple Intelligence feature, in which case my iPhone 15 would be out of luck.

WWDC saw the announcement of plenty more features for iPhone, iPad, and Mac – head over to our WWDC 2025 recap for a full rundown.

And be sure to let us know whether you’re looking forward to using Call Screening on your iPhone in the comments below.

You might also like
Categories: Technology

This external RTX 4090 GPU comes with Thunderbolt 5 connectivity, but sadly, I don't think that you can connect it to an Apple MacBook Pro

Wed, 06/11/2025 - 17:45
  • Dual Thunderbolt 5 ports and OCuLink elevate this eGPU beyond typical external GPU standards
  • Nvidia's Ada Lovelace cards shine in this unit
  • Compact design and bold specs make FEVM FNGT5 Pro a tempting power upgrade for PCs

External GPUs have long served as a way to upgrade a laptop’s graphical capabilities, particularly for users whose machines lack discrete GPUs.

The FNGT5 Pro from Chinese manufacturer FEVM is the latest entrant in this niche category, bringing an ambitious mix of high-end GPU options and modern connectivity features.

The FNGT5 Pro supports three RTX 40-series laptop GPUs, specifically the RTX 4060, 4080, and 4090. This might raise eyebrows, but it appears to be a calculated decision to balance power and heat management within such a compact enclosure.

RTX eGPU aims high

Measuring 142 x 100 x 60 mm and with a total volume of 0.86 liters, the FNGT5 Pro is compact and travel-friendly, though not quite pocket-sized.

Despite its portability, the device features dual Thunderbolt 5 ports (100W upstream and 30W downstream), a high-speed USB-A port, and an OCuLink interface.

Offering both Thunderbolt 5 and OCuLink sets it apart from most rivals, which typically offer just one of the two.

Display connectivity is handled by HDMI 2.1 and DisplayPort 1.4a outputs.

If you're part of the Apple ecosystem, however, don’t get too excited – you likely can’t use this eGPU with a MacBook Pro.

Apple has not supported external GPUs since its transition to Apple Silicon, and even earlier Intel-based Macs were only compatible with Thunderbolt 3 eGPUs using officially supported AMD GPUs.

Despite Thunderbolt 5 being theoretically backward-compatible and extremely fast, macOS lacks the driver-level support needed for Nvidia cards, especially those housed in non-certified enclosures.

So, while you could physically connect the FNGT5 Pro to a MacBook via Thunderbolt, it’s highly unlikely to function as intended.

As for pricing, the top-tier RTX 4090 Laptop GPU, with 16GB of memory and 9,728 CUDA cores, costs $1,374 – steep, but in line with desktop equivalents.

The RTX 4080, featuring 7,424 CUDA cores and 12GB of memory, is priced at $1,040, while the entry-level RTX 4060, with 3,072 CUDA cores and 8GB of RAM, comes in at $555.

For users seeking the best laptop for video editing or for Photoshop, pairing a compatible system with a powerful eGPU like the FNGT5 Pro can help close the performance gap without committing to a full desktop setup.

Via TomsHardware

You might also like
Categories: Technology

Meet the 'Duracell Bunny' of SSDs that can withstand 50 drive writes per day for five whole years - but it won't come cheap

Wed, 06/11/2025 - 17:09
  • InnoGrit N3X SSD delivers 50 DWPD endurance but costs more than typical enterprise drives
  • Built for caching, inference, and workloads that punish ordinary SSDs
  • Runs entirely in SLC mode, sacrificing capacity for serious performance gains

The InnoGrit N3X SSD is a high-endurance storage solution aimed at enterprise workloads with extreme write demands.

Unveiled at Computex 2025, and featuring Kioxia’s second-generation XL-Flash operating in SLC mode, the drive is engineered to deliver 50 drive writes per day (DWPD) over five years, far exceeding the endurance of typical enterprise SSDs.

This level of durability is impressive, but it also raises questions about the cost of the device and whether its performance will justify the expected premium.

SCM roots and a specialized architecture

At the heart of the N3X is storage class memory (SCM), a memory tier designed to bridge the performance gap between DRAM and traditional NAND flash.

When used in SLC mode, Kioxia’s XL-Flash functions as a type of SCM, promising ultra-low latency and high endurance.

Unlike standard NAND, which stores multiple bits per cell, operating XL-Flash in SLC mode prioritizes speed and reliability over capacity.

This design choice closely mirrors the original goals of Intel’s now-discontinued Optane memory, positioning the N3X as a potential successor in that specialized niche.

While SCM technologies like XL-Flash are not new, they remain relatively rare due to their high cost and specialized applications.

InnoGrit’s use of the IG5669 PCIe 5.0 controller, with NVMe 2.0 support, allows for impressive performance claims: up to 14 GB/s read and 12 GB/s write speeds, along with 3.5 million random read IOPS.

Latency is where the N3X particularly stands out - read latency under 13 microseconds and write latency as low as 4 microseconds.

If consistently achieved, these figures would place the N3X among the fastest SSDs in development.

The drive is marketed for workloads involving sustained writes, in-memory computing, and real-time inference, areas where traditional NAND SSDs often struggle with latency and wear.

However, the decision to operate entirely in SLC mode significantly reduces the available capacity per die, resulting in smaller drive sizes and a higher cost per gigabyte.

While the drive is offered in capacities ranging from 400GB to 3.2TB, these fall short of what is expected from today’s largest SSDs.
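
For a sense of what 50 DWPD actually means, a rough calculation for the top 3.2TB model (assuming the rating applies to the full stated capacity across the whole five years) works out to nearly 300PB of total writes.

```python
# Rough total writes implied by a 50 DWPD rating on the 3.2TB model over 5 years.
# Assumes the rating applies to the full stated capacity, which is a simplification.
capacity_tb = 3.2
dwpd = 50          # drive writes per day
years = 5

total_writes_pb = capacity_tb * dwpd * 365 * years / 1000
print(f"~{total_writes_pb:,.0f} PB written over {years} years")   # ~292 PB
```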

Although the N3X possesses many of the technical qualities of the best portable SSDs, it is not intended for mainstream use.

Its reliance on SCM architecture, while enabling exceptional performance, places it firmly in the domain of niche enterprise deployments.

You might also like
Categories: Technology

When Apple didn’t go hard on AI at WWDC, I let out a sigh of relief – here's why

Wed, 06/11/2025 - 16:45

I’ve been covering artificial intelligence, or at least topics that touch upon it, for most of my technology journalism career, and long before generative AI was something the public could just access with relative ease. But like it or lump it, AI is very much the buzz of the moment in and beyond the technology world. So it was surprising that at WWDC 2025, Apple kind of played down the subject.

Sure, Apple Intelligence was present and would appear to be more integrated into Cupertino’s various software platforms than it was previously. But many of these features appear to augment existing tools rather than create all-new ones; AI can figure out the regular routes you take in the iOS 26 Maps app, for example.

I’d also argue that Apple added smart features, such as Live Translation in the Messages and FaceTime apps, almost as a way to keep up with Google’s and Samsung’s AI efforts in their flagship smartphones, rather than lead the way or hone existing tech into something special.

Instead, Apple played up the redesign of iOS, macOS, and more with the use of its ‘Liquid Glass’ material design. And Apple Intelligence appeared to take a backseat; as my colleague Matt Bolton pointed out, Siri was notably absent from WWDC, an absence indicative of failure for the virtual assistant.

Now I won’t argue against Mr Bolton, as he raises some good points, but I’m also low-key grateful AI didn’t dominate WWDC.

User experience first, AI smarts second

(Image credit: Apple)

I’ve always felt that Apple’s strength comes from its user experience. As locked down as some of Cupertino's software can be, and despite the walled-garden approach to its ecosystem, once you’re in said garden, everything does work really rather well. From easy, secure payments and authentication, to quick file transfer between Apple devices and users, and much more.

As an aside, I’ve argued before that I want AI to be used for genuinely transformational things that benefit society, not generate images of a dog on the moon or write my emails for me. I reckon humanity is better off going through the challenges of learning how to better string sentences together or wait to frame the perfect camera shot, than let AI do everything for them, as that could take us down a dark path (check out Black Mirror on Netflix).

Bringing things back to Apple and WWDC, I feel that a redesign and the neat addition of useful features to iOS and macOS will resonate more with Apple device users than some smart AI tools that could feel a little bolted on to a core phone or laptop experience.

As a user of the iPhone 16 Pro Max and a MacBook Air M2, I have access to several Apple Intelligence tools. But aside from a bit of sporadic flirtation with them and the occasional nod of appreciation towards AI-generated summaries of voicemails, Apple Intelligence hasn’t come close to changing the way I use my iPhone.

I’ve said in the past that I find the recent iPhones to be boring but brilliant; they lack the do-anything vibe of the Samsung Galaxy S25 Ultra or the intriguing AI-led experience of the Google Pixel 9 family, but simply serve as smartphones that get stuff done quickly and well.

I treat my iPhone as a tool rather than a gadget, which doesn't make it exciting but does make it one of the best phones I’ve used, as there’s precious little getting in my way or distracting me from doing what I need to do.

I think many other Apple users share the same mindset. A huge number of people have checked out our how to download the iOS 26 developer beta article, which to me shows there’s a big interest in the Liquid Glass redesign.

Furthermore, in an article I wrote about wanting Samsung to add more AI into its next-generation foldable phones to truly make them more effective, one commenter said they don't find AI on phones to be useful at all and want options to turn off such tools.

(Image credit: Future)

So while tech luminaries wax lyrical about AI and some people use it to do a lot of things for them, I get the feeling others would just prefer to have tech that does indeed ‘just work’, with each improvement, incremental or otherwise, being about users, not technological expertise.

As such, I think Apple may have been smart to focus WWDC more on visual and slick functional changes to its core software than on putting AI in the limelight. After all, I still feel AI hasn’t become sufficiently foolproof and accurate to make it a must-have right now.

I think, as it stands, if you are after an AI phone, then the best Google Pixel phones are the ones to look at, given they are built from the hardware up to be all about AI. And Google’s phones have always been the devices to push more esoteric features, be that the radar sensors in the Pixel 4 phones or the AI focus of the past few generations of Pixels.

In contrast, I’ve always seen Apple as the brand that fully embraces emergent technology only when it has reached a point of maturity and consumer understanding.

Given the rocky launch of Apple Intelligence, AI still being more for enthusiasts than everyone (albeit that could be changing rapidly), and how iOS and macOS are finely curated platforms, I think eschewing AI at this year’s WWDC will prove to have been the smart move for Apple, even if various tech commentators and analysts see it as being behind the curve. Now onwards to the iPhone 17.

Do you want more AI in iPhones? Let me know in the comments below.

You might also like
Categories: Technology

Major data breach at popular hookup app leaks data on millions of users - see if you're safe

Wed, 06/11/2025 - 15:29
  • Cybernews found an unsecured MongoDB instance belonging to Headero
  • The database contained millions of records and PII
  • It has since been locked down, but users should still be on their guard

Security researchers from Cybernews have reported uncovering a massive MongoDB instance belonging to a dating and hookup app called Headero.

The database contained more than 350,000 user records, more than three million chat records, and more than a million chat room records.

Among the exposed data are names, email addresses, social login IDs, JWT tokens, profile pictures, device tokens, sexual preferences, STD status, and - most worryingly - exact GPS locations.

No evidence of abuse

Cybernews reached out to the app’s developers, a US-based company named ThotExperiment, which immediately locked the database down. The company told the researchers that it was a test database, but Cybernews’ analysis indicates that it could have been actual user data, instead.

Unfortunately, we don’t know how long the database remained open, or whether any threat actors accessed it in the past. So far, there is no evidence of abuse in the wild.

Human error leading to exposed databases remains one of the most common causes of data leaks and security breaches.

Researchers are constantly scanning the internet with specialized search engines, finding massive non-password-protected databases almost daily.

These leaks can put people at risk, since cybercriminals can use the information to tailor highly convincing phishing attacks, through which they can deploy malware, steal sensitive files, and even commit wire fraud.

Headero users are advised to be extra vigilant when receiving unsolicited messages, both via email and social platforms.

They should also be careful not to download any files or click on any links in such messages, especially if the messages carry a sense of urgency. If they are using the same password across multiple services, they should change it, and clear sessions and revoke tokens in apps where possible.

You might also like
Categories: Technology

Millions of patients possibly at risk due to poor passwords at healthcare orgs - here's how to stay safe

Wed, 06/11/2025 - 14:19
  • NordPass and NordStellar reviewed terabytes of data
  • The analysis uncovered poor password practices in the healthcare industry
  • Organizations are lacking staff training and strong policies

Hygiene in hospitals and clinics is essential, but cyber-hygiene - despite being equally important - is constantly being neglected, experts have warned.

A report from NordPass and NordStellar has claimed weak password practices are “dangerously common” in the healthcare industry.

Based on a review of 2.5TB of data extracted from various publicly available sources (including the dark web), the two organizations found that different medical institutions, including private clinics and hospital networks, all rely on “predictable, recycled, or default passwords” to protect critical systems. As a result, sensitive patient data – and possibly patients' health – is placed at immense risk.

Carelessness

“When the systems protecting patient data are guarded by passwords like ‘123456’ or ‘P@ssw0rd,’ that’s a critical failure in cybersecurity hygiene. In a sector where both privacy and uptime are vital, this kind of carelessness can have real consequences,” said Karolis Arbaciauskas, head of business product at NordPass.

The report also lists the most frequently used passwords identified in the healthcare sector. If you’re using any of these (or a variant), make sure to change them for something tougher to crack:

  • fabrizio19
  • 123456
  • Melu3@12345
  • @Vow2017
  • Mercury9.Venus8
  • password
  • Marty1508!
  • Carlton@1988
  • 12345678
  • @Vowcomm2018
  • papa
  • 12345
  • Durson@123
  • P@ssw0rd
  • Simetrica
  • Raffin2209!
  • Asspain28#
  • Smith
  • neuro
  • default

Policies and training

The teams warn that passwords reflecting personal names, simple number patterns, or default configurations are all prime targets for brute-force and dictionary attacks, in which cybercriminals automate the process and try countless combinations until they break in.

To make matters even worse, one break-in is more than enough to wreak havoc, as lateral movement can compromise entire networks, expose sensitive data, and result in various malware and ransomware infections.

The report stresses that healthcare institutions “lack clear password management policies or staff training,” which is why they are recommended to enforce strong password policies, eliminate the use of default or role-specific passwords, use a business-grade password manager, train the staff, and introduce 2FA wherever possible.
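
As a simple illustration of the kind of policy check being recommended, the sketch below rejects short passwords, entries from a small blocklist like the one above, and passwords with too little character variety. The specific thresholds and blocklist are assumptions for the example, not NordPass guidance.

```python
# Illustrative password-policy check: length, blocklist, and character variety.
# Thresholds and blocklist entries are example values, not an official standard.
COMMON_PASSWORDS = {"123456", "12345678", "12345", "password", "p@ssw0rd", "default"}

def is_acceptable(password: str) -> bool:
    if len(password) < 12:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    categories = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(not c.isalnum() for c in password),
    ]
    return sum(categories) >= 3   # require at least three character classes

print(is_acceptable("P@ssw0rd"))                  # False: too short and blocklisted
print(is_acceptable("neuro"))                     # False: far too short
print(is_acceptable("Correct-horse-staple-42"))   # True: long and varied
```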

You might also like
Categories: Technology

A worrying Windows SecureBoot issue could let hackers install malware - here's what we know, and whether you need to update

Wed, 06/11/2025 - 13:34
  • Binarly spotted a legitimate utility, trusted on most modern systems utilizing UEFI firmware, carrying a flaw
  • The flaw allowed threat actors to deploy bootkit malware
  • Microsoft patched it in the June 2025 Patch Tuesday cumulative update

Microsoft has fixed a Secure Boot vulnerability that allowed threat actors to turn off security solutions and install bootkit malware on most PCs.

Security researchers at Binarly recently discovered a flaw in a legitimate BIOS update utility signed with Microsoft’s UEFI CA 2011 certificate. This root certificate, used in the Unified Extensible Firmware Interface (UEFI) Secure Boot process, plays a central role in verifying the authenticity and integrity of bootloaders, operating systems, and other low-level software before a system boots.

According to the researchers, the utility is trusted on most modern systems utilizing UEFI firmware - but the problem stems from the fact it reads a user-writable NVRAM variable without proper validation, meaning an attacker with admin access to an operating system can modify the variable and write arbitrary data to memory locations during the UEFI boot process.

Microsoft finds 13 extra modules

Binarly managed to use this vulnerability to disable Secure Boot and allow any unsigned UEFI modules to run. In other words, they were able to disable security features and install bootkit malware that cannot be removed even if the hard drive is replaced.

The vulnerable module had been circulating in the wild since 2022, and was uploaded to VirusTotal in 2024 before being reported to Microsoft in late February 2025.

Microsoft recently released the June edition of Patch Tuesday, its cumulative update addressing various recently discovered vulnerabilities - among them the arbitrary write vulnerability in Microsoft-signed UEFI firmware, now tracked as CVE-2025-3052. It was assigned a severity score of 8.2/10 (high).

The company also determined that the vulnerability affected 14 modules in total, and has now fixed all of them.

"During the triage process, Microsoft determined that the issue did not affect just a single module as initially believed, but actually 14 different modules," Binarly said. "For this reason, the updated dbx released during the Patch Tuesday on June 10, 2025 contains 14 new hashes."

Via BleepingComputer

You might also like
Categories: Technology

I know which TV tech is the best for watching sports, and these 3 sets are my top picks for your next upgrade

Wed, 06/11/2025 - 13:09

If you’re a sports fan like me, you may have had some complaints in the past about your TV when trying to watch sports. Whether it’s reflections while watching a game in the afternoon or blurring during fast motion, something always seems to need tweaking.

Another issue: a TV that appears dim, with a flat-looking image, particularly for field sports such as football and rugby.

Even the best TVs can struggle with sport, but thankfully, there’s a TV tech that’s ideal for sports fans: mini-LED.

Mini-LED: perfect for sports fans

Mini-LED TVs are not only becoming increasingly popular but also more affordable. This tech delivers an improved picture over standard LED by using backlights with smaller LEDs (hence the mini part).

By miniaturizing the LEDs, a higher number can be used, which results in increased brightness. It also allows for a higher number of local dimming zones in the backlight, which helps to boost contrast and improve black uniformity.

Mini-LED TVs can hit significantly higher brightness levels than other TV panel types, with 2,500 - 4,000 nit peaks possible in flagship models. But for sports fans, it’s fullscreen brightness – the level of brightness that the TV can sustain over its entire screen area – that matters most, and once again, mini-LED TVs regularly beat other panel types here, including the best OLED TVs.

To provide an example of that from our TV testing, we regularly measure fullscreen brightness levels of between 580 and 800 nits on the best mini-LED TVs. But even the brightest OLED TV we’ve tested, the LG G5, topped out at 331 nits in our fullscreen measurement.

I’ve picked three models below that are examples of the best mini-LED TVs for sports.

1. Samsung QN90F

(Image credit: Future)

The Samsung QN90F is the perfect TV for sports. Not only does it deliver exceptionally high brightness levels – 2,086 nits peak and 667 nits fullscreen in Filmmaker Mode – but it has a Glare-Free screen (first introduced in the Samsung S95D OLED) that effectively eliminates reflections, making it perfect for afternoon sports watching.

The QN90F also delivers the superb motion handling that's essential for fast-paced sports. Even for movies, we found we could get smooth motion, with no sign of the dreaded ‘soap opera effect’, by setting both Blur Reduction and Judder Reduction to 3.

The QN90F delivers vibrant colors, strong contrast and realistic textures for a brilliant picture. And when viewing from an off-center seat, there’s little sign of the backlight blooming that results in contrast fade, meaning it’s great for watching in large groups.

The QN90F is a premium-priced TV, with the 65-inch model we tested priced at $2,499.99 / £2,499 / AU$3,499, but if you’re a sports fanatic, it’s worth the investment. Plus, you can expect prices to drop at some point in the near future.

2. Amazon Fire TV Omni Mini-LED

(Image credit: Future)

When I first began testing the Amazon Fire TV Omni Mini-LED, I didn’t anticipate it would be such a good TV for sports. But in its preset Sports mode with Smoothness (Judder Reduction) set to 4 and Clarity (Blur Reduction) set to 10, sports looked impressively smooth. Color was also surprisingly accurate in that mode, which is unusual as I’ve found the Sports mode makes colors look oversaturated and garish on most TVs.

Something unique about the Omni Mini-LED is that it’s nearly ready out of the box for sports. In contrast, I found when testing competing models such as the Hisense U6N and Hisense U7N that more setup was required to get sports looking right.

The Amazon Omni mini-LED is a significantly more affordable TV than the Samsung QN90F, with its 65-inch model often discounted down to $949.99 / £949.99. It may not have the same level of sports prowess as the Samsung QN90F, but it’s great for the money.

3. TCL QM7K / TCL C7K

TCL QM7K (US) and TCL C7K (UK) (Image credit: Future)

This entry is a hybrid as the TCL model name (and specs) will vary depending on which side of the pond you’re on. Either way, it’s the mid-range model in TCL’s 2025 mini-LED lineup.

Both of these TVs deliver exceptional brightness at a mid-range price, with the TCL QM7K and TCL C7K hitting 2,350 nits and 2,784 nits HDR peak brightness, respectively. More importantly, they hit 640 nits and 678 nits HDR fullscreen brightness, respectively – very good numbers for watching sports in bright rooms.

These TVs require some motion setup. Since I'm based in the UK, I tested the C7K, and I found that I needed to tweak the Sports or Standard picture mode by setting Blur Reduction to 3 and Judder Reduction to 6. I also needed to lower the color setting in Sports, as it was oversaturated in its default settings.

Once this was completed, the C7K was a solid TV for sports. It isn’t quite as effective as the two models above, but it is still a very good mid-range option overall. If the QM7K is anything like its UK counterpart, then the story for that model will be the same.

Again, for the 65-inch models of these two sets, you’re looking at paying $999 / £1,099. That’s a similar price to the Amazon Omni Mini-LED, which has the better motion handling of the two, but with the TCL, you’re getting that extra hit of brightness.

You might also like
Categories: Technology

I just watched the world's first ‘haptic’ trailer for Apple's F1 movie and my fingers are still tingling

Wed, 06/11/2025 - 13:00
  • Apple has just released the world's first 'haptic' trailer for its F1 movie
  • The trailer vibrates your phone in time with action sequences
  • The F1 movie pulls into theaters internationally from June 25

I thought I'd seen every movie trailer gimmick by now, but Apple has just produced a novel one for its incoming F1 movie – a 'haptic' trailer that vibrates your iPhone in time with the on-screen action.

If you have an iPhone (Android fans are sadly excluded from the rumble party), head to the haptic trailer for F1: The Movie to open it in the Apple TV app. You'll then be treated to two minutes of vibrations that are probably also a taste of what it's like to be a celebrity in the middle of a social media storm.

The trailer's 'haptic' experience was actually better than I was expecting. I assumed it would be a simple, one-dimensional rumble that fired up during race sequences, but it's a little more nuanced than that.

To start with, you feel the light vibration of a driver's seat belt being fastened, before the vibrations ramp up for the driving and crash sequences. There's even a light tap to accompany Brad Pitt's character Sonny Hayes moodily bouncing balls against a wall as he ponders coming out of retirement for one last sports movie trope.

Sure, it isn't exactly an IMAX experience for your phone, but if ever there was a movie designed for a haptic movie trailer, it's Apple's F1 movie...

One last Pitt stop

Apple's F1 movie was also the star of its recent WWDC 2025 event, with the livestream opening with Craig Federighi (Apple's Senior Vice President of Software Engineering) donning a helmet before doing a lap around the roof of its Apple Park building.

There's no date yet for the movie to stream on Apple TV+, with the focus for now on its imminent theatrical premiere. It officially opens internationally on June 27, but there are some special, one-off screenings in IMAX theaters on June 23 (in North America) and June 25 (internationally) for keen fans who signed up on the movie's official website.

The trailers so far suggest that F1 is effectively going to be Top Gun: Maverick set on a race track – and with both movies sharing the same director (Joseph Kosinski) and screenwriter (Ehren Kruger), that seems like a pretty safe bet. Seven-time F1 World Champion Lewis Hamilton was also involved to help amp up the realism.

If the haptic-powered trailer has whetted your appetite, check out our interview with Damson Idris, who also stars in F1 and gave us a behind-the-scenes look at what the movie was like to film. Hint: they used specialized tracking cars to help nail the demanding takes.

You might also like
Categories: Technology

This is what a 1000TB SSD could look like next year: New E2 Petabyte SSD could accelerate transition from hard drives

Wed, 06/11/2025 - 12:33
  • E2 SSDs aim to balance storage performance, capacity, and efficiency
  • New form factor fits rising demand for warm tier data storage
  • High density flash could reduce reliance on hard drives long term

As workloads shift and cold data heats up under AI and analytics demands, the traditional split between high-speed SSDs and cost-effective hard drives is no longer serving every use case.

A new SSD form factor known as E2 is being developed to tackle the growing gap in enterprise data storage. Potentially delivering up to 1PB of QLC flash per drive, E2 SSDs could become the middle-ground option the industry needs.

StorageReview claims the E2 form factor is being designed with support from key players including Micron, Meta, and Pure Storage through the Storage Networking Industry Association and Open Compute Project.

Solid speeds, but not cutting-edge

E2 SSDs target “warm” data – information that’s accessed often enough to burden hard drives, but not often enough to justify the cost of performance flash.

Physically, E2 SSDs measure 200mm x 76mm x 9.5mm. They use the same EDSFF connector found in E1 and E3 drives, but are optimized for high-capacity, dense deployments.

A standard 2U server could host up to 40 E2 drives, translating into 40PB of flash in a single chassis. StorageReview says these drives will connect over PCIe 6.0 using four lanes and may consume up to 80W per unit, although most are expected to draw far less.

Performance is expected to reach 8-10MB/s per terabyte of capacity, or up to 10,000MB/s for a 1PB model. That’s faster than hard drives but not in the same class as top-end enterprise SSDs; E2’s priorities will instead be capacity, efficiency, and cost control.
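For anyone who wants to sanity-check those figures, here's a quick back-of-the-envelope sketch (written in Swift purely for illustration) using the preliminary numbers reported above – drives per 2U chassis, capacity per drive, bandwidth per terabyte, and the 80W per-drive ceiling. None of this is a final spec.

```swift
// Back-of-the-envelope math for the E2 figures quoted above.
// All inputs are the preliminary numbers reported by StorageReview, not a final spec.
let drivesPer2U = 40
let capacityPerDriveTB = 1_000                    // 1 PB per drive, expressed in TB

let chassisCapacityPB = drivesPer2U * capacityPerDriveTB / 1_000
print("Chassis capacity: \(chassisCapacityPB) PB")          // 40 PB per 2U chassis

let bandwidthPerTBMBps = 8.0...10.0               // MB/s per TB of capacity
let perDriveMBps = bandwidthPerTBMBps.upperBound * Double(capacityPerDriveTB)
print("Per-drive throughput: \(Int(perDriveMBps)) MB/s")    // up to 10,000 MB/s (~10 GB/s)

// Worst case only: most drives are expected to draw far less than 80W.
let chassisPowerCeilingW = drivesPer2U * 80
print("Drive power ceiling per 2U: \(chassisPowerCeilingW) W")
```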

Pure Storage showed off a 300TB E2 prototype in May 2025 featuring DRAM caches, capacitors for power loss protection, and a flash controller suited for this scale. While current servers aren't yet ready for this form factor, new systems are expected to follow.

It’s fair to say E2 won't replace hard drives overnight, but it does signal a shift. As the spec moves toward finalization this summer, vendors are already rethinking how large-scale flash can fit into modern infrastructure.

You might also like
Categories: Technology

First look – Apple's visionOS 26 fixes my biggest Persona problem and takes the mixed reality headset to unexpected places

Wed, 06/11/2025 - 12:00

Apple Vision Pro is unquestionably one of the most powerful pieces of consumer hardware Apple has ever built, but the pricey gadget is still struggling to connect with consumers. And that's a shame, because the generation-leaping visionOS 26 adds even more eye-popping features to the $3,500 headset – features I think you'd struggle to find on any other mixed reality gear.

Apple unveiled the latest Vision Pro platform this week as part of its wide-ranging WWDC 2025 keynote, which also introduced a year-based OS naming system. For some platforms, like iOS, the leap from, say, 18 to 26 wasn't huge, but the toddler visionOS 2 was instantly thrust into adulthood and rechristened visionOS 26.

This is not a reimagining of visionOS, and that's probably because its glassiness has been amply spread across all other Apple platforms in the form of Liquid Glass. It is, though, a deepening of its core attributes, especially around spatial computing and imagery.

I had a chance to get an early hands-on experience with the platform, which is notable because Vision Pro owners will not be seeing a visionOS 26 public beta. That means that while iPhone, iPad, Apple Watch, and Apple TV owners are test-driving OS 26 platform updates on their favorite hardware, Vision Pro owners will have a longer wait, perhaps not seeing these enhancements until the fall. In the interim, developers will, of course, have access for testing.

Since much of the Vision Pro visionOS 26 interface has not changed from the current public OS, I'll focus on the most interesting and impactful updates.

See "me"

(Image credit: Apple)

During the keynote, Apple showed off how visionOS 26 Personas radically moves the state of the art forward by visually comparing a current Persona with a new one. A Vision Pro Persona is a virtual, live, 3D rendering of your head that tracks your movements, facial expressions, and voice. It can be used for communicating with other people wearing the headgear, and it's useful for calls and group activities.

Apple has been gradually improving Personas, but visionOS 26 is a noticeable leap, and in more ways than one.

You still capture your Persona using the front-facing 3D camera system. I removed my eyeglasses and held the headset in front of my face. The system still guides you, but now the process seems more precise. I followed the audio guidance and looked slowly up, down, left, and right. I smiled and raised my eyebrows. I could see a version of my face faintly on the Vision Pro front display. It's still a bit creepy.

(Image credit: Future)

I then put the headset back on and waited less than a minute for it to generate my new Persona. What I saw both distressed and blew me away.

I was distressed because I hate how I look without my glasses. I was blown away because it looked almost exactly like me, almost entirely removing the disturbing "uncanny valley" look of the previous iterations. If you ever wonder what it would be like to talk to yourself (short of staring into a mirror or having a twin), this is it.

There was a bit of stiffness and, yes, it fixed my teeth even though part of my setup process included a big smile.

It was easy enough to fix the glasses. The Personas interface lets you choose glasses, and the selection is now far wider, with more shades. I quickly found a pair that looked almost exactly like mine.

With that, I had my digital doppelganger that tracked my expressions and voice. I turned my head from side to side and was impressed to see just how far the illusion went.

Facing the wall

(Image credit: Apple)

One of the most intriguing moments of the WWDC keynote was when Apple demonstrated visionOS 26's new widget capabilities.

Widgets are a familiar feature on iPhones, iPads, and Macs, and, to an extent, they work similarly on Vision Pro, but the spatial environment takes them to – or at least puts them in – new and unexpected places.

In my visionOS 26 demo experience, I turned toward a blank wall and then used the new widget setup to pin a clock widget to the wall. It looked like an actual clock hanging on the wall, and with a flip of one setting, I made it look like it was inset into the wall. It looked real.

On another wall, I found a music widget with Lady Gaga on it. As I stepped closer, a play button appeared in the virtual poster. Naturally, I played a little Abracadabra.

Another wall had multiple widgets, including one that looked like a window onto Mount Fuji; it was actually an immersive photo. I instinctively moved forward to "look out" the window. As the vista spread out before me, the Vision Pro warned me I was getting too close to an object (the wall).

I like widgets, but I temper my excitement with the realization that it's unlikely I'll be walking from room to room while wearing Vision Pro. On the other hand, it would be nice to virtually redecorate my home office.

An extra dimension

(Image credit: Apple)

The key to Vision Pro's utility is making its spatial capabilities useful across all aspects of information and interaction.

visionOS 26 does that for the web with spatial browsing, which can basically turn any page into a floating wall of text and spatially enhanced photos called Spatial Scenes.

(Image credit: Apple)

visionOS 26 handles the last bit on the fly, and it's tied to what the platform can do for any 2D photo. It uses AI to create computational depth out of information it can glean from your flat image. It'll work with virtually any photo from any source, with the only limitation being the source image's original resolution. If the resolution is too low, it won't work.

I marveled at how, when staring at one of these converted photos, you could see detail behind a subject or, say, an outcropping of rock that was not captured in the original image but is inexplicably there.

It's such a cool effect, and I'm sure Vision Pro owners will want to show friends how they can turn almost all their photos into stereoscopic images.

Space time

I love Vision Pro's excellent mixed reality capabilities, but there's nothing quite like the fully immersive experience. One of the best examples is Environments, which you enable by rotating the Digital Crown until the real world is replaced by a 360-degree virtual scene.

visionOS 26 adds what may be the best environment yet: a view of Jupiter from one of its moons, Amalthea. It's beautiful, but the best part of the new environment is the control that lets you scroll back and forth through time to watch sunrises and sunsets, the planet's rotation, and Jupiter's dramatic storms.

This is a place I'd like to hang out.

Of course, this is still a developer beta and subject to significant change before the final version arrives later this year. It's also another great showcase for a powerful mixed reality headset that many consumers have yet to try. Perhaps visionOS 26 will be the game changer.

You might also like
Categories: Technology

ChatGPT's 10-hour outage has given me a new perspective on AI – it's genuinely helping millions of people get through life

Wed, 06/11/2025 - 11:35

OpenAI's servers experienced mass downtime yesterday, causing chaos among the company's most loyal users for well over 10 hours.

For six hours straight, I sat at my desk live-blogging the fiasco here on TechRadar, trying to give as many updates as possible to an outage that felt, for many, as if they had lost a piece of themselves.

You see, I write about consumer AI, highlighting all the best ways to use AI tools like ChatGPT, Gemini, and Apple Intelligence, yet outside of work, these incredibly impressive platforms have yet to truly make an impact on my life.

As someone who’s constantly surrounded by AI news, whether that’s the launch of new large language models or the latest all-encompassing artificial intelligence hardware, the last thing I want to do outside of work is use AI. The thing is, the more rapidly AI develops, the harder it becomes to turn a blind eye to the abilities it unlocks.

In the creative world, you’ll stumble across more AI skeptics than people who shout from the rooftops about how great it is. And that’s understandable: there’s a fear of how AI will impact the jobs of journalists like me, and there’s also a disdain for the sanitized world it’s creating via AI slop and robotically written copy.

But the same skepticism often overlooks the positives of this ever-evolving technology that gives humans new ways to work, collect their thoughts, and create.

After six hours of live blogging, and after thousands of readers reached out with their worries about the ChatGPT server chaos and told me what they use the chatbot for, I’ve come away with a completely new perspective on AI.

Yes, there are scary elements; the unknown is always scary. But there are people who are truly benefiting from AI, some in ways that had never even crossed my mind.

More than a chatbot

An hour into live blogging the ChatGPT outage, I was getting bored of repeating “It’s still down” in multiple different ways. That was when I had an idea: if so many people were reading the article, they must care enough to share their own reasons for doing so.

Within minutes of asking readers for their opinions on the ChatGPT outage, my inbox was inundated with people from around the globe telling me how hard it was to cope without access to their trusty OpenAI-powered chatbot.

From Canada to New Zealand, Malaysia to the Netherlands, ChatGPT users shared their worries and explained why AI means so much to them.

Some relied on ChatGPT to study, finding it almost impossible to get homework done without access to the chatbot. Others used ChatGPT to help them with online dating, discussing conversations from apps like Tinder or Hinge to ensure the perfect match. And a lot of people reached out to say that they spent hours a day speaking with ChatGPT – filling a void, getting help rationalizing their thoughts, and even using it to help them fall asleep at night.

One reader wrote me a long email, which they prefaced by saying, “I haven’t written an email without AI in months, so I’m sorry if what I’m trying to say is a bit all over the place.”

Those of us who don’t interact with AI on a regular basis have only a basic understanding of what it can do, often reducing its abilities to answering questions (often wrongly), searching the web, creating images, or writing like a robot.

But that’s an unfair assessment of AI and the way people use it in the real world. From helping with coding, giving people who have never been able to build a program the opportunity to do so, to offering those who can’t afford a professional an outlet for their thoughts, ChatGPT is more capable than many want to accept.

(Image credit: Shutterstock/Rokas Tenys)

ChatGPT and other AI tools are giving people all around the world access to something that, when used correctly, can completely change their lives, whether that’s by unlocking their productivity or by bringing them comfort.

There’s a deeply rooted fear of AI in the world, and rightfully so. After all, we hear on a regular basis how artificial intelligence will replace us in our jobs, take away human creativity, and mark the beginning of the robot uprising.

But would we collectively accept it more if those fears were addressed? If the billionaires at the top focused on highlighting how AI will improve the lives of the billions of people struggling to cope in this hectic world?

AI should be viewed as the key to unlocking human creativity, freeing up our time, and letting us do less of the mundane and more of enjoying our short time on this planet. Instead, the AI renaissance feels like a way to make us work harder, not smarter, and with that comes an intense amount of skepticism.

After seeing just how much ChatGPT has impacted the lives of so many, I can’t help but feel that AI deserves not only less criticism but more understanding. It’s not all black and white; AI has its flaws, of course it does, but it’s also providing real, practical help to millions of people like nothing I've seen before.

You might also like
Categories: Technology
