
TechRadar News


How startups can achieve outsized results by leveraging multi-agent systems

Thu, 05/22/2025 - 01:44

In March, AWS announced the general availability of its new multi-agent capabilities, bringing the technology into the hands of businesses across almost every industry. Until now, organizations have mostly relied on single-agent AI systems, which handle individual tasks but often struggle with complex workflows.

These systems can also break down when businesses encounter unexpected scenarios outside their traditional data pipelines. Google also recently announced ADK (Agent Development Kit) for developing multi-agent systems and A2A (Agent to Agent) protocol for agents to communicate with each other, signaling a broader industry shift toward collaborative AI frameworks.

The general availability of multi-agent systems changes the game for startups. Instead of a single AI managing tasks in isolation, these systems feature robust and manageable networks of independent agents working collaboratively to divide skills, optimize workflows and adapt to shifting challenges. Unlike single-agent models, multi-agent systems operate with a division of labor, assigning specialized roles to each agent for greater efficiency.

They can process dynamic and unseen scenarios without requiring pre-coded instructions, and since the systems exist in software, they can be easily developed and continuously improved.
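The division-of-labor idea above can be sketched in a few lines of plain Python. This is a minimal illustration, not any real framework's API: the `Agent` and `Coordinator` names and their methods are invented here to show how specialized roles and task routing fit together.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A worker with one specialty (hypothetical; real frameworks differ)."""
    name: str
    skill: str

    def handle(self, task: str) -> str:
        # Stand-in for an LLM or tool call performing the task.
        return f"{self.name} completed: {task}"

class Coordinator:
    """Routes each incoming task to the agent whose specialty matches it."""
    def __init__(self, agents):
        self.agents = {a.skill: a for a in agents}

    def dispatch(self, skill: str, task: str) -> str:
        agent = self.agents.get(skill)
        if agent is None:
            raise ValueError(f"no agent for skill {skill!r}")
        return agent.handle(task)

team = Coordinator([
    Agent("Researcher", "research"),
    Agent("Writer", "drafting"),
])
print(team.dispatch("research", "gather competitor pricing"))
```

In a production system each `handle` call would invoke a model or external tool, but the routing pattern is the same: the coordinator owns the map of specialties, so adding a new capability means adding an agent, not rewriting the pipeline.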

Let's explore how startups can leverage multi-agent systems and ensure seamless integration alongside human teams.

Unlocking value for startups

Startups can leverage multi-agent systems across several critical business functions, beginning with research and analysis. These systems excel at data gathering, web searches, and report generation through the process of retrieving, organizing and dynamically refining information.

This allows systems to streamline complex research workflows, enabling startups to operate more efficiently and make informed decisions at scale. Meanwhile, in sales processes, multi-agent systems improve efficiency by automating lead qualification, outreach and follow-ups. AI-driven sales development representatives (AI SDRs) can automate these repetitive tasks, reducing the need for manual intervention while enabling teams to focus on strategic engagement.

Many startups may also need to extract structured data from unstructured sources. For example, multi-agent systems automate web scraping and adjust to website format changes in real time, eliminating the need for continuous manual maintenance.

Unlike traditional data pipelines that require constant debugging, multi-agent systems autonomously manage tasks, reducing the need for large development teams. This is particularly useful for startups as they can ensure up-to-date data without expanding technical teams too quickly.
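The contrast with a brittle single-pattern scraper can be made concrete. Below is a hedged sketch, with made-up page layouts and field patterns: instead of hard-coding one extraction rule, the system keeps a list of known patterns and falls back through them, so a site redesign degrades gracefully rather than silently breaking the pipeline. A real agent would use an LLM or learned extractor in place of the fixed regexes.

```python
import re
from typing import Optional

# Hypothetical patterns for a product price, oldest layout first.
PRICE_PATTERNS = [
    r'<span class="price">\$([\d.]+)</span>',   # original site layout
    r'data-price="([\d.]+)"',                   # layout after a redesign
]

def extract_price(html: str) -> Optional[float]:
    """Try each known pattern in turn; return None instead of crashing."""
    for pattern in PRICE_PATTERNS:
        match = re.search(pattern, html)
        if match:
            return float(match.group(1))
    return None  # signal that a new pattern needs to be learned

old_page = '<span class="price">$19.99</span>'
new_page = '<div data-price="24.50">Buy now</div>'
print(extract_price(old_page), extract_price(new_page))
```

The `None` return is the hook where a multi-agent system earns its keep: a supervising agent can treat it as a trigger to infer a new pattern from the changed page, rather than paging a developer.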

How businesses can implement multi-agent systems

Startups seeking to gain outsized results by leveraging these systems can do so through two impactful approaches.

One option is purchasing existing solutions to replace complex data flows and human-driven processes. This is the most cost-effective choice for many startups, as they can automate and replace complex sales pipelines and make data workflows more robust, reducing reliance on humans for repetitive tasks.

But for startups with unique operational needs, developing a multi-agent system in-house is ideal. Traditional systems require coding for every possible scenario – a rigid and time-consuming approach that is prone to human error. Multi-agent systems, in contrast, adapt dynamically to scenarios they were never explicitly programmed for, making them a more flexible and scalable alternative.

Regardless of whether startups buy or build, multi-agent systems provide a game-changing opportunity to streamline operations, reduce manual workloads and improve scalability.

Overcoming challenges in AI integration

Despite its advantages, integrating multi-agent systems comes with certain challenges. Decision-making by agents within the multi-agent system isn’t always transparent since the systems often rely on large language models (LLMs) that have billions of parameters. This makes it challenging to diagnose failures, especially when a system works in one case but fails in another.

Additionally, multi-agent systems deal with dynamic, unstructured data, meaning they must validate AI-generated outputs across varied input sources – websites, documents, scans, and chat and meeting transcripts. This makes balancing robustness to change against accuracy a greater challenge. Beyond this, multi-agent systems face difficulties in maintaining effectiveness over time: they require monitoring and updates in response to input source changes, which often break traditional scraping methods.

Startups can overcome these challenges by embracing new tools, such as Langfuse, LangSmith, HoneyHive and Phoenix, which are designed to enhance monitoring, debugging, and testing in multi-agent environments. Equally important is fostering a workplace culture that embraces AI agents as collaborators, not replacements. Startups should ensure buy-in across stakeholders and educate employees on the value of AI augmentation to enable smooth adoption.
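The core idea behind those observability tools can be sketched without any of them: wrap every agent call so its inputs, outputs and latency land in a trace you can inspect when one case works and another fails. This is a minimal in-memory illustration, not the API of Langfuse or any other product; the `traced` decorator and `TRACE` store are invented for this example.

```python
import functools
import time

TRACE = []  # in-memory trace store; real tools persist and visualize these

def traced(agent_name: str):
    """Record inputs, outputs and wall-clock latency of every agent call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            TRACE.append({
                "agent": agent_name,
                "input": args,
                "output": result,
                "seconds": time.perf_counter() - start,
            })
            return result
        return inner
    return wrap

@traced("summarizer")
def summarize(text: str) -> str:
    return text[:20]  # stand-in for an LLM call

summarize("Multi-agent systems need observability.")
print(TRACE[0]["agent"], len(TRACE))
```

With every hop recorded, diagnosing "works here, fails there" becomes a matter of diffing two traces rather than guessing at what billions of opaque parameters did.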

Transparency is also key. Founders must be open with staff about how multi-agent systems will be used to ensure a smooth collaboration between human and AI coworkers.

Achieving outsized results

The AI field is moving fast, making it difficult for experts, let alone everyday users, to keep up to date with each new model or tool that is released. Some small teams may therefore see multi-agent systems as unattainable.

However, the startups that successfully implement them into their workstreams – whether by purchasing or building custom solutions – will gain a competitive edge. Multi-agent systems bridge the gap between AI and human collaboration that can’t be achieved with traditional single-agent systems.

For startups focused on growth, multi-agent systems are the best tool in their arsenal to compete with incumbents who might be stuck with an outdated tech stack. The ability to streamline operations, reduce manual workload, and scale intelligently makes multi-agent systems an invaluable tool in achieving outsized results.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Fujifilm’s X half is a tiny retro compact that’s big on wacky film photography features, and I love it

Thu, 05/22/2025 - 00:01
  • Retro compact with unique vertical sensor and LCD screen
  • It reimagines the half-frame film camera experience
  • It's available globally from June 12 in three colors, priced $849 / £699 / AU$1,349

Fujifilm has outdone itself with the new X half – a retro compact camera that packs some of its wackiest and outright funnest ideas yet, all inspired by film photography.

There’s a clue to the X half’s inspiration in the name – it’s a digital reimagining of half-frame film cameras like the Pentax 17. I've already tried the X half, and it was a much needed dose of fun – check out my X half hands-on review.

To facilitate the half-frame format, the X half's 18MP JPEG photos are taken in 3:4 vertical format, recorded onto a vertical 1-inch sensor, and composed using the unique vertical LCD.

Alongside that fixed screen is a secondary screen that mimics the film canister window you see on many film cameras, and there's a fun surprise here – it’s touch sensitive, and allows you to swipe up or down to select one of Fujifilm’s Film Simulations. If this charming feature doesn't make its way into future Fujifilm cameras, I'd be shocked.

Film Simulation color effects are well known – they're inspired by Fujifilm film stock, and have helped to cement Fujifilm’s popularity over the last 10 years through cameras like the X100VI. The X half offers a stripped-back selection of 13 popular Film Simulations, including Provia and Astia.

You'd think all of the above would be enough to secure the X half's unique status, but Fujifilm has really let loose, with even more features for film photography fans to enjoy.

The LCD emulating a film canister window with Velvia Film Simulation, and the vertical LCD (Image credit: Tim Coleman)

Simulating film to another level

Going one step further from that twin-screen combo and vertical shooting, there’s a Film Camera mode. This locks in your chosen Film Simulation and camera settings such as ISO, and disables the screen preview, leaving you to compose your shots via the optical viewfinder instead, as if you're shooting with film.

Once your ‘film’ is used up – either 36, 54 or 72 shots – you can exit the mode and view the screen once more, and make changes to settings again.

Film Camera mode is such a fun feature, and for me is the closest experience to film photography that I've had using a digital camera – and it's optional.

The film wind lever tucked in with the camera off. In the on position, the lever sticks out for an easy reach. (Image credit: Tim Coleman)

Then there's what is in effect a film wind lever, which in this case, through 'cranking', is used to create diptychs – that’s two vertical shots side by side. These are recorded individually through the vertical 1-inch sensor, but then composited afterwards and displayed just like you'd get with a half-frame film camera on a roll of 35mm film.

Again, you can take or leave the diptych feature. I reckon it's a nice to have – working out how image pairs complement each other stretches your creative muscles.

We also get some completely new picture effects, almost all of which are film photography-inspired and include light leak, expired film and halation.

Full HD video capture is also possible, and the diptych effect can be applied to both photos and videos, which is really neat.

This is all packaged in a palm-sized, premium-feel compact that weighs just 240g and features a fixed 32mm f/2.8 lens with a mechanical aperture, plus the same battery as used in cameras like the X100VI for an 880-shot life.

Fujifilm X half in charcoal (left), silver (middle) and black (right). (Image credit: Tim Coleman)

Fujifilm has created a dedicated app for the X half, which can be used to make diptychs, and upload and view images, plus the camera can connect wirelessly to one of Fujifilm’s Instax printers for on-the-go printing.

The app wasn't available when I tested the camera, but will be downloadable from early June. Meanwhile, the Fujifilm X half itself will be available globally from June 12 in silver, charcoal and black, and costs $849 / £699 / AU$1,349.

I’ve been reviewing digital cameras for 15 years, and the Fujifilm X half has to be one of the funnest yet – a compact camera with a difference. You can configure it in a way that’s as close to a film camera as you’re going to get with digital, plus it packs the retro look and feel that we’ve come to expect from Fujifilm.

What do you think of the Fujifilm X half? Let us know in the comments below.

You might also like
Categories: Technology

Dyson's new vacuum is as thin as a broom handle and appears to float across the floor

Wed, 05/21/2025 - 21:02
  • Dyson has announced the new ultra-thin, ultra-light PencilVac
  • Its new Fluffycones floorhead is designed to avoid hair tangling
  • It looks like a specialist model for hard floors

I write about vacuum cleaners for a living, and while performance varies, most new models these days tend to look roughly the same.

So when news of a new addition to the Dyson vacuum lineup landed in my inbox, I expected to see something similar to its existing models: slick and high-quality, but not especially distinctive or surprising.

How wrong I was.

The newly unveiled Dyson PencilVac doesn't just have an unusual name, it's all-round one of the most unique vacuums I've seen. This brand knows what it's doing in this marketplace – it makes some of the best cordless vacuums you can buy, and today's very best Dyson vacuums include features you still can't find anywhere else.

So while the PencilVac strays a long way from the tried-and-tested formula of what works for vacuum cleaners, I'm very optimistic about its performance. Here's a rundown of the most intriguing features in this new launch...

1. It's ridiculously thin

The most immediately noticeable thing about the PencilVac is that it's incredibly streamlined. Without the floorhead, the whole thing is 1.5 inches / 3.8cm in diameter. To make that possible, the brand had to develop a tiny new motor – the Dyson Hyperdymium 140k motor is just 1.1 inches / 2.8cm wide, and hidden entirely within the handle.

The PencilVac is also impressively lightweight, clocking in at 4lbs / 1.8kg. For context, the lightest option in our best cordless vacuum roundup right now is 5.7lbs / 2.6kg, and there are a number of models that weigh over 6.6lbs / 3kg.

All the PencilVac's mechanics are shrunk down and fitted inside the handle (Image credit: Dyson)

Generally, when you shrink down a vacuum, you sacrifice power. That's why handheld vacuums tend to be much less 'sucky' than full-sized options. That holds true for the PencilVac – there's 55AW of suction, compared to 115AW for the V8 (the oldest Dyson stick vacuum in the current range) and a massive 280AW for the latest-and-greatest Gen5detect. However, while it's unlikely to be suitable for a truly deep clean, that's still a decent amount of suction for the size and weight.

As a side note, the 1.5-inch / 3.8cm diameter isn't incidental. Brand founder James Dyson says, "I have long wanted to make a vacuum of only 38mm diameter (the same as my latest hair dryer, the Supersonic r)". The Dyson Supersonic r is the pipe-shaped dryer that was originally released for professionals only, but recently joined the main consumer range.

2. There are cones instead of rollers

Moving down to the business end, and you'll find the new 'Fluffycones' floorhead. It sounds like a Pokémon, but it's actually a reimagined cleaner head. Vacuums traditionally have one brush roll, maximum two, and they're tube-shaped. The Dyson PencilVac has four brushrolls, and they're all conical.

There's logic to the tapering shape: it helps direct long hair along the roll and into the dust cup, whereas with parallel rollers the hair tends to just wrap around and stay there, until you rip it off or attack it with scissors. Dyson's hair screw tool also has a conical brush roll, and works exactly as it's meant to when it comes to tackling long hair.

Rather than one parallel brushroll, the PencilVac has four tapering rollers (Image credit: Dyson)

The cones project out at the sides so they can clean right to the edges of rooms, and the whole thing can lie flat to the ground, with a clearance of just 3.75 inches / 9.5cm off the floor.

I'm interested in Dyson's description of the rollers as 'fluffy', because in the brand's vocabulary that usually indicates a soft roller for use on hard floors only. In fact, the more I look at this vacuum, the more I'm convinced it's a specialist model just for use on hard floors. It's not specified in the press material I have so far, but it would make sense with the lower suction and smaller dust capacity.

3. There's no visible dust cup

One of the most baffling things about the PencilVac is that it doesn't appear to have a dust cup. Of course, there is one – like the motor, it's hidden away inside the handle.

The capacity is next-to-nothing: just 0.08L. However, Dyson has introduced a dust compression system, which uses air to squish down the particles so they take up as little room as possible. Dyson claims that means it can hold five times the physical volume.

The dust cup is also hidden within the handle (Image credit: Dyson)

The emptying process has also been reimagined, with the usual push-lever system replaced by an exciting-sounding "syringe, no-touch bin ejection mechanism".

As it pushes out dust and debris, the mechanism simultaneously wipes the 'shroud'. I'm not totally clear what the 'shroud' is in this context, but I do know that keeping the internal mechanisms clean is key to efficient vacuum performance, so this seems like a good thing.

4. The floorhead glows and appears to float

As well as siphoning off hair as you clean, the floorhead cones have another trick up their sleeve. The cones rotate in opposite directions, the aim being that this vacuum cleans just as well when it's pushed forward as when it's pulled back. This is a bit of a weak spot on the regular Fluffy floorhead – it has no trouble sucking things up when moving forwards, but pull it back and debris will pool behind it.

I'm intrigued to see how this new approach works in practice – especially because Dyson describes it as "floating" across the floor. I wonder, too, if it might make this vacuum reversible altogether, given the fact that the handle section looks very symmetrical.

(Image credit: Dyson)

Dyson has also added "laser-like" illumination to both the front and back of the floorhead. This is another feature borrowed from the existing Fluffy floorhead, and helps create big shadows on the tiniest bits of dust, which might otherwise go unnoticed. It only works on hard floors, which is another indication this vac is likely not for carpet.

5. There's a tool that looks like a chimney brush

There's an intriguing addition to the tool lineup in the form of a 'Rotating combi-crevice tool', designed for cleaning in awkward gaps. This seems especially geared towards cleaning high-up, where it can be tricky to get your angles correct. It makes particular sense for an ultra-light vacuum like this one, which is far easier to lift above your head than your average stick vacuum.

As an aside, it looks like the PencilVac is button- rather than trigger-operated. That's dictated by the streamlined shape, but it's also great news for maneuverability and ease of use – the fact that many Dyson vacs still use a trigger to turn on is a perpetual bugbear of mine.

You'll also get a Conical hair screw tool, similar to the one included with the newest Dyson stick vacuums, for tackling long hair on furniture. Both can be stored on the magnetic charging dock.

The Rotating combi-crevice tool looks perfect for cleaning up high (Image credit: Dyson)

6. It's app-connected

I'm much less excited about this feature, but I feel I should point out that this is the first Dyson cordless vacuum to connect to the MyDyson app. The app will provide more information about battery life and also report on filter status. However, there's also a screen on the vacuum itself showing remaining battery, so I'm hoping the app connection is an optional extra rather than an essential.

There's a companion app, but key information is also shown on the vac's screen (Image credit: Dyson)

Price & availability

The PencilVac will arrive in Australia first, with launch scheduled for August 2025. It's due to go on sale in the UK sometime in 2026, and I'm awaiting info as to if/when it will come to the US. As of yet I don't have any pricing info at all – I'll update this article with more details when I have them.

You might also like...
Categories: Technology

I tried Google's new AI try-on feature, and it's given me some new fashion ideas

Wed, 05/21/2025 - 18:00

Google has rolled out a new AI-powered shopping feature to help you figure out what the clothes you are interested in buying might look like when you wear them. It's dubbed "try it on" and it's available right now in the US through Google Search Labs.

To get started, you just need to switch it on in the lab. Then, you upload a full-length photo of yourself and start looking for clothes in the Google Shopping tab.

When you click on an image of some outfit from the search results, you'll see a little "try it on" button in the middle of the enlarged version of the outfit in the right-hand panel. One click and about ten seconds later, you'll see yourself wearing the outfit. It may not always be a perfect illusion, but you'll at least get a sense of what it would look like on you.

Google claims the whole thing runs on a model trained to see the relationship between your body and clothing. The AI can, therefore, realistically drape, stretch, and bunch material across a variety of body types.

The feature doesn't work with every piece of clothing you might see, or even every type of outfit. The clothing retailer has to opt into the program, and Google said it only works for shirts, pants, dresses, and skirts.

I did notice that costumes and swimwear mostly had no usable images, though I could put shorts on myself, and costumes that looked enough like regular clothes were usable. The AI also didn't seem to have an issue with jackets and coats as categories.

Elvis looks

(Image credit: Photo/Google AI)

For instance, on Google Shopping, I found replicas of the outfits Elvis wore for his 1966 comeback and one of his jumpsuits from the 1970s. With a couple of clicks, I could imagine myself dressed as the King in different eras.

It even changed my shoes in the all-black suit. I'd always wondered if I could pull off either look. The images are shareable, and you can save or send them to others from the Google mobile app and see how much of an Elvis your friends think you are.

Super summer

(Image credit: Photo/Google AI)

The details that the AI changes to make the photos work are impressive. I used the AI to try on a fun summer look and the closest to a superhero costume I could try. The original photo is me in a suit and jacket with a bowtie and black dress shoes. But the shoes and socks on both AI-generated images not only match what was in the search result, but they're shaped to my stance and size.

Plus, despite wearing long sleeves and pants, the AI found a way to show some of my arms and legs. The color matches reality, but its imperfections are noticeable to me. My legs look too skinny in both, like the AI thinks I skipped leg day, and my legs in the shorts have not been that hairless since I turned 13.

Imperfections aside, it does feel like this will be a major part of the next era of e-commerce. The awkward guessing of whether a color or cut works for your skin tone and build might be easier to resolve.

I wouldn't say it can make up for trying them on in real life, especially when it comes to sizing and comfort, but as a digital version of holding an outfit up against you while you look in a mirror, it's pretty good.

Ending unnecessary returns

(Image credit: Photo/Google AI)

Uncanny as some of the resulting images are, I think this will be a popular feature for Google Shopping. I'd expect it to be heavily imitated by rivals in AI development and online retail, where it isn't already.

I particularly like how the AI lets you see how you'd look in more outlandish or bold looks you might hesitate to try on at a store. For example, the paisley jacket and striped pants on the left or the swallowtail jacket and waistcoat with Victorian trousers on the right. I'd hesitate to order either look and would almost certainly plan on returning one or both of them even before they arrive.

Returns are a plague on online retailers and waste tons of packaging and other resources. But if Google shows us how we’d look in clothes before we buy them, it could chip away at return rates; retailers will race to sign up for the program.

It could also open the door to more personalized style advice from AI. You could soon have an AI personal dresser, ready to give you a virtual fit check and suggest your next look, even if it isn't something Elvis would have worn.

You might also like
Categories: Technology

"This is a once in a lifetime opportunity" - Nvidia CEO Jensen Huang says it's time to get on board with AI now, or be left behind

Wed, 05/21/2025 - 14:41
  • "AI is here", Jensen Huang tells Dell Technologies World 2025
  • Speaking to Michael Dell, Huang once again extols virtue of AI tech
  • Nvidia and Dell combine to launch "AI Factory 2.0"

Nvidia CEO Jensen Huang has once again looked to highlight the huge potential AI can offer companies of all sizes in the coming months and years.

Speaking at the recent Dell Technologies World 2025 event, Huang noted “AI is here - this is unquestionably the single biggest platform shift.”

In conversation with Dell Technologies CEO Michael Dell, Huang added how, “from a technology perspective…we’re now in perception to generative to now reasoning AI models, and that’s at the raw technology level.”

"The biggest reinvention"

Huang highlighted how Nvidia and Dell are teaming for enterprise AI, which he called, “one of the largest opportunities ahead of us”.

“These are companies that are essentially building a digital workforce of AI agents, which can be working in cybersecurity, software engineering, marketing and sales operations, and forecasting, and supply chain management - all these different AI agents are being created now, that can augment our human workforce with a digital workforce.”

(Image credit: Dell Technologies)

One of the biggest announcements at Dell Technologies World 2025 concerned the expansion of Dell’s AI Factory platform, which has received some significant updates thanks to Nvidia.

Initially launched at DTW 2024, the next iteration of the Dell AI Factory, unsurprisingly called Dell AI Factory with Nvidia 2.0, encompasses client devices, servers, storage, data protection and networking.

The new iteration includes six new servers, including the air-cooled PowerEdge XE9780 and XE9785, and the liquid-cooled XE9780L and XE9785L, all of which support up to 192 Nvidia Blackwell Ultra GPUs with direct to chip cooling.

These new releases can also be customized with up to 256 Nvidia Blackwell Ultra GPUs per Dell IR7000 rack, which Dell claims can deliver up to four times faster large language model training than its predecessor.

The two companies also announced Dell Managed Services for the Dell AI Factory with Nvidia, which aims to simplify AI operations by managing the full Nvidia AI stack, promising 100 times faster token generation per second for distributed AI inferencing and a more than 80% reduction in latency, to help support the growth of agentic AI.

(Image credit: Future / Mike Moore)

Noting that he and Huang had known each other “for some 30 plus years”, Dell asked the Nvidia CEO if he wanted to give any advice to the Dell Technologies World audience.

"This is a once in a lifetime opportunity - in the last 60 years, this is the biggest reinvention that you and I have seen,” Huang noted.

“This is incredibly exciting technology - you want to engage it. The impact to your company is incredible. And you want to be an early adopter.”

“This is the beginning of a decade of transformation. But you don't want to be second - this is the time, and you want to be first.”

You might also like
Categories: Technology

iPhone designer Jony Ive joins OpenAI, but don't expect a new ChatGPT smartphone

Wed, 05/21/2025 - 13:41

Jony Ive, who famously designed the iPhone (among other iconic Apple devices), is about to become the design lead for OpenAI, the ChatGPT AI giant that, for now, does not make a single hardware device.

The Wall Street Journal on Wednesday reported the impending deal, which sees OpenAI acquire Ive's io company in a deal valued at $6.5 billion. As part of that, Ive becomes the design lead for OpenAI, a role he's been slowly stepping into for some time.

Ive, who famously led Apple's design for decades, left the company in 2019 and, in recent months, has expressed some misgivings about the possible negative impact of the previous products he's worked on (which might include the iPhone).

"I think when you’re innovating, of course, there will be unintended consequences. You hope that the majority will be pleasant surprises. Certain products that I’ve been very, very involved with, I think there were some unintended consequences that were far from pleasant,” said Ive earlier this month, according to The Verge.

While reports indicate that Ive and OpenAI CEO Sam Altman are interested in building AI-capable consumer hardware, a smartphone is probably not on that menu.

Instead, most expect the duo to focus on wearables like earbuds and smartwatches that could be enhanced with, for instance, cameras that could see your surroundings and use onboard AI to help you act on and react to them.

A soft approach

Ive's focus will also apparently be on upgrading OpenAI software's visual appeal. So expect an infusion of Ive-ness on ChatGPT on mobile and the desktop (where it has a particularly techy or dev-friendly look), as well as on Sora and Dall-E interfaces.

In the latter part of his career at Apple, Ive was most responsible for stripping away skeuomorphism – making digital icons look like their real-world counterparts – across Apple's platforms. OpenAI's software doesn't suffer from the skeuomorphic scourge, but some could argue its overall look is less than elegant.

If you're curious whether Ive's design skills are still up to snuff, just take a look at the updated Airbnb, which Ive's LoveFrom firm redesigned. LoveFrom, by the way, is set to remain a stand-alone company and will, according to The Wall Street Journal, work with OpenAI as a client.

The news must sting Apple a little bit. The company, which partnered with OpenAI to include ChatGPT access in Apple Intelligence, has not only failed to deliver its own generative AI, but is falling behind the industry in delivering a true, combined hardware/software AI experience.

OpenAI CEO Sam Altman (Image credit: Getty Images / Tomohiro Ohsumi / Stringer)

Hints of hardware to come

It'll be fascinating to see what Altman and Ive cook up, and we already have some hints.

Altman announced the deal by tweeting that he's "excited to try to create a new generation of AI-powered computers." Taken literally, we might expect an AI PC from the team, but I think here Altman means "computers writ large" in that most intelligent consumer electronics could be considered computing devices.

The tweet was accompanied by a video featuring a conversation between Ive and Altman, in which Altman described developing "a family of devices that would let people use AI to create all sorts of different things."

Without disclosing the product, Ive revealed that "the first one we've been working on has almost completely captured our imagination." Further, Altman added that Ive handed him the device to take home. "I've been able to live with it and I think it's the coolest piece of technology that the world will have ever seen."

No matter what they're building, it's worth remembering that the road to AI hardware success is already littered with the rotting carcasses of failed ventures like Humane AI. Regular people have not shown great interest in wearing AI hardware that doesn't align with their current fashion choices.

thrilled to be partnering with jony, imo the greatest designer in the world. excited to try to create a new generation of AI-powered computers. pic.twitter.com/IPZBNrz1jQ – May 21, 2025

That said, there may be an opportunity for OpenAI, Ive, and Altman in the smart glasses space. It's the one AI-connected device area that appears to be showing some real signs of life. That's mostly down to Meta's efforts with its Ray-Ban Meta smart glasses, but also evidenced by the upcoming influx of Android XR competitors from Google partners Samsung, Warby Parker, and others. Some were announced this week at Google I/O 2025, and all of them will feature Gemini at their core.

OpenAI and ChatGPT may be leading in the generative AI space, but Google Gemini is close behind. And if Android XR partners can deliver stylish Gemini Smart Glasses this year, it could quickly vault Gemini into the lead. At the very least, this puts pressure on OpenAI to deliver something.

Is Jony Ive the secret sauce that will make ChatGPT AI glasses, earbuds, smart watches, and other consumer hardware possible and desirable? Maybe. OpenAI says we'll see their work next year. Just don't expect a ChatGPT Phone.

You might also like
Categories: Technology

The Bear season 4 gets a trailer, and one detail suggests it could be the Hulu show's final installment

Wed, 05/21/2025 - 13:30
  • The Bear season 4 trailer has been released
  • The timer suggests it could be the final challenge for the stressful cooking drama
  • Season 4 arrives on Hulu and Disney+ on June 25

The Bear season 4 has an intense trailer ahead of its Hulu (US) and Disney+ (international) release on June 25.

Maybe I'm reading too much into things, but deciding to include the ever-stressful clock could indicate we're counting down to the finale.

There are also moments of Carmy (Jeremy Allen White) trying to remember why he loves what he does and approaching a meeting with his mother (Jamie Lee Curtis), so even if it isn't the final season, there's still plenty of drama ahead, and it seems to be building to something big.

Previously, I noted that The Bear made Emmys history for most comedy wins, but it's still the most stressful Hulu show on TV, and I'm sure season 4 will be no different.

Check out the new trailer below.

What is The Bear season 4 about?

(Image credit: Hulu)

An official plot for The Bear season 4 reveals: "Carmen 'Carmy' Berzatto (Jeremy Allen White), Sydney Adamu (Ayo Edebiri), and Richard 'Richie' Jerimovich (Ebon Moss-Bachrach) push forward, determined not only to survive, but also to take The Bear to the next level. With new challenges around every corner, the team must adapt, adjust and overcome."

There's still quite a lot to speculate on regarding one of the best Hulu shows, so we'll have to wait patiently to find out more about season 4. It won't be long, though, and much like the clock in the trailer, I'm counting down the hours.

Until then, why not check out the biggest Hulu movies to stream in May 2025 or 5 new Hulu movies with over 91% on Rotten Tomatoes to tide you over.

You might also like
Categories: Technology

You can now auto-change compromised passwords with Chrome's Credential Manager

Wed, 05/21/2025 - 13:00
  • Chrome's password manager will now allow for automated password change
  • The new feature helps reduce friction, Google says
  • Passwords remain the number one authentication method

Users can now change compromised passwords directly in their Chrome browser, in just a few clicks. This is the promise given in a new Google blog discussing the extensive changes the company is bringing to user authentication and identity verification.

Most browsers already come with a (rudimentary form of) password manager, allowing users to generate strong passwords, store their credentials, and auto-fill them for speed and convenience.

Now, Google’s Chrome devs, Ashima Arora, Chirag Desai, and Eiji Kitamura, said the company is building on that foundation to “fix compromised passwords in one click”.

TechRadar Pro readers can get 60% off Premium Plans at RoboForm now!

New users can take advantage of RoboForm’s exclusive deal and get 60% off the Premium Plan. With this deal, you can get unlimited password storage, one-click login & autofill, password sharing, two-factor authentication for added protection, cloud backup, and emergency access for trusted contacts. To claim this deal, visit this link and sign up for the Premium Plan to lock in this huge discount.

Preferred partner (What does this mean?)View Deal

Changing passwords

“Automated password change makes it easier for users to respond when their credentials are at risk,” the blog reads. “When Chrome detects a compromised password during sign in, Google Password Manager prompts the user with an option to fix it automatically. On supported websites, Chrome can generate a strong replacement and update the password for the user automatically. This reduces friction and helps users to keep their account secure, without hunting through account settings or abandoning the process partway.”
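Chrome's implementation is internal to the browser, but the "generate a strong replacement" step at the heart of the feature is easy to illustrate with Python's standard library. The sketch below is a generic illustration of how any password manager might generate a replacement, not Google's actual code:

```python
import secrets
import string

def generate_strong_password(length: int = 16) -> str:
    """Generate a random password using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-draw until every character class is represented, so the
        # result passes typical site complexity rules.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password
```

The key detail is using `secrets` rather than `random`: the former is backed by the operating system's CSPRNG, which is what makes a generated replacement safe to rely on.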

Passwords are still, by far, the most common and popular form of authentication. They are also the least secure, as people tend to create weak, easy-to-guess passwords, share them with friends, family, and coworkers, or store them in insecure locations that hackers can easily access.

The community has rallied behind alternatives such as passkeys, biometric authentication, or physical security keys. Google is also working on all of these (and then some), but stressed that passwords were “still the world’s most common authentication method,” suggesting that it’s not abandoning the practice any time soon.

The full blog is a rather interesting read, discussing a unified sign-in experience, improved identity verification, and enhanced session security. You can read it in more detail at this link.

You might also like
Categories: Technology

3D X-DRAM technology will bring bigger, faster memory by 2026, but there's no way these 512GB modules will sell to consumers

Wed, 05/21/2025 - 12:32
  • 512GB DRAM sounds huge, but don’t hold your breath for consumer availability
  • NEO’s 3D X-DRAM stacks layers sky-high, but price and practicality remain unclear
  • AI and enterprise systems will get the speed, regular users probably won’t

NEO Semiconductor’s push into 3D X-DRAM memory marks an ambitious attempt to rethink DRAM design for the AI and high-performance computing era.

While the promises - stacked layers, enhanced bandwidth, and reduced power consumption - are impressive, the practicality and consumer accessibility of these technologies remain open to scrutiny.

With the company projecting that its most advanced modules could reach densities of up to 512GB, it’s hard not to ask: who is this memory really for?

Complex architectures with limited consumer impact

At the core of NEO’s approach is a vertically stacked architecture that mimics the structure of 3D NAND.

In NEO’s own words, the array is “segmented into multiple sectors by vertical slits,” with “word line layers connected through staircase structures.”

The company compares its 3D X-DRAM density to the current 0a-node planar DRAM’s 48GB and claims to reach 512GB, but the implication that such capacities will trickle down into mainstream consumer products seems tenuous at best.

The proof-of-concept chips are still in the early stages. NEO is currently developing a test version of the simpler 1T0C architecture, with the more complex, and more promising, 1T1C version planned for 2026.

The 1T1C variant utilizes IGZO transistors paired with a cylindrical high-k dielectric capacitor. It promises improved retention time, reportedly beyond 450 seconds, and supports stacking up to 128 layers.

With further refinements, including the addition of 5nm-thick spacers to reduce parasitic capacitance, NEO claims stacking could exceed 512 layers.

The 3T0C design, which incorporates dual IGZO layers, is geared toward in-memory computing and AI applications.

Still, NEO’s statements about eliminating the need for TSV and enabling up to 32K-bit bus widths raise eyebrows.

Such bandwidth sounds transformative, especially compared to the projected 2K-bit bus width of HBM4, but scaling this level of performance in real-world systems is a non-trivial task.

From a broader perspective, the DRAM market hasn’t shifted significantly in terms of cost-per-GB over the past decade. Despite some fluctuations, the downward trend slowed considerably after 2012.

One might expect the MacBook Pro, for instance, to ship with far more RAM by default today than it did a decade ago, but that hasn’t happened.

Even with some price drops - DDR3 vs. DDR5 comparisons show modest improvement - the advances haven’t been revolutionary.

Commodity pricing may fluctuate, but the overall curve has flattened. Forecasts suggest we may be near a low point before another upswing.

So while 3D X-DRAM may indeed deliver bigger, faster memory by 2026, it’s unlikely these 512GB modules will be available to consumers anytime soon.

More likely, such capacity and speed will be reserved for AI servers and enterprise systems, rather than everyday desktops or laptops.

You might also like
Categories: Technology

It's 2025 and another tech publisher has broken the Pi calculation world record with 300 trillion digits

Wed, 05/21/2025 - 12:31
  • Linus Tech Tips and Kioxia have smashed the Pi calculation world record
  • 300 trillion digits of Pi were calculated using Kioxia NVMe SSD cluster
  • The seven-month compute effort ended with Guinness recognition

After StorageReview previously claimed the Pi calculation world record with over 202 trillion digits, Linus Media Group, the creator of the Linus Tech Tips YouTube channel, has now taken it even further.

Working with Kioxia, LMG officially set a new Guinness World Record for the "Most Accurate Value of Pi", reaching an incredible 300 trillion digits.

This milestone was achieved using a high-performance storage setup featuring 2.2PB of Kioxia’s CM Series 30.72TB and CD Series 15.36TB PCIe NVMe SSDs.

(Image credit: Kioxia)

Seven months and no SSD failures

The drives were organized in a NAS system connected to a dual-CPU compute server. The entire operation ran continuously for nearly seven and a half months.

“We knew breaking the Pi record with distributed network storage was going to be difficult - no one had really done it before due to the performance challenges associated with remote storage," said Jake Tivy, Writer & Host, Linus Media Group.

"Fortunately for us, the reliability and performance of Kioxia's NVMe SSDs enabled us to run continuous, intensive compute operations at speeds up to 100+ GB/s for nearly seven months straight, without a single SSD failure."

The project not only broke the previous record but did so by a wide margin. StorageReview’s 202 trillion digit milestone was huge, but it wasn’t officially verified by Guinness World Records.

The last recognized benchmark from Guinness was 62 trillion digits, so this new effort pushed that number nearly five times higher.

"Attaining a Guinness World Records title for the most accurate value of Pi is a tremendous achievement, emphasizing the courage of taking on a challenge and the power of great cooperation and teamwork," said Axel Stoermann, Vice President and CTO for Embedded Memory and SSD at Kioxia Europe.

"Kioxia America's successful collaboration with Linus Media Group enabled the demonstration of the robust capabilities of our NVMe SSDs under the most demanding of workloads. We will continue to advance the capabilities of our flash memory and SSD technology to support supercomputing applications," he added.

The Linus Tech Tips channel released a video documenting the effort, which you can watch below. It also revealed the final digit of the record-breaking result. (Spoiler alert: the 300 trillionth digit of Pi is 5.)
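Record attempts rely on specialized software (typically y-cruncher, which implements the Chudnovsky series with binary splitting and, in this case, distributed NVMe storage for the working set), but the underlying math fits in a few lines. Here's a toy sketch of the same Chudnovsky iteration using Python's standard `decimal` module, good for thousands of digits rather than trillions:

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> str:
    """Compute pi to `digits` decimal places via the Chudnovsky series.

    Each term contributes roughly 14.18 digits; record software uses
    the same series with binary splitting and FFT-based big-number
    multiplication to scale to trillions of digits.
    """
    getcontext().prec = digits + 10  # guard digits for rounding
    C = 426880 * Decimal(10005).sqrt()
    K, M, X, L = 6, 1, 1, 13591409
    S = Decimal(L)
    for k in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // k**3  # exact integer ratio of terms
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return str(C / S)[: digits + 2]  # "3." plus `digits` decimals

print(chudnovsky_pi(20))  # 3.14159265358979323846
```

The hard part of a record run isn't the series itself but the arithmetic on numbers hundreds of terabytes long, which is why the storage cluster, not the CPU, was the story here.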

You might also like
Categories: Technology

iPhones just got Google’s best AI feature for free – and it could genuinely make me switch back from Android

Wed, 05/21/2025 - 12:00

I've never owned another smartphone apart from an iPhone up until this year. However, as AI makes its way onto every tech product on the planet, I needed to try Android to understand the differences between artificial intelligence in the two ecosystems.

After using a Samsung Galaxy S25 for a few weeks, I returned to my iPhone 16 Pro Max. Not because it was better, but because the ecosystem you've built your life around is usually the deciding factor when choosing between flagship smartphones.

Once back on iOS, I found myself missing one specific AI feature more than any other, and without access to it on iPhone, I quickly found myself living with an Android device again.

That AI feature I'm talking about is Gemini Live, and while you could access it on iOS, the experience was dumbed down. That was until yesterday, at Google I/O 2025, when Google announced that all of Gemini Live's capabilities are rolling out on iPhone, and at no cost.

Here's why Gemini Live is the best AI tool I've ever used, and how adding all of its capabilities to iPhone means I'm ready to jump back to Apple.

What Visual Intelligence wanted to be

(Image credit: Apple)

Gemini Live already existed in the Gemini app on iOS, but it lacked two crucial elements that make the Android version that much better. Firstly, Gemini Live on iOS was unable to access your iPhone's camera, and secondly, it couldn't see what you were doing on your screen. I/O 2025 changed all that.

Now, iPhone users can give Gemini Live access to their camera and screen, allowing for new ways to interact with AI that we've not really seen on iOS before.

Gemini's camera ability is easily one of the best AI tools I've used to date, if not the best, and I'm thrilled iPhone users can now experience it.

What is Gemini Live's camera feature? Well, imagine a better version of what Apple wanted Visual Intelligence to be. You can simply show Gemini whatever you're looking at and ask questions without needing to describe the subject.

I've found Gemini Live's camera functionality thrives in situations like cooking. I used it last week to make Birria Tacos, and not only was I given advice every step of the way, but it was also able to see everything I was doing and help direct me towards a delicious dinner.

Not only did propping my S25 on a stand give Gemini Live the perfect angle, but because it can connect to Google apps, I could ask it to get information on a recipe directly from the content creator's video. No need to constantly touch your phone with dirty hands in the kitchen, and no need to even check a recipe anymore. Gemini Live can do it all.

An AI companion every step of the way

Screen sharing allows Gemini Live to see what's on your display at any time, letting you ask questions about imagery, something you're working on, or even how to complete a puzzle in a game. It's seriously cool, and much like the Apple Intelligence-powered Siri we were promised, but never received, back at WWDC 2024.

Gemini Live's full free rollout has just started, so we're yet to see how this functionality will work on iOS. That said, if it works half as well as it does on Android, this will be a feature I could see a lot of people falling in love with.

Gemini Live and its multiple ways of interacting with the world completely unlock AI on a smartphone, and now that iPhone users can access it too, I have no reason not to return to the Apple ecosystem.

You might also like
Categories: Technology

Google updates its enterprise agents for more autonomy

Wed, 05/21/2025 - 11:30
  • Google launches Python ADK v.1.0.0 and Java ADK v.0.1.0
  • Its agent-to-agent communication protocol has also been improved
  • Python SDK for A2A makes it even easier to manage inter-agent communications

Google has announced a series of updates to its AI agents in the hope that they operate less like tools that colleagues use and more like efficient colleagues themselves.

"We envision a future where intelligent agents are not just tools, but collaborative partners in solving complex challenges, streamlining workflows, and unlocking new possibilities," Product Manager Polong Lin and Developer Advocate for Cloud AI Holt Skinner said in a blog post.

The latest updates focus on enhanced agent management and improved agent-to-agent communication – a sign of AI becoming even more autonomous.

Google wants to make AI agents more autonomous

The tech giant has enhanced its Agent Development Kit (ADK) to include a new production-ready and stable v.1.0.0 release of its Python ADK, which it says is already used by Renault Group, Box and Revionics. To coincide with this stable release, Google has also confirmed its first release of the Java ADK v0.1.0.

In terms of management, acknowledging that the Vertex AI Agent Engine already helps developers deploy, manage and scale agents in production, Google has also launched its Agent Engine UI for better agent lifecycle management.

Key features of the new Google Cloud console UI include a dashboard for managing deployed agents, metrics such as requests and CPU usage, session management and tracing, and deployment details with debugging tools.

Further improvements to the Agent2Agent (A2A) protocol have also been confirmed, including v0.2, which brings stateless interaction support for lightweight communications and standardized authentication using an OpenAPI-like schema. A Python SDK for A2A has also been released to make it easier for developers to integrate agent-to-agent communication within Python-based agents.

Keen to show off the power of A2A, Google boasted of support from Auth0, Box AI, Microsoft, SAP and Zoom.

"These advancements in our ADK, Agent Engine, and A2A protocol are designed to provide you with a comprehensive and flexible platform to bring your most ambitious agent-driven projects to life," Lin and Skinner commented.

You might also like
Categories: Technology

Vulnerability that allows full admin takeover found in premium WordPress theme

Wed, 05/21/2025 - 11:00
  • 'Motors' allowed threat actors to take over admin accounts
  • This enabled full website takeover
  • The developers released a fix

Motors, a premium theme for WordPress, was carrying a critical-severity vulnerability that allowed malicious actors to fully take over compromised websites.

The privilege escalation flaw, due to the theme improperly validating user identities before updating passwords, is now tracked as CVE-2025-4322, and has a severity score of 9.8/10 (critical).

Security researchers at Wordfence, who first spotted the bug, explained that threat actors could use it to “change arbitrary user passwords, including those of administrators, and leverage that to gain access to their account."
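Wordfence hasn't published the exact code path, and the Motors theme is written in PHP, but the bug class is straightforward to illustrate. Here's a hypothetical Python sketch (all names invented, not the actual theme code) of a password-update handler that skips identity validation, alongside a fixed version:

```python
# Hypothetical sketch of the privilege-escalation bug class behind
# CVE-2025-4322 -- not the actual Motors theme code. All names invented.

USERS = {"admin": {"password": "old-secret", "role": "administrator"}}

def update_password_vulnerable(params: dict) -> str:
    # BUG: the target account and new password both come from the
    # request, and nothing ties the request to the account's owner
    # or to a valid, unexpired reset token.
    USERS[params["user"]]["password"] = params["new_password"]
    return "password changed"

def update_password_fixed(params: dict, session: dict) -> str:
    # Fix: only proceed when the authenticated session belongs to the
    # account being changed (a real fix would also accept a verified
    # reset token and re-check the current password).
    if session.get("user") != params["user"]:
        raise PermissionError("identity not verified for this account")
    USERS[params["user"]]["password"] = params["new_password"]
    return "password changed"
```

In the vulnerable version, any visitor who can reach the endpoint can set the admin password to a value of their choosing and simply log in, which is exactly the takeover Wordfence describes.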


Premium themes

Obviously, having access to an admin account grants malicious actors all kinds of privileges, up to and including complete website takeover. All versions up to 5.6.68 are affected, and the update that addresses the flaw was released on May 14, 2025. Since themes are not as simple to suspend or swap as plugins, users are advised to update the Motors theme as soon as possible.

Motors is a car dealer WordPress theme, designed for auto dealers, classified listing, auto rental, boats, repair services, and motorcycle dealers. It is developed by a company called StylemixThemes and, according to BleepingComputer, is one of the top-selling themes of its kind. On the Envato market, it is selling for $79 and has been sold more than 22,300 times.

WordPress is the world’s number one website builder platform, powering more than 40% of all websites on the internet. This also makes it a major target for cybercriminals but, since the core is mostly secure, hackers look for exploits in themes and add-ons, which are used as stepping stones for further compromise.

For example, in early March this year, news broke that malicious JavaScript code had been deployed to more than 1,000 WordPress websites through compromised add-ons. Users are advised to keep only the add-ons they actually use, and to keep them updated at all times.

Via BleepingComputer

You might also like
Categories: Technology

Wednesday season 2's sneak peek teases bucketloads of Addams Family lore for the beloved Netflix series

Wed, 05/21/2025 - 10:15
  • Wednesday season 2’s latest sneak peek teases more family-based lore
  • This news comes after a big casting announcement, confirming additions including Joanna Lumley and Steve Buscemi
  • The series arrives on Netflix in two parts in August and September

The first season of Wednesday was a huge success for Netflix, and as it returns for a second outing, the creators have teased that we should expect a lot more Addams Family lore.

Recently, I raved about the Netflix show's confirmed all-star cast, many of whom will be joining the family. Season one put a lot of focus on Wednesday Addams' time at Nevermore Academy, and while we will be returning there, we're also going to see more of Wednesday’s family and the dynamics between its various oddball members.

This is excellent news if, like me, you’ve been dying to see more from Morticia and Gomez especially, as their relationship is one of the most iconic on-screen partnerships out there.

Take a look at Netflix's latest teaser below for a fun, behind-the-scenes look at what's coming up in Wednesday season 2.

What do we know about Wednesday season 2?

Wednesday season 2 is the highly anticipated follow-up to one of the best Netflix shows, and the second installment will be split into two parts – an approach Netflix has taken with other shows recently, including Cobra Kai season 6, which was split into three parts.

The first four episodes will be released on August 6, followed by part two on September 3, when the remaining four will premiere. That means there isn't too much of a wait between the two parts, thankfully, but this won’t be a series you can binge-watch in a single weekend unless you wait patiently until September.

Plenty of familiar faces will be returning for Wednesday season 2 including Tim Burton as creator, Jenna Ortega as the titular protagonist, and Catherine Zeta-Jones as Morticia.

Big-name newcomers include Steve Buscemi as the new Nevermore Academy principal, and Joanna Lumley as Wednesday's grandmother. The cast really does look excellent this season, and we should be in for a wild ride.

You might also like
Categories: Technology

The Fujifilm X-E5 could be landing in June – will it fix the X-E4’s minimalist mistakes?

Wed, 05/21/2025 - 10:10
  • Fujifilm X-E5 rumored to be launching in June
  • News comes four years after the release of the X-E4
  • X-E4 received mixed reviews for its minimalist approach

With retro looks and tactile controls, Fujifilm’s early X-E cameras gained an enthusiastic following. But when the X-E4 launched in 2021, it split opinion: though praised for its compact style, it left fans of the series feeling that Fujifilm had stripped away too many features. Now, fresh rumors suggest its successor is set to land next month. Many will be hoping it rights those minimalist wrongs.

According to Fuji Rumors – a reliable industry source – the Fujifilm X-E5 is due to make its debut in June. While Fujifilm hasn’t officially confirmed the model’s existence, news of a summer launch doesn’t come as a huge surprise. Talk of the new model’s imminent arrival has been circulating since late last year.

The big question is what approach Fujifilm will take with the X-E5. Back in December, we touted it as one of the most exciting cameras of 2025. Since then, we’ve learnt nothing substantive about its specs. There have been no major online leaks, which is unusual for a camera that’s deep into development.

In the absence of anything concrete, we can only speculate about what the X-E5 will look like – and what we want to see from Fujifilm’s comeback camera.

What the X-E5 needs to get right

On paper, the X-E4 had the makings of a winner: it inherited the 26MP X-Trans sensor and X-Processor 4 from Fujifilm’s more expensive X-mount cameras. It also retained the compact proportions and attractive rangefinder styling of previous versions. But the physical execution proved divisive. In pursuit of minimalism, Fujifilm removed a lot of the physical controls found on the X-E3.

Gone were the rear control dial, AF lever and flash, plus a few buttons. Fujifilm ditched the grip, too. With no in-body image stabilization or weather-sealing either, many felt that the X-E4 prioritized style over substance. Critics pointed to the lack of dual SD card slots as evidence that the X-E4 wasn’t a tool for serious photographers. All of which was harder to justify with a body-only launch price of $850 / £799 / AU$1,399.

When the X-T30 II launched in November 2021, it looked like much better value. In a telling indictment of the X-E4’s shortcomings, Fujifilm stopped X-E4 production after just a couple of years. Ironically, the resulting rarity of the model actually drove its price up.

Against that background, the X-E5 has two jobs to do: win back favor among Fujifilm enthusiasts and re-establish the position of the X-E series in the maker’s mirrorless line-up.

The former doesn’t have to be difficult. Looking at online comments, the formula for success needs to include the reintroduction of certain physical controls, including the rear dial and a built-in grip. Other common wishlist items are a high-res EVF, a bigger battery and a 40MP APS-C sensor. In-body image stabilization would really sweeten the deal.

That spec sheet would signal a clear shift back towards the enthusiast roots of the X-E series. In reality, it’s unlikely we’ll see all of those features on the new camera, but it’s a fair bet that Fujifilm will pitch the X-E5 as a proper photographer’s camera, especially with the X-M5 now catering for videographers.

If Fujifilm can inject the X-E5 with enough substance while keeping its analog charisma intact, it could be one of the sleeper hits of the year. Then it simply needs to make enough units to meet the inevitable demand. Rest assured, we’ll be keeping a close eye on Fujifilm announcements and rumors over the next few weeks.

You might also like...
Categories: Technology

Lies of P is getting a massive free update, adding difficulty options as well as an extremely welcome boss rush

Wed, 05/21/2025 - 10:00
  • Lies of P is set to receive a huge free update
  • It features two new difficulty modes in addition to the current standard
  • Battle Memories and Death March modes will let players replay boss fights

If you thought Lies of P: Overture was the only piece of substantial content coming to one of the best soulslike games this year... well, you thought wrong. That's because Lies of P is also set to receive a free update that will be made available to all players - irrespective of whether you're buying the Overture expansion or not.

The update is bringing two new difficulty modes to the game. These are Butterfly's Guidance and Awakened Puppet, both of which provide a more story-focused, easier difficulty for those not used to the intensity of soulslikes. The default difficulty that's in the game right now is being renamed 'Legendary Stalker.'

There's no word yet on exactly what effects these difficulty modes will have on Lies of P. But it's easy to imagine reduced damage scaling for enemies and perhaps a decrease in enemy density depending on which difficulty you select.

The update is also bringing a dedicated boss rush mode to the game. Named Battle Memories, you'll be able to access this mode at the Hotel Krat Stargazer after clearing the game at least once.

Battle Memories will let you challenge previously defeated bosses from both the base game and the Overture DLC (downloadable content). Each boss has five difficulty tiers, with the harder Tiers 4 and 5 locked until you've cleared the previous ones (you'll get access to Tier 4 after beating Tier 3, for example).

Publisher Neowiz has also teased additional unlockables for those willing to brave the harder tiers: "Who knows what rewards await those who overcome these hardening challenges?"

But that's not all, as Battle Memories is being paired with another mode called Death March. This mode challenges players to defeat a minimum of three bosses consecutively with a limited pool of items and healing Pulse Cells. You're also able to set custom scenarios here, selecting the bosses you wish to face. The difficulty tiers of Battle Memories can also be applied here.

It's sounding like a very substantial update to go alongside the upcoming Overture expansion. With multiple difficulty options, boss rush modes, and the potential for unique rewards, there's really something for everyone here.

Finally, this Lies of P update will arrive the same day the Overture DLC releases, sometime in the summer of 2025.

You might also like...
Categories: Technology

The price of AI? Adobe hikes Creative Cloud subscriptions for some with new Pro plan – here’s what you need to know

Wed, 05/21/2025 - 10:00
  • Adobe is increasing the price of its Creative Cloud All Apps plan
  • The new plan will include AI tools and credits
  • The plan’s name is also changing to Creative Cloud Pro

Adobe’s Creative Cloud suite contains some fantastic apps for creatives, with Photoshop, Illustrator and more among its contents. But if you currently subscribe to Creative Cloud’s top tier, you might want to look away now, as Adobe says its price is about to jump to a pricey $770 a year.

As it is, Adobe’s All Apps plan for Creative Cloud will set you back $659.88 if you pay annually, meaning the new pricing will cost you over $110 more than before. And that’s not the only change to the All Apps plan’s pricing – if you pay monthly, you’ll be asked to fork out $104.99 instead of the current $89.99 fee.

Corporate per-seat plans are going up to $99.99 per month (up from $89.99), while student and teacher pricing will increase from $34.99 to $39.99. And Adobe is also changing the name of the Creative Cloud All Apps plan to Creative Cloud Pro.

At a time of ongoing economic uncertainty, this will be an unwelcome change indeed for many of Adobe’s users. You’ll need to work out whether the price rises are worth paying, or whether you should start looking for an alternative.

What else is changing?

(Image credit: Future)

In return for the increased cost, Adobe is putting a big focus on artificial intelligence (AI) tools. The company says that customers will get unlimited access to AI features like Photoshop’s Generative Fill and Generative Remove in Lightroom, as well as unlimited access to Firefly Boards, which are used for planning and brainstorming.

Elsewhere, users will be able to use the new Adobe Firefly generative AI app and integrate their own AI models into it. Adobe is offering 4,000 monthly credits for premium AI video, audio and image generation, in addition to the unlimited credits for standard tools.

If you’re not feeling enthusiastic about these new AI features – and the price tags that come with them – you don’t need to stay with Adobe. There are plenty of excellent replacements out there, from the best Photoshop alternatives to the best InDesign alternatives.

And if you do want to stick around, Adobe says the changes will come into effect on your first renewal after June 17, 2025.

You might also like
Categories: Technology

Co-op and M&S food supplier hit by ransomware attack

Wed, 05/21/2025 - 10:00
  • Peter Green Chilled told its clients it suffered a ransomware attack
  • It temporarily stopped delivering goods
  • Markets are feeling the sting of the attack

Peter Green Chilled, a UK logistics company that distributes chilled and frozen food to UK supermarkets, suffered a ransomware attack recently that caused serious problems throughout the supply chain.

According to multiple media reports, the company mailed its customers on May 15, to notify them of the cybersecurity incident that occurred the day before. The BBC, citing the company’s managing director, Tom Binks, said the transport activities were operational, but new orders were not being processed.

No further details about the attack itself were given. Therefore, we don’t know who the attackers are, how they managed to infiltrate Peter Green Chilled’s IT infrastructure, or how much they’re demanding in ransom.


How the attack affects supermarkets

We also don’t know if the attackers stole any sensitive files, as is usual in ransomware attacks. The company is not responding to media inquiries at the moment, it seems.

Peter Green Chilled is not the largest organization in its industry, but it still plays an important role, as it supplies major UK supermarkets, including Tesco, Sainsbury’s, and Aldi. It also services Co-op and M&S, who are currently addressing cyberattacks of their own.

The attack at Peter Green Chilled sent ripples throughout the industry. Speaking on a morning radio show, the founder of The Black Farmer food brand explained that Peter Green’s halted deliveries will probably cost his business around $133,000 a week. And that’s just one business, for one week.

Cybercriminals love targeting critical infrastructure providers, since the enormous pressure forces organizations to pay the ransom demand to keep the business going. For Jamie Moles, Senior Technical Manager at NDR provider ExtraHop, cybersecurity in retail and logistics needs to be treated with the same severity as it is in critical infrastructure.

“Cybersecurity in retail and supply chain logistics must be treated with the same severity as critical infrastructure,” he told TechRadar Pro in an emailed statement. “Protecting digital systems is no longer optional, and modernising how organisations can see into their networks will improve resilience against threats like ransomware, ensuring continuity and trust in the systems we all rely on every day."

Via The Register


‘I made a lot of mistakes with Google Glass’: Google’s Sergey Brin admits missteps but says Android XR has a bright future for one big reason

Wed, 05/21/2025 - 09:46

Sergey Brin, the man most responsible for Google Glass, is out of retirement and back at the company he cofounded, helping make AI happen. He's also touting a new kind of "Glass" or, more accurately, Android XR smart glasses while admitting to some big Google Glass missteps.

Brin unexpectedly took the stage at Google I/O 2025 for a sitdown with Big Technology Podcast’s Alex Kantrowitz and DeepMind's CEO Demis Hassabis on Tuesday. While the discussion mostly revolved around Gemini, Google's generative AI platform, Kantrowitz asked Brin what he learned from the Google Glass experience and how he might apply that to the modern Android XR Glasses project.

With bracing candor, Brin told Kantrowitz, "I definitely feel like I made a lot of mistakes with Google Glass, I'll be honest."

It's quite an admission from Brin, who more than a decade ago was Google Glass' biggest champion and memorably hosted a launch featuring wing-suit skydivers jumping from a plane while wearing Google Glass. It was a spectacular moment, but even there, Brin now sees a flaw and how he might avoid a similar misstep.

Talking about how much he missed that big launch moment, Brin turned his attention to the new XR glasses' own launch. "Maybe...we should probably polish the product this time, when it's ready and available, and then we'll do a really cool demo. So, that's probably a smart move."

Brin may see the blemishes and ultimate failure of Google Glass, the first wearable smart glasses that let you use gestures, head tilts, and even blinks to take a picture, but there was a moment between 2012 and 2014 when Google Glass was everywhere.

I wore my pair to a fashion show and CES, and on numerous network TV shows. Brin was spotted on the subway wearing his. They were, for a time, a cultural phenomenon, but also ridiculous, almost as quickly spawning the meme "Glassholes".

Would be cool if this guy showed up again at #GoogleIO. This is from a decade ago (at a different event) when Sergey showed me how to take a selfie with Google Glass pic.twitter.com/y0zlrMyR5c – May 14, 2024

Still, I appreciated Brin's enthusiasm, and when I ran into him more than a decade ago, while wearing Google Glass, naturally, he graciously took the time to show me how you could take a selfie with Google Glass (yes, it involved removing them from your face).

Glass suffered early on from availability and pricing issues ($1,500 / £1,000, or around AU$2,000) and Brin points to his naivete as the cause: "I just didn't know anything about consumer electronics supply chains really, and how hard it would be to build that and have it be at a reasonable price point....This time, we have great partners that are helping us build this."

The point here is that instead of Google trying to figure any of this out, it now has companies like Samsung, Gentle Monster, and Warby Parker, all experts in building consumer products, supply chains, and retail pricing, building the Android XR Glasses for them.

AI will make the clear difference

Google Glass faded into ignominy, but Brin sees Google's return to smart glasses in a fresh light, and it's mostly because of his new pet project at Google: AI and Gemini.

Aside from the form factor, which did not look like normal glasses, Google Glass might have been ahead of its time. They were "smart" without really having any true smarts of their own.

"Now, in the AI World," said Brin, "the things these glasses can do to help you out without constantly distracting you, that capability is much higher." And it potentially makes interacting with Android XR Glasses much more natural. It's also why Google and its partners are putting Gemini at the center of these Android XR wearables.

A Google presenter wearing Google Android XR Glasses at Google I/O 2025.

Finally, Hassabis, who'd been watching the whole exchange, chimed in, "I feel like the universal assistant is the killer app for smart glasses, and I think that's what's gonna make it work."

This all makes sense; a powerful AI sitting resident on your face in a familiar form factor that asks for nothing more than voice commands to do your bidding, but can also watch and act on your behalf. That's the future of smart glasses, and one that Google Glass aspired to but never achieved.

Android XR Glasses will be Brin's second smart glasses act and, perhaps, the one he'll ultimately be best remembered for.


Senua's Saga: Hellblade 2, one of my favorite games of 2024, is finally getting its long rumored PS5 release this summer

Wed, 05/21/2025 - 09:40
  • Senua's Saga: Hellblade 2 officially confirmed to be coming to PS5 this summer
  • The game was originally released as an Xbox Series X|S and PC exclusive
  • PS5 Pro enhancements confirmed, along with other new features

Developer Ninja Theory has today, on the game's first anniversary, confirmed that Senua's Saga: Hellblade 2 will be coming to PlayStation 5 "this summer".

In an official announcement posted on the developer's website, as well as a video that you can see below, Studio Head Dom Matthews confirmed the news, adding that the release will include some "new features" that will also be coming to the PC and Xbox versions of the game via an update. No firm release date has been given right now, however.

The post goes on to say that Ninja Theory has been "working hard to fully optimise Hellblade 2 for PS5 and PS5 Pro, to give you the very best experience we can," which I'm hoping means some excellent enhanced visuals or graphics features - especially for PS5 Pro and to make use of PSSR. However, no specifics are mentioned, and it looks like we'll have to wait to hear more.

I really enjoyed Hellblade 2 and was longing for a reason to replay it, so this is music to my ears. The game is bleak, almost miserable, but it's technically brilliant, with an intensely gripping story and some of the most richly atmospheric settings and locations I've experienced. It's absolutely mesmerizing, and it has lived long in my memory since I finished it for my review.

Ninja Theory games have long been on PlayStation, and it was somewhat strange to see the studio release a game and not have it launch on PlayStation, too. This is down to the studio being acquired by Xbox Game Studios back in 2018 and thus releasing the game exclusively on PC, Xbox Series X and Series S, and Xbox Game Pass last year when it launched on May 21, 2024.

But now, Senua's Saga: Hellblade 2 can join the ranks of Ninja Theory PlayStation games that include its predecessor, as well as Heavenly Sword, Enslaved: Odyssey to the West, and DmC: Devil May Cry when it arrives this summer.

