ViewSonic is set to unveil its latest monitor, the VP2788-5K, at the upcoming Pepcom Digital Experience in January 2025.
Designed for desktops, the 27-inch panel (via TechPowerUp) is set to make it the smallest 5K (5120 x 2880) resolution monitor on the market.
ViewSonic is launching several other displays at the event, including the VG2748N, a 27-inch 1080p monitor that offers wireless casting capabilities, and the XG275D-4K gaming monitor, which brings 4K resolution with switchable refresh rates.
ViewSonic high-resolution monitors
ViewSonic’s VP2788-5K is a compact device with a 99% DCI-P3 color gamut and robust connectivity options, including Thunderbolt 4, HDMI 2.1, USB-C and A, and DisplayPort.
The monitor is expected to be available in the first quarter of 2025, possibly before the end of March.
Personally, I'm cautious about getting too excited about the VP2788-5K given that ViewSonic's previous 8K offering, the VP3286-8K, never hit the market.
Nevertheless, Jeff Muto, the company's Business Line Director, said, "ViewSonic is excited about its new line of product offerings in 2025."
"Our new desktop monitors, along with our current slate of portable display devices," he went on, "showcase how ViewSonic continues to expand its display solutions to offer more choices, features, and functionality to any type of work or play lifestyle."
Wolf Man is the latest Blumhouse horror movie to grace our screens, with horror director Leigh Whannell returning to showcase his take on the classic Universal monster movie. He's done it before with The Invisible Man, spinning it as a tale of gaslighting and domestic abuse, and his vision for Wolf Man tells a similar, emotionally devastating tale, making it a great entry into this year's new movies.
Speaking to TechRadar, lead actor Christopher Abbott spoke about why Blake's transformation into the horrifying titular character means the horror is as much a tragedy as anything else, as the theme of loss runs throughout the narrative.
When asked if any other performances influenced his, he told us: "Yeah, I would say The Fly, Elephant Man, there's a little David Lynch in there. I guess there's sort of like a tragic element with both of those monster creatures, you know, and I think this has that."
This was one of my most anticipated horror movies of 2025, and it really did deliver. While, yes, it was scary and the jumpy moments were effective I did find myself more disturbed by the psychological torment and the emotional moments, especially when it came to the breakdown and total loss of communication between Blake and his wife and daughter (played by Julia Garner and Matilda Firth, respectively).
(Image credit: Blumhouse)
"It's a big reason why I wanted to do the movie because when I first saw the designs, I just thought there was something very tragic about the monster."
Christopher Abbott, actor
Wolf Man features some brilliant scenes where the POV shifts from the wolves to the humans, so we get to see both sides of the situation. Their inability to communicate and understand each other makes Blake's transformation deeply sad, as he's morphed into something completely unrecognizable, mirroring the ways we can't communicate with any species except our own. That sense of isolation and the primal animal instinct taking over is the real horror at the heart of Wolf Man. The audience already knows Blake and his wife Charlotte's marriage is strained, and now they've had the ability to talk it out ripped away.
Abbott added: " They're miscommunicating. They're talking around each other, they're not communicating well. You just need that idea to then help set up the more fantastical thing where, where the communication then gets taken from you. And then how do you, how do you then communicate when you're not even, uh, physically able to?"
Not only was there the breakdown of a bond between husband and wife, but also a father and daughter. Abbott praised his young co-star Matilda Firth, who played his on-screen daughter Ginger, revealing: "She was oddly seasoned. It felt like she was oddly seasoned. I loved her, but it was almost off-putting. She felt too much like a pro. I felt like I was working with like an 80 year old theater vet. But she just takes things in stride. She's there to play. You can throw something at her and she'll do it or try it."
Wolf Man is in theaters from Friday, January 17.
Mini PCs were a booming market in 2024, and 2025 shows no signs of slowing down in this regard - and there’s one particular model that caught my eye.
The Mind 2 mini PC from Khadas, which the firm flaunted at CES 2025, is an impressive bit of hardware for professionals and casual users alike.
This modular device bears more of a resemblance to a bulky SSD than a mini PC, but is powered by the Intel Arrow Lake platform, giving it serious bang for your buck.
Power boost
Boasting an Intel Core Ultra 7 225H processor and built on the new Arrow Lake-H architecture, the Mind 2 marks a significant improvement on Khadas’ previous model.
The Mind 2 includes general performance improvements, but also great energy efficiency, enhanced integrated graphics, and, given the sharpened industry focus on the technology, improved AI processing capabilities.
“Compared to its predecessor, the Mind 2s boasts remarkable improvements in both single-core and multi-core performance, delivering a more efficient computing experience,” Khadas said in its launch announcement.
All told, the Mind 2 is a powerful mini PC that’s flexible enough in its capabilities to span a range of functions - from professional design and video editing to complex AI tasks.
What separates the device from counterparts in the industry, however, is the suite of complementary tools and features that accompany it. Khadas is keen to point out that the Mind devices come with an entire ecosystem of capabilities surrounding them.
An AI developer’s dream
Aimed primarily at AI developers, the mini PC comes with the Mind 2 AI Maker Kit. Powered by the Intel Lunar Lake platform and an Intel Core Ultra 7 258V processor, the Maker Kit boasts up to 115 TOPS of computing power.
This has the potential to significantly enhance AI model efficiency and optimization. The Mind Maker Kit essentially acts as an agile deployment server and edge computing platform, allowing devs to deploy models locally or at the edge.
That means lower latency and better efficiency for real-time applications, as well as better data privacy. Combine this with the fact that it weighs a paltry 435 grams, and this makes it a perfect device for developers on the move or in remote work environments.
The newest addition to the Mind ecosystem is equally tantalizing and could be a game changer for user productivity.
Mind xPlay is focused specifically toward “mobile work and multi-scenario applications”, according to Khadas. This feature allows users to integrate with accessories such as the Mind Dock or Mind Graphics GPU expansion module.
Khadas Mind 2: Pricing and other features
The Khadas Mind 2 comes complete with Thunderbolt 4 and USB4 ports, marking an improvement on the previous model.
Storage can also be expanded via SSD, and the device has a built-in 5.55Wh battery.
It doesn’t come cheap, however, with pricing set to start at $799, and configurations can be tuned based on user preferences.
The options include a base-level device boasting an Intel Ultra 5 125H processor, although this version is limited to 16GB memory and 512GB storage.
Opting for the Ultra 7 155H model does offer better flexibility and general performance, but there’s quite a jump in price. Users can select 32GB or 64GB memory with this line and up to 2TB of storage capacity. These setups will set you back more than $1,000.
Fujifilm has been tipped to launch several mirrorless cameras in 2025, including the X-E5 and an intriguing medium-format compact called the GFX100RF. But if you're looking for a cheap sidekick that's built for pure fun, then the rumored Instax Wide Evo could be the most exciting of the bunch.
According to Fuji Rumors, the Instax Wide Evo will launch "soon," and the formula sounds pretty simple. Take the Fujifilm Instax Mini Evo (a hybrid instant camera that blends digital and film snapping), cross it with a Fujifilm Instax Wide 400, and you've got a pretty good idea of what to expect.
Promisingly, Fuji Rumors' sources claim this Instax Wide Evo will be "regarded as the best-looking Instax camera ever made." That's subjective, but I think the Instax Mini Evo is still the most stylish instant camera out there.
Yes, it's made out of plastic, but the leather-and-chrome, rangefinder-style design is the closest thing to an instant version of a Fujifilm X-Pro camera.
Right now, rumored specs are thin on the ground – we don't yet know if it'll inherit the Mini Evo's 35mm equivalent lens, automatic flash, self-timer, and other features. But one added detail in the rumors is that there'll be a "film rewind knob" on the body, which you'll turn when you want to print your photos.
That all sounds very fun, but it's the overall concept of a stylish, versatile Instax Wide camera that I'm looking forward to...
Why I want the Instax Wide Evo
The Fujifilm Instax Wide 400 (above) produces lovely big prints, but it's not the best-looking camera around and lacks modern features (Image credit: Beth Nicholls)
Instax film comes in three sizes (Mini, Square, and Wide), and I've always preferred the Wide format because it's the closest thing to a real photo rather than a little souvenir sticker. Wide is basically the size of two Mini prints, with a photo size of 8.6 x 10.8cm (or 3.4in x 4.2in).
The problem is that Fujifilm only has one Wide camera – and as our Instax Wide 400 review shows, it's not the most fully-featured or handsome of things. An Instax Wide Evo could fix these issues nicely while solving one of the main drawbacks of the Wide format – mounting film costs.
The benefit of a 'hybrid' instant camera like the Evo series is that, because the images are captured digitally and then printed, you can choose which ones to print out – which is helpful when 20-shot packs cost $24.99 / £16.99 each. Sure, it isn't quite the pure experience of exposing film directly and waiting for it to develop, but it is much more practical, particularly if you're a parent.
I recommended the Fujifilm Instax Mini Evo to a friend who wanted an instant camera for their kid, and they love it. The experience is still fun and intuitive compared to simpler Instax models. And there's the added benefit of being able to connect the camera to other smartphones so that it can act as a Bluetooth printer at parties.
Some more examples of Instax Wide prints, from our Fujifilm Instax Link Wide review. (Image credit: Future / Tom Morgan)
The cherry on top would be if the Instax Wide Evo could also print directly from my Fujifilm X-T5 camera, but that's not guaranteed. For some reason, Fujifilm has restricted direct printing to a few camera models, with the rest needing to use the smartphone app.
Still, that wouldn't be a deal-breaker and I'd almost certainly buy an Instax Wide Evo if it lives up to its rumored billing. And it seems we won't have to wait long to find out.
Whether you already own a Galaxy smartphone or are keenly curious about what may arrive at Samsung’s next Galaxy Unpacked event on January 22, 2025, the tech giant has some good news when it comes to its Care Plus warranty program.
For both existing and new Samsung Care Plus Theft and Loss customers with a Galaxy smartphone, fixing a cracked screen will now cost $0 instead of $29. That’s a considerable saving for a same-day repair, and the benefit is unlimited: if you happen to crack the screen several times, each same-day repair to get your device back online will still cost $0.
Considering all of Samsung’s Galaxy smartphones feature glass touchscreens, a crack is not only likely – even with improvements to durability – but can really take away from the functionality a phone is supposed to provide. And an especially rough crack can be hard on your fingers, too.
(Image credit: Pixabay)
Depending on your Galaxy smartphone, Samsung Care Plus Theft and Loss can cost between $8 and $18 monthly. However, all tiers now feature unlimited, same-day screen repairs for $0. You’re also covered for backglass repairs, liquid damage, and even theft. Many of the plans also include set-up help and general support.
Furthermore, Samsung also offers Care Plus Theft and Loss for smartwatches and tablets, both of which benefit from this new $0 same-day screen repair. The good news is that while other repair partnerships have stopped, more than 700 Samsung-authorized locations still can perform the repair. This means you can bring in a cracked device with Care Plus Theft and Loss, get it repaired, and be on your way.
It’s likely no coincidence that Samsung is rolling out this new price adjustment for Care Plus Theft and Loss ahead of Galaxy Unpacked, where we expect to see the next Galaxy S family of smartphones – the S25, S25+, and S25 Ultra – unveiled.
In fact, in the lead-up to the event on January 22, you can already ‘pre-reserve’ the next Galaxy. We wouldn’t be shocked to see some type of Care Plus discount included alongside preorders of the forthcoming smartphones, and considering the benefits the extended warranty provides, it’s likely worth the investment.
You can sign up to pre-reserve the next Galaxy here, find the nearest Care Plus authorized location here, and read all about what TechRadar is expecting at the January 22, 2025, Galaxy Unpacked here.
In September 2024, we reported how the UnifyDrive UT2 portable NAS device's RAID-configurable storage made it a strong, unique option for creatives against stationary alternatives.
Now, the company has announced the product's latest iteration, the UP6, at CES 2025 (via GlobeNewsWire), offering storage while also leveraging AI to act as a smart assistant for managing it on the go.
The UP6 offers up to 48TB of storage and supports instant file transfer and backup with one-click support for SD, TF, and CFe cards.
Intel Core Ultra processor's AI capabilities
At the heart of the UP6 is an Intel Core Ultra processor, enabling the use of AI features such as natural language search and facial recognition, in addition to handling large, encoded files.
It comes with a 10 GbE Ethernet interface and two 40Gbps Thunderbolt 4 (TB4) ports, supporting rates of up to 8000 MB/s; that's a transfer of 1TB of data in about two minutes.
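As a quick back-of-the-envelope check on that two-minute figure, the short TypeScript sketch below runs the arithmetic using decimal units; treat it as illustrative only, since real-world throughput will depend on the drives installed, protocol overhead, and workload.

```typescript
// Rough sanity check of the quoted figure: at 8,000 MB/s, how long does
// 1TB (decimal, i.e. 1,000,000 MB) take to transfer?
const rateMBps = 8_000;     // claimed peak throughput in MB/s
const sizeMB = 1_000_000;   // 1TB expressed in megabytes (decimal units)

const seconds = sizeMB / rateMBps; // 125 seconds
console.log(`${seconds}s ≈ ${(seconds / 60).toFixed(1)} minutes`); // "125s ≈ 2.1 minutes"
```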
The device features a touchscreen HDR display and supports wireless access points for seamless collaboration in locations without internet access.
UnifyDrive claims the device’s AI models that power the NAS drive's search function run locally on the machine.
Bin Yuan, founder of UnifyDrive, noted, “the era of bulky, unattractive, slow, plastic NASes hidden in networking closets is over. The UP6 lets you bring your data with you, offering unmatched portability and power for the most demanding workflows — today and tomorrow.”
“The UT2 has exceeded our expectations as a portable NAS solution that transforms data management for modern lifestyles,” he added.
I don't trust you. I like you, and I want to share my tech knowledge with you, but when it comes to something like flying a drone, I simply won't trust that every random drone owner will follow basic flight safety rules.
But I'm not DJI, the world's number-one drone maker. Arguably the maker of the best drones in the world (its fliers top our best drones list and warrant their own best list), DJI made it clear this week that it fully trusts its drone customers to steer their drones clear of sensitive areas like prisons, airports, and national landmarks.
These so-called no-fly zones were GEO-coded into DJI drones, which meant the flying cameras would automatically steer clear – in the US, at least. Last year, DJI switched those controls off in the European Union, and now it's followed suit in the US.
Flying blind
In a blog update posted this week, the company officially switched its "Restricted Zones" (or No-Fly Zones) to "Enhanced Warning Zones." Instead of automatically stopping drones from flying into an airport, you'll get a warning that you're entering restricted airspace and, as DJI puts it, the company is "placing control back in the hands of the drone operators, in line with regulatory principles of the operator bearing final responsibility."
DJI offers many reasons, like the rise of a strong drone regulatory structure that didn't exist when DJI first started selling its quadcopters in the US more than a decade ago. That's true – there are lots of rules – but they're somewhat inconsistent.
For a time, the FAA required everyone flying a drone of nearly any size (250 grams to 55lbs) to get a drone registration (somewhat less stringent than a license). The FAA rule was eventually struck down so that the majority of small prosumer drones no longer need any registration.
I was sorry about this change, because the light-touch registration process taught new fliers about the rules of the road (air). For instance, they could not fly above 400ft, so as not to interfere with aircraft, and they were not supposed to fly in certain zones that included airports.
I took these lessons seriously but also appreciated that DJI technology had my back and would stop me from flying where I shouldn't. The registration also provided a level of accountability. Your registration number was supposed to be affixed somewhere inside your drone so that if it was found flying where it shouldn't be, the errant device could be traced back to the careless pilot.
With popularity comes responsibility
(Image credit: DJI)
The allure of a DJI drone is strong: who doesn't want to fly a drone? Few companies have their kind of track record across a single category. I've flown numerous DJI drones over the years and can't name a single dud.
In recent years, the drone maker has been on quite a tear. We got the new Avata 2, a fantastic FPV drone that puts you in the drone flight driver's seat to create incredible fly-through videos. There's the new triple-camera Mavic 3 Pro, which might be the ultimate prosumer drone, and now the recently released entry-level DJI Flip that, when folded, looks unlike anything DJI has ever produced.
My point is that DJI has a drone for every taste and flying style. It appeals to an incredibly broad set of consumers. But not every buyer has flown a drone before or understands how to keep them from becoming unintentional weapons of minor destruction.
You think I'm exaggerating? Drones have been reported flying across the flight paths of landing and departing commercial aircraft. Even more worrisome is what happened when a drone got in the way of one of the planes carrying life-saving water from a lake to the Los Angeles fires. LA had to ground those planes until they figured out what was going on.
I doubt the drone pilot was trying to harm the plane or interfere with fire control efforts. They likely wanted the amazing video the drone could capture. But drones in amateurs' hands do not belong in those situations, and yet I think DJI's decision will only make such situations that much more likely.
A looming no-fly rule of its own
(Image credit: DJI)
When I think about DJI's decision, I have to consider its somewhat tenuous place in the US market. Despite its success, DJI has been the target of a potential US ban for more than a year. It was briefly included in a bill to limit some technology and goods, the Countering CCP Drones Act, because, as a Chinese company, there's concern that the Chinese government could use DJI tech to spy on US interests. While DJI managed to avoid a ban, the company still has to prove to the US government that its technology is not a national security risk.
DJI has fought these allegations from the start, but I almost feel like this new decision is a bit of passive aggression. Yes, DJI made the same change in the EU, but doing so now in the US, especially as we face a change of Executive administration, seems like an especially bad idea.
Unless the purpose was for DJI to say, "We had your back on basic drone safety. Now, see what it's like when we don't."
It's hard to imagine that the decision will curry favor here with US lawmakers. In the meantime, the software update went into effect on January 13, 2025, which means our skies are already a Wild West and less safe than they were last year. It's quite possible that we could soon have all manner of DJI drones buzzing airports, the Statue of Liberty, the US Capitol, and other precious locations.
I have no issue with licensed pros flying in these locales because they've cleared their flights with officials. It's the amateurs flying drones with a 5-to-8-mile flight range who will send them buzzing where they don't belong. We don't need to be swatting away DJI drones like so many flies.
I love DJI drones and I love you, dear drone enthusiast readers, but right now I'm struggling with trust on two fronts.
In 2020, Apple switched from Intel to its Arm-based Apple Silicon chips, and since then, running x86-based software on its hardware has been challenging.
To address this, Parallels, the virtual machine (VM) software for Mac users, has released its latest version, Parallels Desktop 20.2, into public preview, finally bringing x86 emulation to Apple Silicon.
The launch means developers can run, develop, and test 32-bit Windows apps on Apple Silicon Macs – especially useful for those working with legacy software that has yet to be ported to Arm-based systems.
A game changer?
By the company's own admission, however, the new version is very much a preview; boot times for Windows VMs range from two to seven minutes, depending on the hardware.
Even after booting, the responsiveness of the system lags, and creating a new virtual machine can take considerable time, up to 30 minutes for Windows and two hours for Linux.
Another major drawback is the lack of support for USB devices. This can be a problem for users who rely on external devices in their workflows, such as printers or scanners.
All virtual machines must run through Apple’s hypervisor; Parallels’ own hypervisor can't be used, which rules out nested virtualization.
Furthermore, there's currently no sound in Intel-based VMs, and some Windows updates aren't working correctly. The emulator also only supports 64-bit VMs, though 32-bit applications remain compatible with them.
The current limitations make it clear that this feature is not yet ready for mainstream use, but the potential is there, and I'm rooting for Parallels to iron out these issues so that it can once again become a powerful tool for enterprise users and developers using Macs in the Apple Silicon era.
Pure Storage has announced an expanded collaboration with Micron which will see the American semiconductor company’s G9 QLC NAND integrated into the storage firm's future DirectFlash Module (DFM) solutions for use in hyperscale data centers.
This move builds on a decade of cooperation between the two companies, spanning seven generations of NAND integration, including Micron's G8 QLC NAND, which has been qualified for production in Pure Storage’s 150TB DFM expected later in 2025 (Pure Storage has previously said it plans to ship 300TB DFMs by 2026).
Micron already uses this NAND chip in its own 60TB SSD.
Addressing performance and efficiency needs
DFM technology promises faster data transfer rates, low latency, and high reliability for data-intensive workloads. The module also reduces energy consumption compared to traditional HDD-based solutions, lowering both operating costs and carbon footprint. The use of NAND with higher areal density (bits per square millimeter) allows for greater storage capacity in smaller physical footprints, contributing to more efficient rack usage and scalability.
“Pure Storage’s collaboration with Micron is another example of our significant momentum bringing the benefits of all-flash storage technology to hyperscale environments," said Bill Cerreta, General Manager of Hyperscale at Pure Storage.
"With Micron’s advanced NAND technology, Pure Storage can further optimize storage scalability, performance, and energy efficiency for an industry with unparalleled requirements.”
Jeremy Werner, Senior Vice President and General Manager of Micron’s Storage Business Unit, added, “Micron’s advanced NAND technologies, combined with Pure’s innovative storage solutions, enable data center operators to address the increasing performance, efficiency, and scalability needs for today’s hyperscale data centers. Built on trust and thriving on innovation, our collaboration with Pure Storage consistently offers cutting-edge storage solutions for hyperscale and enterprise environments.”
The announcement follows that of Japanese memory giant Kioxia, which, like Micron, has a longstanding relationship with Pure Storage. Last year, Kioxia reported it had begun sampling shipments of 2Tb QLC devices, featuring its eighth-generation BiCS FLASH 3D flash memory technology, which Pure Storage also uses in its all-flash storage products.
As someone who takes a fairly laissez-faire approach to skincare despite having storied difficulties with acne and oiliness, I’m excited to see that beauty tech is on the rise. This year at CES 2025, I saw everything from LED face masks to smart mirrors making debut appearances alongside the latest and greatest gadgetry; however, one of the most exciting arrivals was L'Oréal’s new skin testing device, the Cell BioPrint.
Made in collaboration with Korean start-up Nanoentek, this tabletop device uses advanced proteomics (the study of how protein composition affects skin aging) to determine the past, present and future of your skin health, all in a five-minute test.
I had the opportunity to try it out on-site, and I learned a lot about my skin, including some surprising insights I’ll be taking heed of in my new skincare regime.
Want a quick run-down? Check out the timestamped video below.
Lab on a chip
So, how does the Cell BioPrint actually work? The test itself is deceptively simple, but the science behind the scenes is truly impressive.
The test begins by dabbing stickers on the apples of your cheeks 15 times, before inserting them into a solution. The sticker lifts dead skin cells from your cheek, which the solution then strips, leaving behind only the tell-tale proteins identified by L'Oréal as biomarkers for various signs of skin aging.
Next up, you’ll need one of L'Oréal’s “lab on a chip” testing cartridges, on which you’ll dot a few drops of the solution. Much like the lateral flow tests we were so familiar with during the pandemic, this cartridge sucks up the solution, and this is where the actual Cell BioPrint machine comes into play. It features an ATM-esque slot, into which you slide the testing cartridge so it can be measured and analyzed.
While you wait for the analysis to be complete, you’ll also take a few scans of your cheeks and forehead with L'Oréal’s Skin Connect device and answer a short questionnaire about your chronological age and skin type before the machine spits back out your testing cartridge and delivers your results on the connected tablet.
Now, as I mentioned before, I’m no skincare guru, despite a now-decades-long battle with my own complexion. So, anything that removes the guesswork for me is a huge win in my book.
Suffice it to say, then, that I was delighted by the depth and detail of my skincare report from the Cell BioPrint. Not only did it give me some great insight into my skin, but it also told me that my skin’s biological age is actually two years younger than my chronological age - a compliment I was not expecting to receive.
In addition to correctly identifying that my skin tends towards larger pores and oiliness, I also learned that my skin barrier function is weak, and that there’s some unevenness to my skin. One of the most useful learnings is that my skin absorbs retinol well, meaning I can confidently join the leagues of TikTok-informed skincare gurus in worshipping the famous form of Vitamin A.
(Image credit: Future)
Better yet, the device can even recommend specific products for your skin composition. Of course, the recommendations made by the Cell BioPrint are solely from L'Oréal’s family of products, so there’s that to consider; but you can always research for yourself any product dupes or alternatives that fall within your price range or preference.
Having spent years hopping from product to product, I’m feeling a lot more confident about my skincare regimen following my demonstration of L'Oréal’s Cell BioPrint device; but that’s not just down to the tech itself. Cell BioPrint is destined more for department stores and retail environments than for home use, meaning users will have an expert at hand to help break down the results – an important element in today’s age of misinformation.
With a pilot due in Asia later this year, it’s hopefully only a matter of time before this technology lands in a store near you; I’ll race you to the front of the line to see if I can maintain my two-year skin age gap.
Following Nvidia's RTX 5000 series unveiling at CES 2025, we’re waiting for user benchmarks to give us our first look at the RTX 5090's performance and find out how well the new Multi Frame Generation feature really works - but for now, we've got a look at its little brother, the RTX 5080 Founders Edition, and its new power connector.
This comes from a reviewer on the Chinese social media site Bilibili (as reported by VideoCardz), who showcased the RTX 5080 FE along with its new power adapter and stated that the embargo date for reviews will be January 29 (a day before launch) - although this conflicts with a previous VideoCardz report that embargoes will lift on January 24. At the time of writing, the Bilibili post appears to have been taken down - potentially due to a legal notice from Nvidia - but the VideoCardz article is still live, at least for now.
The RTX 4080 and 4090 power connector, the 12VHPWR, wasn't ideal for users: it was a potential fire hazard, and its short length left little space or flexibility for PC case side panels to close over it (it also wasn't very pleasant to look at, frankly).
Based on the early image shown in the VideoCardz article (which we’re not posting here because we’d rather not invoke the fury of Nvidia’s legal department), it seems that Nvidia is providing a much longer and more flexible power adapter now - I recently covered the RTX 4080 Super and its performance in the Resident Evil 4 remake, and the only glaring issue I found wasn't with the card's performance itself, but rather the finicky power adapter.
While there's only so much information we can take from a leaked image like this, the new adapter looks to be a little more case-friendly if it operates in the same manner as the likes of Seasonic's 12VHPWR power cable. VideoCardz also pointed out the additional sense wires that have been added to ensure a secure 8-pin connection, hopefully preventing the connection issues that caused the connector meltdowns we saw with the previous generation.
(Image credit: Nvidia)
What solutions are available for the RTX 4080 GPU power adapter?
For those sticking with the RTX 4080, depending on your PC's power supply, there are plenty of options on the market that work as viable alternatives to the problematic RTX 4080 power connector. It's important to buy the correct cable that provides a sufficient amount of power and is compatible with your PSU - you don't want to run the risk of buying a cheap, unreliable one either.
It isn't exactly clear from the provided pictures whether the new power adapter for the RTX 5080 (and the RTX 5090) will be compatible with the RTX 4080 or any other RTX 4000 cards, but if it is, that could be the easy solution to this matter.
January 30 is on the horizon, so we'll be seeing more of what Nvidia’s new powerhouse GPUs will have to offer - hopefully, there are no recurring issues concerning potential melting cables on this occasion.
New AI tools spawn almost hourly these days and companies inundate my email inbox promising the world with tools that can do anything and everything from making my dog speak to editing subjects out of photos. While shiny new products with headline AI features pique my interest, I’ve been quietly waiting for a small, yet significant, ChatGPT update to become available – and now it’s finally here.
An actual AI personal assistant
Since the first AI-powered smartphone tools popped onto the market, I’ve been trying to find the perfect chatbot to become my very own personal assistant, able to learn everything that I tell it and cleverly remind me when I need important information. At the end of last year, I stopped using my trusty task manager apps like Things 3 and Siri to see what OpenAI’s chatbot was truly capable of. After a day without notifications and reminders, I canceled the challenge and returned to my regular workflow; after all, an app meant to help take control of my life that can’t remind me of anything was a complete dealbreaker.
Fast forward to 2025, and OpenAI has added one of its most subtle yet significant improvements to ChatGPT with tasks – and it might honestly be the biggest update we’ve had yet. ChatGPT tasks might not be as headline-stealing as an AI video generator like Sora, or a new model like o3, but it’s the kind of addition that could drastically improve millions of lives around the world, and I can’t wait to incorporate it into my workflow.
Why is ChatGPT tasks such a big deal?
Announced on X, ChatGPT tasks is still in beta, rolling out to ChatGPT Plus, Team, and Pro subscribers. The new feature allows ChatGPT to set reminders, becoming your go-to task manager with the added functionality of all the power that OpenAI’s chatbot has to offer. This means you can go about your day updating ChatGPT with information about your life and then ask tasks to suggest alerts and notifications proactively, such as daily weather reports, reminders of what bus to take, and even when bills are due.
When I first attempted to use ChatGPT as my go-to life management tool, I loved its memory feature because it meant I could tell it things that I wanted to remember and then ask the AI at a later date to bring back that information. It was very useful for tracking bus times and routes, as well as remembering key information that I often forget such as the last time I ordered contact lenses, or when I needed to reorder my blood pressure medication. Now with tasks, I can harness the power of ChatGPT’s memory functionality and dump everything that comes to my brain onto the app so I can focus on the things that matter most instead of remembering the mundane.
I have been waiting a very long time for ChatGPT to have notifications on iOS with reminders and tasks, so much so that I could be tempted to pay the hefty $20 / £16 / AU$32 monthly subscription fee for a Plus account just to get early access. OpenAI says the task management functionality will be coming to free users at a later date.
One of my most anticipated AI tools of 2025 is the major Apple Intelligence update coming to Siri as part of iOS 18.4 in a few months. That update gives Siri personal context and on-screen awareness so it can function as your on-device AI-powered personal assistant. While not built into iOS in the same way, ChatGPT tasks as part of the ChatGPT app or accessed through Siri integration could make OpenAI’s chatbot your go-to for all your everyday needs. Combined with a souped-up Siri, I’m beyond excited at the prospect of AI taking control of my day-to-day life in 2025 so I can focus on the things that matter most, like drinking coffee and eating pizza.
CarPlay is Apple’s digital system for interacting with your car – listening to music, using maps, that sort of thing. And the company has been teasing CarPlay 2 for what seems like forever, all with very little to show for it. Now, though, it looks like we’ve just been given a glimpse of how it will work, including a set of widgets that will give you all sorts of abilities from your dashboard.
The leaked images were posted by MacRumors contributor Aaron Perris on X. There, Perris uploaded four shots of a rectangular dashboard populated with various widgets. All of the images were monochrome, but it’s likely that the final CarPlay 2 release will feature much more color.
The first image displayed a large, empty rectangle that can be populated with widgets. Some of those widgets were shown in the second picture, which depicted square widgets for the Clock, Weather, and Calendar apps. Perris also showed a widget for a combined navigation and music display next to a standalone music player.
Despite that, Perris didn’t reveal where these images came from or provide any more information on what we can expect from CarPlay 2, leaving us with plenty of unanswered questions.
Where is CarPlay 2?
(Image credit: Aaron Perris)
CarPlay 2 has had a long, bumpy road since it was first announced. Apple teased it at its Worldwide Developers Conference (WWDC) in 2022, saying it would release the updated system before too long. Yet there’s been nothing but radio silence since then.
In fact, Apple’s CarPlay website still says that the first models featuring CarPlay 2 will “arrive in 2024.” Clearly, that’s not going to happen anymore.
One reason for the delay could be that CarPlay 2 is not the same plug-and-play outfit as the original CarPlay was. The first edition of the car dashboard system uses the same rectangular layout in every car, making it easy for vehicle manufacturers to include it. CarPlay 2, on the other hand, promises a complete dashboard takeover, which means that Apple has to work directly with each carmaker to weave it into their unique layouts. That has likely, at least partially, caused the delay with the rollout.
The revelation of the new CarPlay 2 screenshots should give Apple fans some hope that work is progressing well. But without any word from Apple, there’s no way of knowing when it will finally make its long-awaited and much anticipated arrival.
Experts have found a vulnerability in Google’s OAuth “Sign in with Google” feature which could allow malicious actors to access sensitive data belonging to businesses that have shut down.
Google acknowledged the flaw but is not doing much to address it, saying instead that it is up to the businesses to ensure the security of the data they are leaving behind.
The vulnerability was first discovered by security researchers from Trufflesecurity, who reported it to Google in late September 2024. However, it was only after Trufflesecurity’s CEO and co-founder, Dylan Ayrey, presented the issue at ShmooCon in December 2024 that Google reacted.
Google suggests mitigations
Here is how it works, in theory:
A business signs up for an HR service using its business email account and the “Sign in with Google” feature. It uses the HR service for things like employee contracts, payouts, and more. Some time later, the business shuts down and lets its domain go. After that, a malicious actor registers the same domain and recreates the email address that was used to log into the HR service.
They then proceed to log into the account on the HR platform, where they can access all the information and files left behind.
Google awarded Trufflesecurity a small bounty, but decided not to pursue a fix: "We appreciate Dylan Ayrey’s help identifying the risks stemming from customers forgetting to delete third-party SaaS services as part of turning down their operation," a Google representative told BleepingComputer.
“As a best practice, we recommend customers properly close out domains following these instructions to make this type of issue impossible. Additionally, we encourage third-party apps to follow best-practices by using the unique account identifiers (sub) to mitigate this risk.”
In other words, it’s up to the businesses to make sure they’re not leaving residual data behind.
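To illustrate the "unique account identifiers (sub)" recommendation in practice, here's a minimal sketch of how a SaaS backend might key accounts on the ID token's immutable sub claim rather than on the email address or domain. It uses Node's google-auth-library for token verification; the client ID and the in-memory user store are hypothetical placeholders for whatever a real service would use.

```typescript
import { OAuth2Client } from 'google-auth-library';
import { randomUUID } from 'node:crypto';

const CLIENT_ID = 'your-oauth-client-id.apps.googleusercontent.com'; // placeholder
const client = new OAuth2Client(CLIENT_ID);

// Hypothetical account store, keyed by the token's immutable `sub` claim.
const usersBySub = new Map<string, { id: string; email?: string }>();

export async function handleGoogleSignIn(idToken: string) {
  // Verify the ID token's signature, audience, and expiry.
  const ticket = await client.verifyIdToken({ idToken, audience: CLIENT_ID });
  const payload = ticket.getPayload();
  if (!payload) throw new Error('Invalid Google ID token');

  // Key the account on `sub`, never on email or domain: if a defunct
  // business's domain is re-registered, the new owner's logins carry a
  // different `sub`, so they can't inherit the old tenant's data.
  let user = usersBySub.get(payload.sub);
  if (!user) {
    user = { id: randomUUID(), email: payload.email };
    usersBySub.set(payload.sub, user);
  }
  return user;
}
```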
Ayrey notes that a quick look through Crunchbase returns more than 100,000 domains that could be abused this way. He suggested that Google introduce immutable account identifiers, and that SaaS providers cross-reference domain registration dates.
Via BleepingComputer
It should come as no surprise to anyone familiar with my work that I spend quite a bit of my free time browsing gaming- and tech-related social media. This includes r/pcmasterrace, a PC gaming community currently embroiled in a virtual civil war over a highly divisive topic: AI-powered resolution upscaling and frame-gen technology.
The debate is largely focused on Nvidia’s DLSS 4 and Multi Frame Generation right now (apologies to Intel and AMD, but XeSS and FSR are often left out of these conversations), with most PC gamers falling into one of two camps: ‘DLSS is great’ and ‘DLSS is bullshit’. Well, it turns out the former camp is a lot bigger than the latter, based on new statistics released by Nvidia.
DLSS use has been steadily on the rise ever since its introduction back in 2019 (in an update to Battlefield V), with Nvidia’s user data now indicating that more than 80% of players with RTX GPUs turn on DLSS in their games - with some individual games sporting even higher percentages. DLSS adoption is becoming more commonplace among developers too, with more than 540 games and apps supporting it, including 15 of the top 20 most-played PC games of 2024.
The DLSS debate
At the end of the day, a community like r/pcmasterrace won’t be truly indicative of the wider PC gaming community: it’s a gathering place for hardcore gamers and PC builders, and that sort of clientele inevitably leads to some hot-blooded discourse.
Some argue that tools such as DLSS and frame-gen are a good way to squeeze extra performance from your PC, while others complain about input latency, ‘fake frames’, and the modern prevalence of AI in gaming. The naysayers aren’t completely without justification, either; DLSS has historically run into some issues with maintaining image quality, frame blurring, and input latency - although like all emergent technologies, it’s only improved with each generation.
Personally? I’m on the fence about it. On the one hand, I do believe that DLSS has improved a great deal since its first iteration, and the Multi Frame Generation feature coming to the newly-announced Nvidia RTX 5000 GPUs feels a bit like dark and forbidden magic - a piece of software quadruples my framerate without me actually needing to do anything? Witchcraft!
But on the other hand, there’s no ignoring that there are some downsides to DLSS and frame-gen tech. While I don’t subscribe to the ridiculous ‘fake frames’ argument I often see bandied about on Reddit - come on guys, it’s not like the regular frames are being lovingly handcrafted by generations of artisanal frame-makers in a Tibetan mountain village - it’s not yet a perfect tool, and there’s one obvious pitfall here.
An AI-powered world of gaming
As my colleague Isaiah Williams recently pointed out, DLSS 4 and Multi Frame Generation can provide some phenomenal results - but there are fears among PC gamers that this could lead to developers falling into an over-reliance on AI tech in games, particularly when it comes to the optimization of PC ports.
As consoles begin to implement similar tech too - with the terribly-named PSSR landing on Sony’s PS5 Pro last year and a DLSS-like feature potentially coming to the Nintendo Switch 2 - it’s clear that upscaling is here to stay even before looking at Nvidia’s latest stats. As the hardware demands of modern triple-A games continue to grow, there’s a fear among gamers that developers will start viewing it as a band-aid for poor game optimization - while users running older hardware that doesn’t support upscaling are left out in the cold.
It’s a legitimate fear, though I don’t think we should be using it to crap all over Nvidia; DLSS and Team Green’s wider gaming software suite are frankly very impressive, and the GPU giant is now consciously focused on mitigating the drawbacks of upscaling and frame-gen through generational improvements and features like the latency-reducing Reflex 2.
Besides, the blame for poor optimization in PC games lies with developers and publishers - while there’s an argument to be made that Nvidia is enabling this behavior, I think it’s fair to say that Team Green is developing and implementing these tools purely with the goal of improving game performance. As for the argument that players with older GPUs shouldn’t be left out, well… unfortunately, we all have to upgrade eventually. So next time you see a spirited argument about frame-gen on Reddit, maybe think about giving Nvidia a bit of slack.
Got some pressing thoughts about this? If you've read this article all the way through then I'll bet you do. Tell me how much of a genius (or moron) I am in our shiny new TechRadar comments section below!
New research from The Access Group finds that 35% of UK workers admit to using generative AI without telling their managers. As employee use of AI is on the upswing, many organizations are still developing their plans for how to govern its use. Shadow AI is quickly becoming a challenge for many IT teams.
Shadow IT isn’t a new concept. The rapid evolution of SaaS technology has created technology sprawl within organizations as employees turn to tech tools to support their day-to-day work. IT is often out of the loop on technology being used within their organizations, so what options do they have to govern new technology, including AI and the risks that come with it? The answer lies in making it easier for employees to bring new technology into the organization with IT’s involvement.
The root cause of shadow IT
There are numerous reasons why employees choose to bring unauthorized technology into their organizations. In the UK, hybrid working models, easy access to cloud services, and the evolution of AI have meant shadow IT has become a major concern for businesses. Some employees may also opt to bring in their own technology because they are too busy or are concerned that they will be bothering IT if they go through the proper channels. Regardless of the reason, the root cause of shadow IT adoption is tied to inefficient and, perhaps, broken processes.
But the responsibility doesn’t solely rest with employees. For example, if an employee does follow the process to submit a technology request but it is delayed or goes unnoticed by the IT team, they’re also likely to turn to shadow IT. In this scenario, organizations not only open themselves up to security concerns around unauthorized software, but they also strain IT teams and waste the time of employees waiting on their requests.
The key to improving IT processes
Far too often, inefficient processes exist because they are manual and disconnected. Businesses don’t know where the breakdown is because they have no visibility into the end-to-end process, data isn’t shared between the people and systems it should be, and the user experience is riddled with challenges. This is where process automation comes into play.
That statement may seem like a given, especially in the context of IT processes. But unfortunately, many companies struggle to automate their processes, including IT processes. The reasons vary, including manual processes being intricate and difficult to automate, legacy systems lacking the integrations needed to automate, and more.
Following a simple framework for automation can help most organizations overcome these challenges:
Map the IT fulfillment process – Get an end-to-end view of each step in the fulfillment process to visualize where bottlenecks and inefficiencies occur. Common bottlenecks occur around assigning requests and leaving reviews in the pending stage.
Enhance experience and scale engagement – It’s also important to look at the user experience to ensure it has low barriers to entry. How are employees submitting their requests to the modernized process? By leveraging low-code application development tools, you can create an intake form for employees that’s digital and intuitive to use.
Automate bottlenecks – Once you’ve identified where inefficiencies are in the process and modernized the user experience, apply automation to streamline the sticking points. For example, automation can help you avoid requests being stuck in the “assignment” stage by automatically assigning requests to the appropriate team or team member (a minimal sketch of this kind of routing follows below). It’s important to note that automated processes can’t succeed with a “set it and forget it” mindset; continual monitoring and improvement are needed to ensure that the process functions properly.
By following this framework, organizations can speed up the fulfillment of new IT requests and curb shadow IT use.
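As a purely illustrative sketch of the auto-assignment idea above, the snippet below routes incoming requests to a team based on simple keyword matching. The categories, team names, and data shapes are hypothetical; a real workflow or ITSM platform would supply its own routing and queueing.

```typescript
// Hypothetical auto-assignment step for an IT request intake workflow.
type ITRequest = { id: number; summary: string };

const routingRules: Array<{ keywords: string[]; team: string }> = [
  { keywords: ['laptop', 'monitor', 'hardware'], team: 'End-User Computing' },
  { keywords: ['saas', 'license', 'subscription', 'ai'], team: 'Software Asset Management' },
  { keywords: ['access', 'password', 'vpn'], team: 'Identity & Access' },
];

// Assign each request to the first team whose keywords appear in the summary;
// anything unmatched goes to a triage queue instead of sitting unassigned.
function assignTeam(req: ITRequest): string {
  const text = req.summary.toLowerCase();
  const rule = routingRules.find(r => r.keywords.some(k => text.includes(k)));
  return rule ? rule.team : 'IT Triage';
}

console.log(assignTeam({ id: 1, summary: 'Request to license a new AI note-taking SaaS tool' }));
// -> "Software Asset Management"
```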
The rapid evolution of technologies like AI will only exacerbate the shadow IT challenge. That’s why organizations must modernize their IT processes today to avoid further shadow IT creep and prevent future security risks. End-to-end process automation is key to understanding where IT processes are breaking down, applying automation, and scaling new processes for maximum engagement – all while reducing the need for employees to turn to shadow IT for their technology needs.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
After what's felt like an eternity, Marvel has finally released Daredevil: Born Again's first trailer to the public.
Just two days after Daredevil: Born Again star Vincent D'Onofrio revealed the teaser had been delayed due to the LA wildfires, a Stories post on Marvel's official Instagram account confirmed that the trailer would arrive today (January 15). And, with the two-minute video debuting at 7am PT / 10am ET / 3pm GMT, you can now watch it below.
As has become the norm with Marvel trailers, there's tons to unpack from the TV-MA-rated show's first teaser. There's a surprising and increasingly tense meeting between Cox's Matt Murdock and D'Onofrio's Wilson Fisk in a diner. Then there are multiple shots of bloody and wince-inducing action sequences. Oh, and brief looks at the series' stacked ensemble cast, including confirmation – if more was needed – that two fan-favorite Daredevil characters in Karen Page and Foggy Nelson are definitely back for Born Again. In fact, there are plenty of familiar faces, including Jon Bernthal's Frank Castle/The Punisher and Wilson Bethel's Bullseye, among a whole host of newcomers, such as the late Kamar de los Reyes' White Tiger and this season's other villain in Muse, a serial killer who's being played by, well, we don't actually know yet.
The trailer may have only just been revealed, but we've known about the Marvel Cinematic Universe (MCU) show's story synopsis for a while. For those who haven't read it, here's what it tells us about Born Again's plot: "Matt Murdock (Charlie Cox), a blind lawyer with heightened abilities, is fighting for justice through his bustling law firm, while former mob boss Wilson Fisk (Vincent D’Onofrio) pursues his own political endeavors in New York. When their past identities begin to emerge, both men find themselves on an inevitable collision course."
Outside of that, we know Born Again will be a direct continuation of the story told in Daredevil, which originally ran on Netflix (it was one of the streamer's most beloved TV Originals) between 2015 and 2019. You can learn more about the Marvel Phase 5 project's confirmed cast and story details in my Daredevil: Born Again hub.
Marvel Television’s all-new series #DaredevilBornAgain premieres March 4 at 6pm PT/9pm ET. Only on @DisneyPlus. pic.twitter.com/4POqX6A8DV (January 15, 2025)
Daredevil: Born Again's first trailer has been a long time coming. The Disney Plus show's inaugural teaser received its worldwide debut at D23 Expo 2024 last August. However, it was exclusively shown to attendees, so the rest of us have had to wait five months for its public reveal.
We have caught glimpses of the highly anticipated series since then, though. Marvel revealed the briefest of sneak peeks at Daredevil: Born Again in a video celebrating the comic titan's 85th birthday in August 2024. A further 20 seconds of new Born Again footage formed part of Marvel's 2025 TV line-up trailer last October, too, but that's all we'd been treated to until today's full trailer debut. Well, aside from leaked set photographs that have Daredevil fans worried about the fate of a beloved character.
Daredevil: Born Again will premiere exclusively on Disney Plus, aka one of the world's best streaming services, on March 4 (US) and March 5 (UK and Australia) – and I, for one, cannot wait.
Since the inception of the internet, passwords have been the primary authentication factor to gain access to online accounts. Yubico’s recent Global State of Authentication survey of 20,000 employees found that 58 percent still use a username and password to login to personal accounts, with 54 percent using this login method to access work accounts.
This is despite the fact that 80 percent of breaches today are a result of stolen login credentials from attacks like phishing. Because of this, passwords are widely understood by security experts as the most insecure authentication method that leaves individuals, organizations and their employees around the world vulnerable to increasingly sophisticated modern cyber attacks like phishing.
In fact, even passwords which are considered ‘strong’ by websites – i.e., they contain more than a dozen characters comprising uppercase and lowercase letters, numbers, and symbols – can still be easily guessed or stolen by bad actors. Once they obtain the password, they can then bypass all legacy multi-factor authentication (MFA) systems and access individuals’ personal details with ease. Combined with the fact that people tend to reuse passwords across multiple accounts – which gives hackers the ability to breach multiple accounts with a single login – it becomes abundantly clear that passwords as an authentication method are flawed and extremely insecure in countless ways.
Surprisingly, there remains a lack of awareness regarding best practices for authentication: according to the same Yubico survey, 39 percent of individuals believe a username and password is the most secure form of authentication, while 37 percent consider mobile SMS one-time passcodes (OTPs) the most secure authentication method. While any form of MFA is superior to relying solely on a password, it’s important to recognize that not all MFA methods offer the same level of security. Traditional MFA techniques, including SMS-based OTPs and mobile authenticator applications, have significant vulnerabilities, with cyber criminals displaying an ability to easily circumvent these through phishing attacks.
As individuals and organizations become increasingly aware of the cyber risks associated with passwords and legacy MFA, enterprises have started to transition away from outdated authentication methods and move towards stronger, more cyber resilient technologies, in the form of phishing-resistant, passwordless solutions like passkeys.
A passwordless future with passkeys
Understanding the risks that passwords bring, organizations and individuals around the world are looking for a solution that provides improved security and a better user experience. Passkeys have taken the world by storm as the de facto authentication solution across apps and websites to replace passwords – helping both individuals and enterprises achieve this easily. Passkeys seamlessly authenticate users by using cryptographic security “keys” stored on their computer or device. They are considered a superior alternative to passwords since users are not required to recall or manually enter long sequences of characters that can be forgotten, stolen or intercepted.
As passwordless-enabled FIDO credentials, passkeys deliver phishing resistance and accelerate the move away from problematic, easily breached passwords. They are used to log into applications and services efficiently and safely, improving both productivity and online security. For example, passkeys require proof of possession as well as the user's physical presence during login, which effectively safeguards them from interception or theft by remote cyber criminals.
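To make that flow concrete, here is a minimal, illustrative sketch of how a website might register a passkey in the browser using the standard WebAuthn API. The relying-party name, domain, and user handle below are placeholder values, and in a real deployment the challenge would come from the service's server rather than being generated locally.

```typescript
// Illustrative passkey registration in the browser via the WebAuthn API.
// "Example Corp", "example.com" and the user handle are placeholders; in a real
// deployment the challenge and user info come from the relying party's server.
async function registerPasskey(): Promise<Credential | null> {
  const options: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { name: "Example Corp", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },   // ES256
      { type: "public-key", alg: -257 }, // RS256
    ],
    authenticatorSelection: {
      residentKey: "required",      // a discoverable credential, i.e. a passkey
      userVerification: "required", // biometric or PIN gesture proves the user is present
    },
  };
  // The private key is generated on, and never leaves, the authenticator; only the
  // public key and a signed attestation are returned for the server to store.
  return navigator.credentials.create({ publicKey: options });
}
```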
Beyond enhanced security, passkeys also significantly improve usability. They come in two forms: the credential can be stored in the cloud (a synced passkey) or on a device such as a hardware security key (a device-bound passkey). In either case, it is exchanged effortlessly at login via a swipe, press, tap, or biometric gesture.
From a security perspective, passkey login makes it far more challenging for malicious actors to exploit credentials and gain unauthorized access because it uses public key cryptography: the private key never leaves the user's device, and only a signed challenge is sent to the service. Passkeys can also be conveniently and securely stored on hardware security keys, which offers a higher level of security by preventing the passkey from being copied or shared across the cloud and other devices. However, each passkey option brings different benefits – and it’s important to understand which type is right for your situation and threat model.
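The login side looks much the same, again as a rough sketch rather than a production implementation: the browser asks the authenticator to sign a fresh challenge, and the service verifies that signature with the public key it stored at registration. The challenge and rpId here are placeholders.

```typescript
// Illustrative passkey login via the WebAuthn API. The challenge would normally
// be fetched from the relying party's server; a placeholder is used here.
async function loginWithPasskey(): Promise<Credential | null> {
  const serverChallenge = crypto.getRandomValues(new Uint8Array(32)); // placeholder
  const options: PublicKeyCredentialRequestOptions = {
    challenge: serverChallenge,
    rpId: "example.com",          // must match the rp.id used at registration
    userVerification: "required", // local biometric/PIN check before signing
    // allowCredentials can be omitted for discoverable passkeys
  };
  // The authenticator signs the challenge with the private key after a local
  // presence/verification gesture; the server verifies it with the stored public key.
  return navigator.credentials.get({ publicKey: options });
}
```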
The right passkey strategy for you
Firstly, it is important to establish the difference between synced and device-bound passkeys. Synced passkeys are primarily designed for broad consumer use rather than enterprises, and are stored in the cloud. This means the credentials can be copied across all the devices connected to a user’s account. For individuals and families sharing devices and accounts, this can be a big advantage. However, for organizations, this can create some concerning failure points and expose major flaws in key enterprise scenarios such as remote working and supply chain security.
Device-bound passkeys offer greater manageability and control of FIDO credentials than synced passkeys – making them better suited to security-savvy and high-risk individuals, as well as businesses. Device-bound means the credential lives on one particular piece of hardware and cannot be copied or shared. Despite the reduced flexibility of having to register each device separately, these solutions deliver higher security assurance, as the only way to authenticate is to possess a specific, previously registered device.
However, even within device-bound options there are important differences: some passkeys reside in general-purpose everyday devices like smartphones and laptops, while others live on dedicated hardware security keys, which are recognized as offering the highest security assurance. Hardware security keys give organizations reliable credential lifecycle management and the proof needed to validate the security of their credentials, helping enterprises achieve optimal security and remain compliant with the most stringent requirements across different industries.
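As a rough illustration of how that distinction surfaces in practice, a relying party can hint to the browser that it wants a device-bound passkey on an external security key rather than a synced, platform-managed one. The options below are one possible configuration, not a complete policy – real deployments still enforce requirements server-side, typically via attestation checks.

```typescript
// One possible way to steer registration toward a device-bound passkey on an
// external hardware security key (an assumed configuration, not a complete policy).
const deviceBoundSelection: AuthenticatorSelectionCriteria = {
  authenticatorAttachment: "cross-platform", // a roaming security key, not the phone/laptop itself
  residentKey: "required",                   // discoverable credential stored on the key
  userVerification: "required",              // PIN or biometric check on the key
};
// This object would be passed as authenticatorSelection in the creation options
// shown earlier. A synced, consumer-style passkey would instead typically use
// authenticatorAttachment: "platform", letting the OS credential manager sync it.
```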
In cybersecurity, finding a balance between accessibility and security is imperative – and it is no different when considering passkeys. Enterprises should opt for a passkey solution that provides security and convenience in equal measure. The solution ought to enhance the security of online accounts and sensitive data, as well as protect users and the wider organization against phishing and unauthorized access, while at the same time allowing employees to take advantage of a seamless login experience.
As we navigate the ever-evolving cybersecurity landscape, the integration of passwordless authentication, particularly through the widespread implementation of passkeys, will prove to be instrumental in protecting our digital identities and securing the systems and services that are integral to our daily lives.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Microsoft and its manufacturing partners reportedly dropped prices for Copilot+ PCs substantially at the tail-end of last year, but the cuts didn’t stimulate sales, according to an analyst firm – which could be a worry indeed for the future.
The Register highlighted the move, seemingly made in the final quarter of 2024, in which these AI laptops were reduced by 10%, going by the average distributor sale price in Europe.
Marie-Christine Pygott, a senior analyst at Context, told The Register: “While price reductions helped stimulate some interest in Q4, the value proposition of these devices [Copilot+ PCs] still needs to be communicated more effectively to users.”
On a more positive note, Pygott added: “As the concept matures, awareness grows, and a greater range of price points is being addressed, we expect adoption rates to increase in 2025.”
The analyst tells us that more broadly, PC sales in Europe (for desktop computers and tablets, as well as laptops) witnessed some solid growth in the final quarter of last year, and sales for December were up 7% year-on-year, in fact.
During the quarter, AI-capable laptops saw their adoption rate grow to 32%, up from 22% in Q3 – but despite that upward movement, it fell short of some forecasts, which had pegged the figure at 40%.
However, the definition of an AI-capable laptop is any device with an NPU to pep up AI workloads, no matter how strong that NPU is. Copilot+ PCs are a separate category within that, calling for a powerful NPU of at least 40 TOPS, and in that subdivision, growth was much weaker – it went from 3% to only 5%, according to Context.
Pygott told The Register that the leading notebooks in the world of AI PCs were Apple MacBooks (in Europe and the UK), but Lenovo and HP were making strong headway for market share (based on distributor sell-through data, we should note, not retailer sales).
Meanwhile, Microsoft’s Surface devices are in pole position within the Copilot+ PC subcategory, unsurprisingly, given how good these laptops are (the latest Surface Laptop is not just a great Copilot+ device, but also our best laptop overall).
Analysis: Pricing and perception
We must be cautious about reading too much into a single set of analyst figures, but the sales picture presented here does look rather weak. What to do, then, for Microsoft and its big Copilot+ PC project?
As Pygott points out, there are two obvious problems. Firstly, these devices were too costly at launch, and secondly, people don’t really understand what the AI in an ‘AI PC’ adds to the whole experience (with good reason, and we’ll come back to that).
As Pygott observes: “These [Copilot+ PCs] are currently in the premium price range, but their value add is not always clear to users. We believe this will change as it becomes clearer to users what these PCs can do, and how the way they use a PC will change with AI, but it will take some time.”
As for the issue of cost, Pygott notes that a “greater range of price points is being addressed,” referring to the progress made in bringing in cheaper Snapdragon X chips. This will facilitate the release of more affordable Arm-based Copilot+ PCs (and Arm CPUs still power the majority of these devices).
First off, the Snapdragon X Plus 8-core processor arrived in September 2024, ushering in more affordable Copilot+ PCs around the $800 level. Then the new vanilla Snapdragon X was revealed at CES 2025, promising to bring the cost of these AI laptops down to around the $600 mark.
These more affordable Arm-based notebooks may pare back the Snapdragon CPU, but crucially they don’t touch the powerful integrated NPU – it’s the same as in higher-tier Snapdragon chips – so they still fully qualify as Copilot+ PCs, just in genuinely affordable territory (come Black Friday or the like, we might see $500 price tags). That should go a long way toward stoking sales, which, after all, have been predicted to really take off this year and through the rest of the decade.
The other sticking point – getting folks to realize the benefits of an AI-focused laptop – is a trickier proposition, involving Microsoft bolstering the AI tricks infused in Windows 11, and particularly the Copilot+ PC exclusive features such as Recall.
That’ll be the first order of the day – getting Recall out of testing and working well enough that the feature sheds its controversial reputation (if indeed that is possible at this stage of the game). But it feels like a tall order for Microsoft to have its suite of AI capabilities make a real impact on public perception, at least in 2025.
The danger is that if it takes a lot longer for that to happen, the Copilot+ PC project is going to be saddled with a sense of confusion and pointlessness around these devices, which won’t do the brand any favors.
On a more positive note, it’s not like good things aren’t being done with this category of devices – they certainly are. As noted, Microsoft’s latest Surface devices seriously impressed us here at TechRadar, and a Copilot+ PC from Asus was one of the most promising laptops we saw at CES 2025 recently. But while these might be great machines in many respects, the idea of where AI fits, and why it’s such a key aspect – when it isn’t really, not yet – remains the thorny issue.
Marc Benioff, the CEO of top CRM software firm Salesforce, has clapped back at his Microsoft counterpart Satya Nadella after the latter suggested software-as-a-service companies like Salesforce could go bust in the wake of the AI agent boom.
Speaking on The Logan Bartlett Show, Benioff claimed of Microsoft’s AI agent offerings that “customers don’t look at them and don’t take them seriously”.
“I’ve spoken to these customers,” he went on, “they barely use it, and that’s only if they don’t already have a ChatGPT license or something like that in front of them.”
Salesforce’s Agentforce 2.0 platform
Benioff pointed out that Salesforce has its own “agentic platform” in production, while adding that Microsoft “[isn’t] even making the AI themselves” – referring to its $10 billion investment in OpenAI, the company behind ChatGPT.
Nadella made his remarks on the Bg2 podcast in December 2024, albeit without referring to Salesforce by name.
Salesforce itself launched Agentforce 2.0, an AI agent creation platform, in December 2024 – a clear effort to keep pace with the AI trend – so it’s not entirely clear why Benioff has Microsoft in his sights, given that Microsoft doesn’t seem to have Salesforce on its own radar.
Benioff does have form for taking aim at Microsoft’s Copilot AI, mind you. At Dreamforce 2024, he compared Copilot to Microsoft’s erstwhile mascot Clippy, and has maintained that comparison in tweets.
Back in a missive from October 2024, he wrote “Copilot’s a flop because Microsoft lacks the data, metadata, and enterprise security models to create real corporate intelligence.”
Via IT Pro