Windows 11 has a new preview out, and it does some useful – albeit long-awaited – work to speed up pulling files out of ZIPs within File Explorer, plus there are some handy bug fixes here – and a minor feature that’s been ditched.
All this is happening in Windows 11 preview build 27818 (which is in the Canary channel, the earliest external test build).
As mentioned, one of the more notable changes means you’ll be able to extract files from ZIPs, particularly large ZIP archives, at a quicker pace in File Explorer.
A ZIP is a collection of files that have been lumped together and compressed so they take up less space on your drive, and unzipping such a file is the process whereby you copy those files out of the ZIP.
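(Incidentally, if you'd rather script the process than click through File Explorer, Python's built-in zipfile module does the same job in a couple of lines – the file names below are just placeholders.)

```python
from zipfile import ZipFile

# Extract every file in the archive into a folder (paths are examples)
with ZipFile("photos.zip") as archive:
    archive.extractall("photos")
```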
File Explorer – which is the name for the app in Windows 11 that allows you to view your folders and files – has a built-in ability to deal with such ZIP files, and Microsoft has made this work faster.
Microsoft explains in the blog post for this preview build: “Did some more work to improve the performance of extracting zipped files in File Explorer, particularly where you’re unzipping a large number of small files.”
It’s worth noting that this is a performance boost that only applies to File Explorer’s integrated unzipping powers, and not to third-party compression tools such as WinRAR or 7-Zip (though, in case you missed it, Windows 11 can now open their archive formats natively).
Elsewhere in build 27818, Microsoft has fixed some glitches with the interface – including one in File Explorer, where the home page fails to load and just shows some floating text that says ‘Name’ (nasty) – and a problem where the remote desktop could freeze up.
There’s also a cure for a bug that could cause some games to fail to launch after they’ve been updated (due to a DirectX error), and some other smoothing over of general wonkiness like this.
Finally, Microsoft informs us that it has deprecated a minor feature here. The suggested actions that popped up when you copied a phone number (or a future date) in Windows 11 have been disabled, so these suggestions are now on borrowed time.
(Image credit: Future / Jeremy Laird)

Analysis: Curing sluggishness rather than ushering in super-zippy performance

Windows Latest noticed the change to improve ZIP performance in File Explorer with this preview, and tested the build, observing that speeds did indeed seem to be up to 10% faster with larger, file-packed ZIPs.
Clearly, that’s good news – and it’s great to see Microsoft’s assertion backed up by the tech site – but at the same time, this is more about fixing poor performance levels, rather than providing super-snappy unzipping.
Complaints about File Explorer’s unzipping capabilities being woefully slow in Windows 11 date back some time, particularly in scenarios where loads of small files are involved – so really, this is work Microsoft needs to carry out rather than any kind of bonus. If Windows Latest’s testing is on the money, a 10% speed boost (at best) may not be enough to placate these complainers, either, but I guess Microsoft is going to continue to fine-tune this aspect of File Explorer.
There are plenty of other issues to iron out with File Explorer too, as I’ve discussed recently – there are a fair few complaints about its overall performance being lackluster in Windows 11, so this is a much broader problem than mere ZIP files.
Furthermore, Microsoft breaking File Explorer for some folks with last month’s February update doubtless didn’t help any negative perceptions around this central element of the Windows 11 interface.
Philips Hue bulbs and lamps are some of the best smart lights around, and they're already pretty easy to set up – but a new app update has made things even easier, letting you add several lights to a room at once.
Once you've installed app version 5.38, which is available now for Android and iOS, you'll be able to simply scan the QR codes on several Hue devices to add them to the app together, rather than doing them one at a time.
That should be handy if you've splurged on a new set of smart bulbs in the Amazon spring sale, and will reduce headaches if you move house and need to set everything up again.
The editor of Hueblog.com has already experimented by adding a dimmer switch to their (no doubt extensive) setup, and reports that it works perfectly.
You can now use QR codes to add lights to a room in the Philips Hue app, plus other devices like dimmer switches (Image credit: Signify)

If the device you want to add doesn't have a QR code, you can bypass the new option by tapping the 'No QR code' button, and the app will find it for you the old-fashioned way, then allow you to assign it to a room.
Still no AI

This is a helpful addition to the Philips Hue app, but we're still waiting for the major software update that will add the generative AI assistant that Signify (the company behind Philips Hue) promised back in January.
According to Signify, the assistant will be able to create "personalized lighting scenes based on mood, occasion or style," and will let you use natural language to describe what you want rather than using a photo as a starting point or picking shades from a color wheel.
The company hasn't announced when the new tool will arrive, but it should be available before the end of the year – hopefully in time to let you describe your perfect festive lighting, and have all your fixtures adapt automatically. I'm dreaming of a bright Christmas.
A few days ago, Apple analyst Jeff Pu claimed in a research note that Apple’s A20 chip – which will come to the iPhone 18 lineup – would offer a disappointing performance increase over past chips. Now, Pu has just reversed course on this idea.
In the original report, Pu claimed that the A20 chip will be made with a 3-nanometer process dubbed N3P. While this is expected to bring improvements to performance and efficiency, they’re only likely to be modest changes compared to the iPhone 17’s A19 chip, which is also likely to be made using a 3nm process.
That was odd because it clashed with another report from Pu’s employer GF Securities, which outlined that Apple would use a 2nm process in the A20.
After being contacted by MacRumors, Pu has updated the report to clarify that the A20 could actually be made using a 2nm process. If correct, this would likely mean much more significant performance increases, and could make the iPhone 18 a tempting prospect if you’re thinking of upgrading your device.
Protecting your iPhone screen

(Image credit: Future | Alex Walker-Todd)

There’s more good news for iPhone fans in the form of a fresh patent uncovered by Patently Apple. Here, Apple describes a new technique that would strengthen the iPhone’s front surface with a mixture of glass and other components.
In the patent, Apple explains that combining several different materials can result in a front iPhone screen that's resistant to scratches, can cut down reflections, and can prevent the screen from becoming burnished over time.
This is done by taking the front glass and applying a hard coating that's resistant to scratches and burnishing. Below that, an 'interference layer' made up of several compounds can be included, which helps to cut down on reflections when you look at the screen. The idea is to give your iPhone a range of different protections without making the display too thick or heavy.
It’s an interesting idea, but we might have to wait a little while until we see it. Apple only filed the patent in September 2024, so it’s very unlikely that this tech has found its way into the iPhone 16 range. Whether it will arrive in the iPhone 17 is anyone’s guess, but with six months to go until Apple reveals its next iPhones, we’ll be keeping our eyes peeled.
Apple Intelligence continues to dominate headlines for everything but its AI capabilities, as Apple now faces a lawsuit for false advertising over its AI-powered Siri.
The lawsuit, which Axios originally reported, claims Apple has falsely advertised its Apple Intelligence software that launched alongside the iPhone 16 lineup of smartphones.
The lawsuit claims that Apple has misinformed customers by creating "a clear and reasonable consumer expectation that these transformative features would be available upon the iPhone's release."
Now, six months after the launch of the iPhone 16 and iPhone 16 Pro, some of the Apple Intelligence features showcased in promotional campaigns have been delayed, with no expected release schedule.
Most notably, the lawsuit highlights an ad starring The Last of Us actor Bella Ramsey, in which Ramsey showcased Siri's AI capabilities – including personal context and on-screen awareness – to help them schedule appointments. That ad, which had been up since September, has now been removed from YouTube following the announcement of Siri's delay.
Filed in San Jose, California, by Clarkson Law Firm, which has previously sued Google and OpenAI, the lawsuit targets Apple's iPhone features that haven't shipped yet and not the capabilities of Apple Intelligence features like Genmoji that have.
You can read the full lawsuit online, but the key argument reads, "Contrary to Defendant's claims of advanced AI capabilities, the Products offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance. Worse yet, Defendant promoted its Products based on these overstated AI capabilities, leading consumers to believe they were purchasing a device with features that did not exist or were materially misrepresented."
We'll have to wait and see if anything comes of this legal battle, but considering Apple has only delayed Siri's upgrade, we could see the AI improvements launch before anything comes to pass.
Apple Intelligence's redemption arc

Just yesterday, reports of a Siri leadership shakeup started to surface. And, with exec Mike Rockwell expected to be named as the person to oversee the launch of Siri's AI upgrade, there's reason to be optimistic.
Rockwell is known for bringing Apple Vision Pro to market, and his appointment signals a real effort from the company to overhaul the current Siri approach so that consumers finally get the capabilities promised.
If Rockwell's direction can get Siri back on track, then Apple Intelligence as a whole could still be a success. After all, once the dust settles, if Apple has a capable AI offering in its smartphones, we'll all quickly forget about the lawsuits and the bad press.
That's not to say we shouldn't hold Apple accountable for advertising features that are still not available on a device six months after launch, but if any company deserves a chance at redemption it's the Cupertino-based firm.
Six years on from its initial reveal, System Shock 2: 25th Anniversary Remaster is finally releasing for consoles and PC later this year, following the first System Shock's remaster in 2023.
After a name change (it was originally called System Shock 2: Enhanced Edition) and six years of careful but challenging development, the highly anticipated remaster is finally coming to PS5, PS4, Xbox Series X|S, and PC on June 26, 2025.
A lengthy PlayStation Blog post, written by Nightdive Studios communications manager Morgan Shaver, goes into detail on why the remaster has taken so long to develop. In summary, it's a combination of incomplete source code and developer Nightdive's penchant for attention to detail.
Nightdive's Alex Lima chimes in here, saying that "extensive reverse engineering" was required to have System Shock 2 playable on modern hardware.
“The game engine that System Shock 2 uses is large and complicated,” adds Nightdive's Lexi Mayfield. “It was originally designed for PCs from the late 1990s with a mouse and keyboard and was only used for three games. As a result, porting the game to PlayStation was a long and arduous process, from both a coding and interface perspective.”
(Image credit: Nightdive Studios)

For System Shock 2: 25th Anniversary Remaster, players can expect improved visuals as well as support for advanced shaders and much higher refresh rates, leading to much better presentation and performance overall.
Originally a PC exclusive, the game has also received controller support for the first time ever now that it's coming to consoles. Actions like leaning around corners and quick-swapping items, weapons, and psi powers have been "streamlined" for controllers. A new quickbar and context menu should also mean players spend less time fiddling around in their busy inventories.
Personally, I’m a huge fan of the original System Shock 2. I love almost everything about it, from its terrifying mutated human enemies and horrific atmosphere to an incredible soundtrack that bounces between moody horror and fast-paced, pulse-pounding techno.
The star of the show is undoubtedly SHODAN, a rogue AI that serves as System Shock 2's primary villain. SHODAN is delightfully evil, her warped speech patterns constantly flitting between creepy and silly without ever going overboard in either department. She's so good at both taunting and mocking the player, making for a constantly entertaining and intimidating threat.
Oppo has just teased a first look at its new Watch X2 Mini, and I'm hopeful that it could be our first glimpse of the new smaller version of the OnePlus Watch 3.
Just a few days ago, Oppo CEO Qiao Jiadong took to the Chinese social media platform Weibo to confirm that the company has three new products on the way, including a "small-size full smartwatch" (translated).
Now, Oppo has officially teased the Oppo Watch X2 Mini from the same account. It reveals a gold colorway, possibly a re-designed digital crown, and chamfered edges.
While you might not be interested in the new Oppo Watch X2 Mini, you might be curious to hear that the Oppo Watch X2 is actually just a rebadge of the OnePlus Watch 3. The two smartwatch makers are owned by the same parent company.
As such, this could be our first glimpse of the new smaller version of the OnePlus Watch 3, which is the best Android smartwatch for battery life in 2025.
OnePlus Watch 3's smaller version: What we know

(Image credit: Future)

If so, the smaller OnePlus Watch 3 might well share this same design – but the teaser doesn't tell us much else about the watch.
As reported by NotebookCheck, the X2 Mini is likely to feature a 42mm case, which would suggest that the smaller OnePlus Watch 3 will be the same size. That would make perfect sense given that the larger model is 46mm, making 42mm the likely complementary size.
Naturally, both watches will have a smaller display and less battery life than the larger X2/OnePlus Watch 3. However, Oppo and OnePlus have cracked an excellent dual-processor system that gives the watch industry-leading battery life in this category, and this should still feature – and a smaller touchscreen will also draw less power.
Size aside, OnePlus also says it's working on LTE support beyond China for the OnePlus Watch 3, so stay tuned for that too before the end of the year.
The UK Government has released its guidelines on protecting technical systems from future quantum computers.
The National Cyber Security Centre has set a timeline with key dates for UK industry and government agencies to follow. First, by 2028, all organizations should have a defined set of migration goals and an initial plan, and should have carried out a ‘full discovery exercise’ to assess their infrastructure and determine what must be updated to post-quantum cryptography.
By 2031, organizations should have carried out the highest-priority migration activities and refined their plan into a thorough roadmap for completing the change. Finally, by 2035, migration should be complete for all systems, services, and products.
Large-scale threats

The UK Government labelled the move a ‘mass technology change that will take a number of years’ - but why is the migration needed?
The Government outlines that the threat to cryptography from future ‘large-scale, fault-tolerant quantum computers’ is now well understood, and that technical systems will need to evolve to reflect this.
“Quantum computers will be able to efficiently solve the hard mathematical problems that asymmetric public key cryptography (PKC) relies on to protect our networks today,“ the guidelines confirm.
“The primary mitigation to the risk this poses is to migrate to post-quantum cryptography (PQC); cryptography based on mathematical problems that quantum computers cannot solve efficiently.”
The report warns that the total financial cost of PQC migration could be ‘significant’, so organizations should budget accordingly, including for “preparatory activities” as well as the migration itself.
For SMEs, the PQC migration should be more straightforward and seamless, as services will typically be updated by vendors, but in the case of specialised software, PQC-compatible replacements or upgrades should be identified and deployed in line with the above timetable.
Google's Loss of Pulse Detection recently began rolling out in the United States after receiving clearance from health authorities in February. Now, Google has revealed exactly how it created the life-saving feature, and just what makes it so important.
The Google Pixel Watch 3 is the best Android smartwatch on the market owing to its excellent performance, stylish design, and decent battery life. At launch, it was unveiled with Loss of Pulse Detection, which can alert emergency services and bystanders if the wearer suffers a cardiac arrest.
Now, Google has revealed some of the behind-the-scenes work that went into the feature in pursuit of solving "a seemingly intractable public health challenge."
As Google notes, out-of-hospital cardiac arrest (OHCA) events cause millions of deaths worldwide, with one-half to three-quarters of events going unwitnessed.
Per Google, "About half of unwitnessed OHCA victims receive no resuscitation because they are found too late and attempted resuscitation is determined to be futile."
With OHCA, successful resuscitation is all about time. The chain of survival, which ends with advanced care, starts with access to emergency services or bystanders who can deliver CPR or administer treatment with a defibrillator. Either way, timely awareness that someone is experiencing OHCA is crucial.
Witnessed events have a 7.7x higher survival rate than unwitnessed events, which is why Loss of Pulse Detection is so vital.
How Google made Loss of Pulse Detection

(Image credit: Google)

Google says that its Loss of Pulse Detection relies on a multimodal algorithm using photoplethysmography (PPG), a process that uses light to measure changes in blood volume, along with accelerometer data from onboard sensors.
There are multiple "gates" that must be passed because the events are so rare, and false positives are less than ideal.
Before an alert goes out, there's data from the PPG sensor (normally used to monitor your heart rate), a machine learning algorithm to check the transition from pulsatile (having a pulse) to pulseless, and further sensor checks – using additional LEDs and photodiodes – to confirm that not even a weak pulse is present.
It's all a very technical way of saying your Pixel Watch needs to be absolutely sure your heart has stopped beating before triggering an alert, rather than alerting because a user has taken off their watch, for instance.
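As a rough illustration only – the stages and thresholds below are invented for the sake of the sketch, not Google's actual algorithm – that multi-gate logic might look something like this:

```python
def should_alert(ppg: list[float], accel: list[float]) -> bool:
    """Return True only if every gate agrees the pulse is gone."""
    half = len(ppg) // 2
    swing = lambda xs: max(xs) - min(xs)

    # Gate 1: the heart-rate pipeline sees a near-flat PPG signal now
    if swing(ppg[half:]) > 0.05:
        return False
    # Gate 2: stand-in for the ML check – the signal was clearly
    # pulsatile earlier in the window, i.e. a genuine transition happened
    if swing(ppg[:half]) < 0.5:
        return False
    # Gate 3: motion data rules out the watch simply being taken off
    if all(abs(a) < 0.01 for a in accel):
        return False
    return True  # all gates passed: trigger the alert countdown

# Example: a pulse that flatlines halfway through the window
ppg = [0.5 * (i % 2) for i in range(50)] + [0.0] * 50
print(should_alert(ppg, accel=[0.02, -0.03, 0.01]))  # True
```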
Google says that during development it partnered with cardiac electrophysiologists and their patients, including patients with scheduled testing of implanted cardiac defibrillators, where Google measured planned temporary heart stoppages.
Google says that the other vital aspect of developing the feature, aside from accuracy, is responsibility. It further detailed the efforts it has made to minimize false positives, and also notes that skin tone is not a barrier to the efficacy of the feature.
Google also says the design accounts for maximizing battery life, using data from sensors that would already be activated to trigger further checks, rather than running a background monitoring system all the time.
The full blog is a fascinating insight and well worth the read. As noted, Loss of Pulse Detection is now available in the US, joining the territories where it's already live, including the UK and 14 other European countries.
Finding the email you need in a crowded Gmail inbox should finally be a lot easier thanks to another AI-powered update.
The email provider is rolling out a new, smarter search function that will list results in terms of relevance, rather than just in chronological order.
Factoring in details such as recency, most-clicked emails, and frequent contacts, the company says this means the emails you’re actually looking for should be far more likely to be at the top of your search results.
Gmail "Most relevant" (Image credit: Google)“With this update, the emails you’re looking for are far more likely to be at the top of your search results — saving you valuable time and helping you find important information more easily,” the company wrote in a blog post announcing the news.
Users will still be able to search for the most recent results, with Gmail adding a toggle to switch between "Most relevant" and "Most recent" results, based on how they like to search.
Google says the move can help reduce search time, pinpointing the information people are looking for more quickly and accurately.
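To picture how such a ranking might work – and this is purely a toy illustration, with invented weights and field names rather than anything Google has published – a relevance score blending those signals could look like this:

```python
import time

def relevance_score(email: dict, frequent_contacts: set) -> float:
    # Toy blend of the signals Google mentions: recency, how often
    # the email is opened, and whether the sender is a frequent contact
    days_old = (time.time() - email["received_ts"]) / 86_400
    recency = 1 / (1 + days_old)
    clicks = min(email["open_count"], 10) / 10
    contact = 1.0 if email["sender"] in frequent_contacts else 0.0
    return 0.3 * recency + 0.4 * clicks + 0.3 * contact

# "Most relevant" sorts by score; "Most recent" would sort by timestamp
inbox = [
    {"sender": "boss@work.com", "open_count": 8,
     "received_ts": time.time() - 30 * 86_400},   # a month old, well-read
    {"sender": "noreply@deals.example", "open_count": 0,
     "received_ts": time.time() - 3_600},         # an hour old, ignored
]
inbox.sort(key=lambda e: relevance_score(e, {"boss@work.com"}), reverse=True)
print([e["sender"] for e in inbox])  # boss@work.com ranks first
```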
The feature is rolling out now to personal Google accounts across the world, and will be available on the Gmail app for Android and iOS, with business users also set to receive the feature soon.
Super-slim phones are great from an aesthetic standpoint, but the shrunken form factor can lead to performance constraints – so it's reassuring to see a new Samsung Galaxy S25 Edge benchmark leak that suggests it's going to be up to speed with the rest of the series.
As per the benchmark (via SamMobile), the Galaxy S25 Edge looks set to come with the 8-core Qualcomm Snapdragon 8 Elite inside. That's the variant with the higher clock speed that we've seen in other Samsung devices, including the Samsung Galaxy S25.
What's more, the single-core score of 2,969 and the multi-core score of 9,486 suggest that performance is going to be on a par with the Galaxy S25 Ultra – the phone we described as "the ultimate Android" in our Samsung Galaxy S25 Ultra review.
There are some caveats here: this could well be a Galaxy S25 Edge running unfinished software, for example. But it's good to see these early results pointing in the right direction, ahead of the phone's expected launch in April.
Staying cool

The other phones in the series, including the Galaxy S25, are already available to buy (Image credit: Philip Berne / Future)

The chipset fitted inside a phone doesn't always tell the whole story of its performance potential: to prevent that chipset from overheating and crashing the phone, it'll be accompanied by various safety measures and cooling features.
How effective that cooling is – and thus how fast the chipset can run – depends on multiple factors, but generally speaking, the more space available, the better the cooling (which is why desktop PCs can be much more powerful than laptops).
While the Samsung Galaxy S25 Edge is rumored to be a mere 5.84mm thick, front to back, it's also expected to be taller and wider than the standard Galaxy S25. That could well mean Samsung can fit in a more advanced cooling system.
All should be revealed within the next few weeks, when Samsung unveils the phone in full – after giving us brief glimpses of what it looks like. It seems very likely Apple will follow with its own super-slim phone later in the year, the iPhone 17 Air.
There’s both good and bad Pixel news today, but the good news will affect more people than the bad, so let’s start there.
Reddit users are finding that Pixel phones with Tensor chipsets (meaning everything from the Google Pixel 6 onwards) are achieving much higher GPU scores on Geekbench 6 than they did at launch. This is widely being attributed to the Android 16 beta, but Android Authority reports seeing similarly upgraded performance on Android 15.
So chances are you don’t need to grab a beta version of Android to see improvements, but rather that recent stable software updates have massively boosted GPU performance.
The exact boost varies depending on model, but Android Authority claims its Pixel 6a unit saw a nearly 23% GPU performance increase, while elsewhere there are reports of a 62% improvement for the Pixel 7a, a 31% improvement for the Pixel 8, and even a 32% improvement for the recent Google Pixel 9.
Android Authority speculates that Google achieved this through including newer GPU drivers in recent Android updates, as while all recent Pixels use an Arm Mali GPU, they don’t always ship with the latest available GPU driver version.
How much impact these performance improvements will have in the real world remains to be seen, but they’re nice to see, and could help extend the lifespan of older Pixel models.
No Satellite SOS for the Pixel 9a

The Google Pixel 9a (Image credit: Google)

Now for the bad news, and this relates specifically to the new Google Pixel 9a, which we’ve learned doesn’t support Satellite SOS. Google confirmed as much to Android Authority, and this is a feature found on other Google Pixel 9 models which allows you to contact emergency services in areas without Wi-Fi or cell signal.
So it’s a potentially life-saving tool, and while Google didn’t say why it’s absent here, it’s likely because the Pixel 9a uses an older Exynos Modem 5300, rather than the 5400 used by the rest of the Pixel 9 series.
While this is a feature that you’ll hopefully never need to use, it would be reassuring to have, and this isn’t the only omission in the Pixel 9a, as we also recently learned that it lacks several AI tools offered by the rest of the Pixel 9 line.
In fact, this phone has had a slightly troubled launch, with not just these omissions emerging, but also a delay in sales of the phone while Google investigates a “component quality issue”.
Still, the silver lining is that the delay has given these omissions time to come to light before anyone can buy the phone, so you can weigh them up before committing to a Google Pixel 9a. Certainly, we’d wait until we’ve had a chance to put it through a full review before purchasing one.
While the Hitman: World of Assassination trilogy has been a stand-out success across PlayStation, Xbox, and PC, its transition from flat gaming to VR has been a tough ride. Exploring IO Interactive’s sandbox levels in virtual reality has its charm, but graphics woes, lacking motion controls, and general bugginess have negatively impacted prior releases across PSVR, Steam, and recently the Meta Quest 3.
But fourth time’s the charm, so to speak, as with the latest Hitman: World of Assassination release on PSVR 2, IOI has seemingly cracked the VR formula – at least based on my experience in a roughly hour-and-a-half-long demo.
I’ve been looking for an excuse to get back into Hitman, and this is it – it really could be the next great PSVR 2 game.
Getting to grips

My day started off smoothly. I was whisked away to Sapienza – a fictional Italian coastal town introduced in Hitman (2016) – with the goal of eliminating Silvio Caruso, Francesca De Santis, and the biological weapon they’ve created, with me taking out the human targets with an exploding golf ball and a sniper rifle respectively.
One shot, one kill (Image credit: IOI)

Here I got to grips with developer IO Interactive’s ultimate take on what a VR Hitman should be. As expected you’re thrust into a first-person view, with this PSVR 2 interpretation featuring a suite of motion controls to replace the usual button prompts. Reloading a firearm is an involved process – you have to manually eject the empty cartridge, grab and insert a new one, then cock the pistol to be able to fire again – and to break into areas you aren’t allowed to enter you’ll need to pull out your lockpick, the stolen key card you swiped, or your trusty crowbar to physically crack open the barrier in your way.
The only time you don't have to manually do Agent 47’s job for him is when you’re blending in or climbing.
IO Interactive told me that while some players say they want to stay in first-person the whole time and perform 47’s blending-in techniques for themselves, that doesn’t work for the gameplay as a whole.
Blending in is a time for players to catch their breath, take stock of their situation, and watch out for people hunting them or those who could rumble their disguise – a third-person view facilitates this in a way a first-person one can’t, and from playing the game I can see what they mean. Climbing in third-person also has the added benefit that it’s less nauseating for many than the first-person alternative.
However, the team has found other ways to use VR to make this PSVR 2 version more than a simple port, such as with dual-wielding. The obvious application is that you can go into a mission with dual-wielded guns blazing and forgo Agent 47’s ‘Silent Assassin’ reputation, but others include new takedown techniques.
With a blunt object in each hand, you can knock out two guards simultaneously, making it easier to sneak around undetected and complete a mission with that important Silent Assassin, Suit Only rating.
Much better than Hitman's PSVR and Steam attempts (Image credit: IOI)

A whole world to explore

Speaking of Hitman: World of Assassination as 'just' a PSVR 2 port, this is the (almost) full-on World of Assassination package, but in VR.
Some missions have been cut (at least for now) such as the bonus Patient Zero campaign (I say for now, as the IOI team gave me the impression it wanted to bring these levels to VR eventually), Hitman 2’s sniper missions, and some of the more elaborate Elusive Targets – like the recent The Splitter mission featuring Jean-Claude Van Damme.
Otherwise, everything’s there. In Sapienza, I was delighted to see the Kraken easter egg was still present – even if I didn’t quite have the time or aim to solve it – and in Berlin, I took on The Drop Elusive Target mission starring real-world DJ Dimitri Vegas.
I also noticed that everything ran fairly smoothly. Even on Berlin’s crowded dance floor and at Miami’s packed car race event I didn’t experience any noticeable stuttering. Graphics-wise it's a step down from what you’ll be used to on the PS5’s flat game; however, it didn’t look bad by any stretch – though I’ll want to test the game out further before passing a final judgment on the performance.
And returning to the Hitman PSVR 2 experience is something I can’t wait for. I love the Hitman trilogy and this PSVR 2 version has truly done it justice in a way I’m sure many players feel the other VR attempts haven’t quite managed to.
The full VR game releases on March 27 as a $9.99 / £8.99 add-on to the original PS5 game (which you’ll also need to own), and I’ll be one of the first in line.
AI continues to spark debate and demonstrate remarkable value for businesses and consumers. As with many emerging technologies, the spotlight often falls on large-scale, infrastructure-heavy, and power-hungry applications. However, as the use of AI grows, large data centers are putting mounting pressure on the grid, and intensive applications are becoming much less sustainable and affordable.
As a result, there is soaring demand for nimbler, product-centric AI tools. Edge AI is leading this new trend by bringing data processing closer to – or embedding it within – devices, on the tiny edge, meaning that basic inference tasks can be performed locally. By not sending raw data off to the cloud via data centers, we are seeing significant security improvements in industrial and consumer applications of AI, which also enhances the performance and efficiency of devices at a fraction of the cost of the cloud.
But, as with any new opportunity, there are fresh challenges. Product developers must now consider how to build the right infrastructure and develop the required expertise to capitalize on the potential of the edge.
The importance of local inference

Taking a step back, we can see that AI largely encompasses two fields: machine learning, where systems learn from data, and neural network computation, a specific model designed to think like a human brain. These are complementary ways to program machines, training them to do a task by feeding them relevant data to ensure outputs are accurate and reliable. These workloads are typically carried out at a huge scale, with comprehensive data center installations to make them function.
For smaller industrial use-cases and consumer industrial applications – whether this is a smart toaster in your kitchen or an autonomous robot on a factory floor – it is not economically (or environmentally) feasible to push the required data and analysis for AI inference to the cloud.
Instead, with edge AI presenting the opportunity of local inference, ultra-low latency, and smaller transmission loads, we can realize massive improvements to cost and power efficiency, while building new AI applications. We are already seeing edge AI contribute towards significant productivity improvements for smart buildings, asset tracking, and industrial applications. For example, industrial sensors can be accelerated with edge AI hardware for quicker fault detection, as well as predictive maintenance capabilities, to know when a device’s condition will change before a fault occurs.
Taking this further, the next generation of hardware products designed for edge AI will introduce specific adaptations for AI sub-systems to be part of the security architecture from the start. This is one area in which embedding the edge AI capability within systems comes to the fore.
Embedding intelligence into the product

The next stage in the evolution of embedded systems is introducing edge AI into the device architecture and, from there, the “tiny edge”. This refers to tiny, resource-constrained devices that process AI and ML models directly on the edge, including microcontrollers, low-power processors, and embedded sensors, enabling real-time data processing with minimal power consumption and low latency.
A new class of software and hardware is now emerging on the tiny edge, making it possible to execute AI operations in the device. By embedding this capability within the architecture from the start, the ‘signal’ itself becomes the data, rather than resources being wasted transforming it. For example, tiny edge sensors can gather data from the environment a device is in, leveraging an in-chip engine to produce a result. In the case of solar farms, sensors within a solar panel can detect nearby arc faults across power management systems. When extreme voltages occur, the system can automatically trigger a shutdown failsafe and avoid an electrical fire, as the sketch below illustrates.
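To make that concrete – purely as an illustrative sketch, with a made-up threshold rather than anything from real power-management firmware – the in-device failsafe logic amounts to a tight loop like this:

```python
VOLT_LIMIT = 600.0  # illustrative trip threshold, not a real spec

def check_panel(voltage_samples: list) -> str:
    # Runs in-chip, so there's no cloud round-trip between
    # detecting an extreme voltage and acting on it
    for v in voltage_samples:
        if abs(v) > VOLT_LIMIT:
            return "SHUTDOWN"  # trip the failsafe before a fire starts
    return "OK"

print(check_panel([413.2, 414.0, 412.8]))  # OK
print(check_panel([414.1, 987.5]))         # SHUTDOWN
```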
With applications like arc fault detection as well as battery management or on-device face or object recognition driving growth in this space, we will see the market for microcontrollers capable of supporting AI on the tiny edge grow at a CAGR of over 100% (according to ABI Research). To realize this potential, more work is needed to bridge the gap between the processing capabilities of cloud-based AI and targeted applications from devices that are capable of working on, or being, the edge.
However, like with any new technology: where there is a demand, there is a way.
We are already seeing meaningful R&D results focused on this challenge, and tiny AI is starting to become embedded in all types of different systems – in some cases, consumers are already taking this technology for granted, literally talking to devices without thinking ‘this is AI’.
Building edge AI infrastructure

To capitalize on this emerging opportunity, product developers must first consider the quality and type of data that goes into edge devices, as this determines the level of processing, and the software and hardware required to deal with the workload. This is the key difference between typical edge AI, operating on more powerful hardware capable of handling complex algorithms and datasets, and tiny AI, which focuses on running lightweight models that can perform basic inference tasks.
For example, audio and visual information - especially visual - are extremely complex and need a deep neural architecture to analyze the data. On the other hand, it is less demanding to process data from vibrations or electric current measurements recorded over time, so developers can utilize tiny AI algorithms to do this within a resource-constrained or ultra-low power, low latency device.
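As a minimal sketch of what that looks like in practice – the feature and threshold here are invented for illustration – a tiny AI vibration check can be a handful of arithmetic operations per window:

```python
def vibration_anomaly(samples: list, rms_limit: float = 0.8) -> bool:
    # Tiny-edge style inference: two simple features per window
    # (mean and RMS), no deep network required; threshold is illustrative
    n = len(samples)
    mean = sum(samples) / n
    rms = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5
    return rms > rms_limit

# A healthy motor hums gently; a failing bearing shakes much harder
print(vibration_anomaly([0.1, -0.2, 0.15, -0.1] * 25))  # False
print(vibration_anomaly([1.2, -1.1, 1.3, -0.9] * 25))   # True
```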
It is important to consider the class of device and microcontroller unit needed in the development stage, based on the specific computational power requirements. In many cases, less is more, and running a lighter, tiny AI model improves the power efficiency and battery life of a device. With that said, whether dealing with text or audio-visual information, developers must still undertake pre-processing, feeding large quantities of sample data into learning algorithms to train the AI.
What’s on the horizon?

The development of devices that embed AI into the tiny edge is still in its infancy, meaning there’s scope for businesses to experiment, be creative, and figure out exactly what their success factors are. We are at the beginning of a massive wave, which will accelerate digitalization in every aspect of our lives.
The use-cases are vast, from intelligent public infrastructure, such as the sensors required for smart, connected cities, to remote patient monitoring through non-invasive wearables in healthcare. Users are able to improve their lives, and ease daily tasks, without even realizing that AI is the key factor.
The demand is there, with edge AI and tiny AI already transforming product development, redefining what’s classified as a great piece of technology, enabling more personalized predictive features, security, and contextual awareness. In just a few years, this type of AI is going to become vital to the everyday utility of most technologies – without it, developers will quickly see their innovations become obsolete.
This is an important step forward, but it doesn’t come without challenges. Overcoming these challenges can only happen through a broader ecosystem of development tools and software resources. It’s just a matter of time. The tiny edge is the lynchpin through which society will unlock far greater control and usefulness of its data and environment, leading to a smarter, AI-driven future.
There’s a magic in the very name of Wonka! And now Netflix has given the green light to The Golden Ticket, a reality competition series inspired by Willy Wonka, the legendary candy maker who first appeared in Roald Dahl’s 1964 children’s classic, Charlie and the Chocolate Factory.
As a precocious California child of the ‘70s – back when the last Ice Age melted – I always found that Dahl’s imaginative tale of an eccentric confectioner and a poor European lad who finds a Golden Ticket to tour Willy Wonka’s mysterious headquarters was the first title I grabbed off my bedroom bookshelf whenever I stayed home from school.
Years later, I was enthralled to see my all-time favorite book adapted into a Hollywood feature film, Willy Wonka and the Chocolate Factory, starring Gene Wilder as the kooky cacao wizard. I even enjoyed (to a lesser degree) director Tim Burton’s Johnny Depp-led version from 2005 and Wonka, the most recent musical iteration by filmmaker Paul King starring Timothée Chalamet.
Netflix’s appetizing new TV series based on Roald Dahl's story, aptly called The Golden Ticket, is best described as mixing the ingredients of logistical tactics and fun interactive gameplay: sugar-crazed contestants use their Golden Tickets to gain entry to a “retro-futuristic” candy-making factory, then negotiate a number of chocolatey challenges to complete the various objectives.
“We are thrilled to bring the magic of The Chocolate Factory to life like never before,” said Jeff Gaspin, VP of unscripted material at Netflix. “This one-of-a-kind reality competition blends adventure, strategy and social dynamics, creating an experience that is as captivating as it is unpredictable. For the first time, a lucky few won’t just have to imagine the experience — they’ll get to step inside the factory and live it.”
With the profusion of popular cooking shows, obstacle challenges, and food-based reality programs scattered across the streaming landscape these days, a chocolate-coated project centered on Roald Dahl’s masterpiece seems destined for instant success. Eureka Productions (The Mole, Dating Around, TwentySomethings Austin) will serve as the series producers.
Netflix purchased the rights to Roald Dahl’s entire catalog of intellectual property back in 2021, which includes books such as The Fantastic Mr. Fox, Matilda, James and the Giant Peach, The BFG, The Witches, and the sequel to Charlie and the Chocolate Factory titled Charlie and the Great Glass Elevator. This endeavor will be Netflix’s first dip into the world of Willy Wonka.
Although there’s no official release date yet for Netflix’s The Golden Ticket, we’ll be sure to deliver the full scoop on any upcoming details and developments for what is sure to be one of the best Netflix shows.
Hot on the heels of its announcement that NotebookLM's Audio Overviews are now available in Gemini, Google has revealed that a new feature, Mind Maps, is now available as an option in NotebookLM.
Mind maps are great at helping you understand the big picture of a subject in an easy-to-understand visual way. They consist of a series of nodes, usually representing ideas, with lines that represent connections between them.
The beauty of mind maps is that they lay out the relationships between ideas visually, making connections obvious that might otherwise stay buried in a wall of text.
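Under the hood, a mind map is essentially a graph. Here's a minimal sketch of that structure (the topics are invented for illustration), where expanding a branch is just a walk outward from the chosen node:

```python
# Ideas as nodes, relationships as edges – the essence of a mind map
mind_map = {
    "Photosynthesis": ["Light reactions", "Calvin cycle"],
    "Light reactions": ["Chlorophyll", "ATP production"],
    "Calvin cycle": ["Carbon fixation"],
}

def print_branch(topic: str, depth: int = 0) -> None:
    # Expanding a branch is a depth-first walk from the chosen node
    print("  " * depth + topic)
    for child in mind_map.get(topic, []):
        print_branch(child, depth + 1)

print_branch("Photosynthesis")
```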
Another string to its bow

NotebookLM is Google’s AI research helper. You feed it articles, documents, and even YouTube videos, and it produces a notebook summarizing the main points of the subject; you can then chat with it and ask questions, as you would a normal AI chatbot.
Its best feature is that you can also generate an Audio Overview in NotebookLM, which is an AI-generated podcast between two AI hosts that discusses the subject, so you can listen to it and absorb the key points while doing something else at the same time. The Audio Overview can sound so natural it’s hard to believe you’re not listening to two humans talking!
Now Mind Maps have been added as another string to NotebookLM’s bow for helping you absorb information. They work in either the standard free version of NotebookLM or the paid-for Plus version.
(Image credit: Google/Apple)

Better understanding

To generate a Mind Map you simply open one of your notebooks in NotebookLM, or create a new one, then click on the new Mind Map chip in the Chat window (the central panel).
Once you’re viewing your Mind Map (it appears in the Studio panel once it has been generated) you can zoom in or out, expand and collapse branches, and click on nodes to ask questions about specific topics.
NotebookLM is shaping up to be an essential tool for students who have a lot of information to digest and don’t necessarily read very quickly. You can get the AI to do a lot of the legwork for you, then present you with the key bits of information – and Mind Maps are just another way for NotebookLM to help you on your path to better understanding.
For years, the tech industry has been obsessed with the next ‘big thing’. Blockchain, AI voice assistants, the metaverse – each promised to revolutionize marketing as we know it. Venture capitalists have poured billions into these dazzling innovations, and marketers have scrambled to understand and implement the latest shiny platforms.
But as we look ahead, the real marketing revolution won't be found in the dazzling and disruptive. It will be found in the dependable, the practical, and yes, dare I say it, the ‘boring’.
Because in 2025, the biggest marketing trend won’t be about chasing fleeting novelty; it will be about mastering the essential, often overlooked, technologies that underpin true digital transformation. We're talking about the unglamorous heroes of modern marketing: composability, next-generation Content Management Systems (CMS), robust data strategies, sophisticated automation and laser-focused customer segmentation.
Why this shift? Because while the hype cycle churns, a critical mass of businesses are realizing that flashy gimmicks don’t deliver sustainable growth. They’re discovering that true market impact, the kind that drives revenue and builds lasting customer relationships, comes from a solid foundation of functional, reliable and adaptable technology.
The 93% Problem: Missing the Mark in Market Impact

Here’s a stark reality: 93% of companies are still missing out on significant opportunities to accelerate their market impact. Why? Because they’re hampered by outdated, inflexible tech stacks. They're trying to run a 2025 marketing strategy on systems built for 2015, or even 2005. Our recent survey of IT and marketing leaders found that traditional monolithic CMS platforms are falling short for teams today - nearly all expressed frustration with limited integration options with other services and tools in their tech stack, with 38% stating they constantly need a better integration experience.
A legacy CMS is like trying to run optimized code on outdated hardware - every move is held back by its limitations. Developers are wrestling with monolithic platforms, spending countless hours on maintenance and workarounds instead of building innovative customer experiences. Last year alone, businesses shelled out nearly $3 million on tech upgrades, yet IT teams dedicated up to 25 hours per week simply maintaining these legacy systems. That’s a staggering waste of resources and potential.
These platforms simply cannot keep pace with modern demands. They struggle to integrate with AI tools, cross-platform solutions, or seamlessly connect with diverse digital channels. This isn't just an IT problem; it's a fundamental marketing bottleneck. It prevents marketers from executing agile campaigns, personalizing customer journeys, and delivering truly omnichannel experiences - the very things that drive competitive advantage in today’s market.
Startup Speed, Enterprise Security: The Achievable Ideal

Big enterprises often envy the speed and agility of startups. But the myth that large organizations must be slow and cumbersome is just that - a myth. By embracing ‘boring’ but brilliant technologies, even the largest enterprises can move with startup-like speed without compromising security or stability. The key lies in composable architectures and modern, headless CMS platforms. These are the ‘boring’ building blocks that unlock incredible agility.
Imagine a tech stack where you can pick and choose the best tools for each aspect of your marketing - from e-commerce and analytics to personalization and CRM -- and connect them seamlessly via APIs. This “composable architecture,” increasingly favored by 70% of retail decision-makers (up from just 44% two years ago), allows enterprises to break free from rigid, all-in-one legacy systems.
Speed-to-Market: The New Competitive Battlefield

In today's hyper-competitive landscape, speed-to-market isn't just an advantage; it's becoming the ultimate competitive edge. Customers expect instant gratification, personalized experiences and seamless interactions across every channel. If you can’t deliver at speed, you’ll be left behind.
Headless CMS is a game-changer in this regard. It fundamentally changes how websites, apps and digital experiences are built and managed. By decoupling the content repository from the presentation layer, headless CMS empowers developers to work with their preferred front-end frameworks like React, Vue, Laravel and Svelte. It’s about reducing friction in the stack, which means more time engineering features and less time managing legacy dependencies or maintenance tasks. This translates directly to faster campaign launches, quicker website updates and the ability to rapidly adapt to changing market demands.
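As a minimal sketch of that decoupling – the endpoint and field names below are hypothetical, not any particular vendor's API – the front end simply pulls structured content over HTTP and decides for itself how to present it:

```python
import json
from urllib.request import urlopen

# Hypothetical headless CMS endpoint serving structured content as JSON
CMS_URL = "https://cms.example.com/api/content/homepage-hero"

def fetch_content(url: str) -> dict:
    # The CMS only stores and serves content; presentation is up to us
    with urlopen(url) as response:
        return json.load(response)

def render_html(entry: dict) -> str:
    # One of many possible presentation layers (web, app, digital sign...)
    return f"<h1>{entry['title']}</h1><p>{entry['body']}</p>"

if __name__ == "__main__":
    print(render_html(fetch_content(CMS_URL)))
```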
Ending the Dev Bottleneck Marketers Despise

For too long, marketers have been locked in a frustrating dance with development teams. Every content update, every new campaign landing page, and every adjustment to the website often required developer intervention, creating bottlenecks and slowing everything down.
Headless CMS breaks down these silos. Its integrated workflows empower non-technical marketing teams to make real-time updates without disrupting site performance or requiring constant developer support – developers stay in control of the site’s performance. It’s like giving marketers their own sandbox to work in… so no one is stepping on each other’s toes.
This newfound autonomy is transformative. Marketers can be more agile, responsive, and creative, launching campaigns faster and optimizing content in real-time based on performance data. Developers, freed from constant content update requests, can focus on high-impact projects that truly drive innovation.
Omnichannel, Finally Realized

The promise of omnichannel has been around for years, but for many businesses it remains elusive. Legacy systems simply weren’t built for today’s fragmented digital landscape. With a headless CMS, though, omnichannel delivery becomes a tangible reality.
Headless architecture allows you to create content in a central hub and seamlessly distribute it across websites, apps, digital displays, IoT devices and even voice-activated interfaces like Alexa. This centralized content management ensures consistency, optimized delivery and effortless scalability as new channels emerge.
The Smart Money is on ‘Boring’

The allure of the cutting-edge is undeniable. But in 2025, the savviest marketers will recognize that true competitive advantage isn't about chasing the flashiest new toy. It’s about building a robust, flexible and future-proof marketing infrastructure based on ‘boring’ but brilliant technologies like headless CMS and composable architecture.
It’s time to shift our focus from novelty to necessity, from hype to helpfulness. The future of marketing isn’t about the shiniest object; it’s about the strongest foundation. And in 2025, that foundation will be built on the power of ‘boring’ tech.
Full spoilers follow for Severance season 2 episode 10.
After 10 weeks of teases, never-ending fan theories, and shocking in-universe moments, Severance season 2 has ended.
And, boy oh boy, what a finale it was. The hit Apple TV show has been the talk of the town since its return in mid-January, and I don't expect discussions to end any time soon, especially after this episode.
As the dust settles, let's dissect what happened in Severance's season 2 finale, titled 'Cold Harbor'. Major spoilers immediately follow, so make sure you've watched it before reading on.
Is Gemma rescued by Mark in the Severance season 2 finale?

Devon and Harmony convince Mark S to help them rescue Gemma (Image credit: Apple TV+)

In short: yes! But there's a large lead-up to that. So, strap in, folks, because this explanation is going to be a long one.
Picking up immediately after Severance season 2 episode 9, Mark's innie – Mark S – informs Harmony Cobel that he hasn't completed Project Cold Harbor (more on this later) yet.
Good, that means Gemma, i.e., the wife of Mark's outie (aka Mark Scout) is still alive and can be rescued! Not so fast, Mark S says. While Harmony, Mark Scout, and Mark's sister Devon have a plan to save Gemma, Mark S is extremely hesitant to play his part. If Gemma escapes and the world learns of Lumon's nefarious operations, the evil biotech corporation will be shut down.
That includes the severed floor, so the innies, Mark S included, won't exist anymore. Understandably, Mark S doesn't want to give up his life, and the lives of his fellow innies, for a woman his outie loves but he himself doesn't.
Handheld camcorders still come in *ahem* handy in the Severance universe! (Image credit: Apple TV+)

To convince Mark S that this is the right thing to do, Devon gives him a handheld camcorder, tells him to watch the video recording, and then record his response to it.
The message has been left by Mark Scout, who apologizes for using severance as an escape from Gemma's apparent death before asking Mark S to help break her out. The interaction is cordial at first, but things get intense as each persona becomes suspicious of the other's true motives.
Mark Scout makes the grave error of bringing up Helly R and calling her Heleny – a similar slip-up to the one Helena made when she got Gemma's name wrong in Severance season 2 episode 6. That, alongside other issues, eventually leads Mark S to tell Mark Scout that he doesn't trust him and won't help save Gemma.
Mark S becomes increasingly emotional as he speaks to his outie throughout episode 10's first act (Image credit: Apple TV+)

Cue Harmony Cobel's intervention. Dropping some deep Severance lore by way of story exposition, she tells Mark S that the numbers he and the rest of the Macro Data Refinement (MDR) team have been working on are actually Gemma.
According to Harmony, they are a doorway into Gemma's mind, with each cluster correlating to one of the Four Tempers that Lumon founder Kier Eagan is said to have tamed decades earlier.
Anyway, Project Cold Harbor is the 25th and final file Mark S must finish to help Lumon complete their severance-based work. Oh, and once that's done, Lumon will dispose of the innies.
This particular revelation infuriates Mark S who, before storming out of the birthing cabin where he, Harmony, and Devon have been hiding, informs the pair that, unless Mark Scout returns to Lumon, he won't see Gemma again.
Well, that's new... (Image credit: Apple TV+)

Next thing we know, Mark S is standing in the elevator leading to Lumon's severed floor. Exiting the elevator, Mark S is greeted by a giant, creepy mural of him completing Cold Harbor as numerous Lumon employees past and present, plus the Eagan family, watch on.
But Mark S doesn't have time to take it all in as he's quickly reunited with Helly. The duo head to MDR and, after reading a card from their manager Seth Milchick – held by a creepy-looking statue of Kier Eagan – Mark S restarts his work on Cold Harbor. He's incredibly reluctant to do so because he knows it'll mean the death of the innies but, after a tear-jerking heart-to-heart with Helly, he begins to process the final few number clusters.
Meanwhile, Dylan G is revealed to be alive. Remember, he tried to resign from Lumon after Gretchen, his outie's wife, rejected his marriage proposal. Episode 9 ended on something of a cliffhanger for Dylan G, with the implication that, by resigning, he was effectively committing suicide.
As I predicted, though, Dylan G can't simply walk away. Milchick meets him at the severed floor's elevator with his outie's response to his resignation. The pair clearly don't get on, but the heartfelt reply from Dylan's outie reignites the fire within Dylan G, who heads back to MDR to find his friends.
I've got to admit, I laughed out loud at this entire sequence (Image credit: Apple TV+)

But, back to Mark S and Helly R. As Mark completes the final file, Helly hands him Irving's note – i.e., the one that contains directions to the black hallway that leads to the testing floor, where Gemma is being held captive.
Before their rescue mission can begin, though, an amusingly surreal sequence distracts them. Milchick converses with the previously mentioned Kier statue (it's actually an animatronic) and then leads Lumon's Choreography and Merriment (C&M) division in a dance number that puts season 1 episode 7's Music Dance Experience to shame. Hey, the Apple TV+ show was originally billed as a dark comedy, so absurd, laugh-out-loud moments like this are bound to happen intermittently!
To help Mark S slip away, Helly creates a distraction of her own by stealing Milchick's walkie-talkie and heading to the bathroom. Milchick follows her, but she manages to evade him and temporarily lock him in there. He almost escapes, but Helly, with help from Dylan G and a vending machine, blocks the door, trapping him inside.
With no one to stop him, all Mark S has to do is follow Irving's directions, take the testing floor elevator, find Gemma, and help her escape, right? Well...
Who dies in Severance season 2 episode 10?
So long, Mister Drummond (Image credit: Apple TV+)
Don't worry, Mark S, Gemma, and the rest of MDR survive. In fact, only one person dies in 'Cold Harbor' and that's Mister Drummond.
After Mark S completes Project Cold Harbor, Drummond takes the testing floor elevator up to the severed floor and opens a secret room hidden behind the wall opposite the elevator hallway. There, he meets Mammalian Nurturables division chief Lorne and one of her goats (I'll explain what the goats are used for later).
"Where do you think you're going, Mark S?" (Image credit: Apple TV+)Drummond and Lorne are interrupted by Mark S' attempts to break down the door to the testing floor elevator. His key card doesn't grant him access to it, you see.
Drummond investigates and, realizing what Mark S is trying to do, tries to stop him. A scuffle ensues and, after overpowering Mark S, Drummond starts to strangle him.
Don't do it, Lorne! (Image credit: Apple TV+)
He would've succeeded, too, if not for Lorne's intervention. Remember, Mark S and Helly left a lasting impression on Lorne and her crew in Severance season 2 episode 3, so it feels right that she'd save Mark S' life.
A fight breaks out between Drummond and Lorne, which Lorne wins. Before she can kill Drummond, though, Mark S stops her because he can use Drummond as leverage to get to Gemma.
Hold on, Mark, aren't you going to cross a severance barr- oh, never mind... (Image credit: Apple TV+)
Or so Mark S thinks. After using Drummond's all-access key card on the hallway door, he holds Drummond at gunpoint as a hostage. As they travel down in the elevator, Mark S starts to explain what's going to happen when they reach the testing floor.
However, when the elevator passes through its severance barrier, Mark's outie re-emerges, causing him to pull the trigger, and the bullet *ahem* severs an artery in Drummond's neck. Long story short: Mark accidentally kills him.
Does Mark choose Gemma or Helly in the Severance season 2 finale?
Gemma and Mark, reunited at last – but not for long (Image credit: Apple TV+)
Above all others, this is the moment that's going to divide Severance fans. Why? Because Mark chooses Helly over Gemma.
Stepping out of the testing floor elevator, Mark's bloodied outie makes his way to the Cold Harbor testing room where he's confronted by Cecily, the nurse who keeps tabs on Gemma. Cecily refuses to open the door and runs away when Mark brandishes his gun.
How's he going to get in now? With Drummond's blood, of course. Like Cecily's, Drummond's blood signature grants access to all the test rooms, and it just so happens that Mark is covered in it. So, he uses some blood from his tie to gain entry to the Cold Harbor test room.
Oh, Mark... (Image credit: Apple TV+)
There, he finds a terrified Gemma, who doesn't recognize Mark because her 25th (!) innie persona is active in this room. Despite Doctor Mauer's attempts to convince her to stay away from him, Mark manages to get her to leave with him. As she passes through the test room's severance barrier, Gemma's outie re-emerges. Cue a soul-stirring reunion between the couple that won't leave a dry eye in the house.
Severance doesn't let us have happy things, though, so you know a gut-punch event is moments away.
Traveling back to the severed floor, Mark and Gemma pass through the testing floor's severance barrier, which causes Mark S and Ms Casey to take over. Mark S is still aware of the plan to get Gemma out, though, so he leads her to the severed floor's exit stairwell. Its doorway contains another severance barrier that, once crossed, will allow their outies to emerge again and finally be rid of Lumon.
Fly, you fools! (Image credit: Apple TV+)
And here comes the kicker. Mark S convinces Ms Casey to leave, which she does, turning back into Gemma as she crosses the barrier. However, when Gemma tries to get Mark S to follow her, he hesitates. That hesitation gives Helly R – who has left Dylan and the C&M team to deal with Milchick – time to find Mark S and Gemma at the exit stairwell.
Helly's arrival reminds Mark S that he wants to be with her – after all, they love each other. As a confused and devastated Gemma watches, Mark S and Helly flee back through Lumon's labyrinthine halls to locations unknown.
What is Project Cold Harbor, exactly?
What does Gemma find in the Cold Harbor test room? (Image credit: Apple TV+)
One of the sci-fi mystery-thriller's most talked-about mysteries, Project Cold Harbor is finally explained in this episode.
After Gemma passes through the test room's severance barrier, Doctor Mauer tells her to walk into the room. There, she's greeted by a crib that's either the exact same one that Mark and Gemma were going to use for their baby or a near-identical replica of it. Either way, considering Gemma lost her and Mark's baby years earlier, it's disturbing and sinister that Lumon would make Gemma go through that again, albeit via her latest innie persona.
Well, that's not disturbing at all, Lumon (Image credit: Apple TV+)
Mauer instructs Gemma's newest innie to take the crib apart with a screwdriver. She duly obliges and, as she does so, Mauer watches her progress from the secret monitoring room we first saw in season 2 episode 7. Jame Eagan, Lumon's current CEO, also watches from his own monitoring room.
Later, Mauer asks if Gemma knows who she is, to which she replies, "I don't know." That confirms Lumon's latest test is a success, but Mark's arrival puts paid to the final stage of whatever evil scheme they'd concocted.
What are the goats used for in Severance's season 2 finale?
I swear if anything happens to this little fella, Lumon... (Image credit: Apple TV+)
Ever since they were first teased in season 1, fans have longed to learn more about what Lumon Industries is doing with its goats – and it's high time we found out. The last time we saw Lumon's goats was in season 2 episode 3 but, even then, we learned next to nothing about their purpose.
Well, we finally have an answer – and, after a new Apple teaser confirmed they'd return in Severance's season 2 finale, it turns out one of the five best Severance goat theories was actually right.
During episode 10's final act, we learn that Lumon's Mammalian Nurturables division has hand-reared the goats as sacrificial offerings. Indeed, Lorne says as much after she meets Drummond in the secret room I mentioned earlier.
Justice for Emile! (Image credit: Apple TV+)
Drummond asks Lorne if the goat is "full of verve and wiles", to which Lorne says it has the "most of its entire flock". Interestingly, verve and wiles are two of the word-based cards we saw in the Cobel household in Severance season 2 episode 8, which signals their importance to the Kier Eagan doctrine.
Anyway, animal sacrifices have played an integral role in many real-world religions, so it's no great surprise that they do so in Lumon's religion-like cult. What is a surprise, though, is the purpose the goats appear to serve in that sacrifice.
Drummond tells Lorne that the goat she's brought – it's called Emile, by the way – will be "entombed with a cherished woman whose spirit must be guided to Kier's door". The suggestion here is that, once the goat is sacrificed (read: shot in the head), it'll somehow guide Gemma's spirit to Kier when she's also bumped off following Project Cold Harbor's completion.
Thankfully, Emile is spared by Mark S' inadvertent intervention, and Lorne teams up with him to overpower Drummond. Will we ever see one of Lumon's goats get the chop? Hopefully not, but never say never.
Does Severance season 2's final episode have a mid-credits or post-credits scene?
"What? Are you waiting for an end credits scene to play?" (Image credit: Apple TV+)
End-credits stingers have become par for the course for many franchises, including the Marvel Cinematic Universe (MCU), as well as certain shows on the world's best streaming services, but not here.
Apple TV+ Originals tend to steer clear of teasing what's to come next season, and that's certainly true of the Severance season 2 finale. There's no mid- or post-credits sequence to stick around for.
You should sit through the end credits to appreciate just how many people worked on a series as compelling as this one. But, if you're only hanging around for an end credits stinger, you'll be sorely disappointed.
Has Apple announced Severance season 3 yet?
Waiting for that official season 3 announcement like... (Image credit: Apple TV+)
Not yet, but it's a case of when, not if, Apple will renew one of the best Apple TV+ shows for a third season.
For one, Severance has become more successful with each passing week. The three-year gap between seasons 1 and 2 was excruciating, but that allowed more people to check out its debut season and become obsessed with its numerous mysteries.
With more viewers jumping on the bandwagon after the release of its sophomore installment, Severance is now more popular than Ted Lasso. The number of breakout Apple TV+ shows pales in comparison to those of the tech giant's streaming rivals, including Netflix and Prime Video. So, if Apple really wants to prove it's a serial entertainment industry contender, greenlighting new seasons for hit shows like Severance should be a foregone conclusion.
Tell us when you'll be back, Milchick! (Image credit: Apple TV+)
For what it's worth, the series' next entry is in the works. In February, director/executive producer Ben Stiller confirmed season 3's scripts are currently being written.
Unfortunately, there's not as much good news on the filming start date front. Dichen Lachman, who plays Gemma/Ms. Casey, exclusively told me that she doesn't know when principal photography will begin on Severance season 3.
More details could be revealed at a post-season 2 finale event in LA this Saturday (March 22), though. Speaking on the New Heights podcast, Stiller said: "Hopefully we’ll be announcing what the plan is very soon". I'm keeping my fingers crossed that it's sooner rather than later!
How does 'Cold Harbor' set up Severance season 3's story?
What will become of Jame and Helly after this season's finale? (Image credit: Apple TV+)
We don't know, but there are plenty of unanswered questions that'll need, well, answering.
For starters: where have Mark S and Helly gone? They can't leave Lumon, as their outies will take over, and the company will be using all of its resources to track down the pair after the events of 'Cold Harbor'.
Then there's the wider fallout from what happened in this episode. Lumon will surely come down hard on Dylan G and C&M following their rebellion. Milchick will have something to say and do about that, too, but he'll also be in the firing line, as this all happened on his watch. And that's before we get onto how Lumon proceeds with the severance program and its malicious plans for the technology and those who signed up for the procedure.
Mark's problems are only just beginning (Image credit: Apple TV+)
That's just the tip of the iceberg. Will Gemma seek out Devon and Ricken, and try to enlist their help to rescue Mark? What's Harmony Cobel's endgame? What do the season 2 finale events mean for Jame in a leadership capacity? Is Reghabi going to show up again and attempt to help Gemma reintegrate? When will we see Irving and Burt again? And what is the Grand Agendum that Kier Eagan's animatronic mentions after Mark finishes Project Cold Harbor?
Those questions and many other mysteries need to be solved before Severance's end credits roll for the final time. For what it's worth, creator Dan Erickson "has a sense" of what the show's final scene will be and how many seasons would be "ideal" to reach it. Whether Severance has one or more seasons of story left to tell, then, is perhaps the biggest mystery of all.
YouTube is seemingly pulling out all the stops to remain at the top of the streaming game for both video content and YouTube Music. While it answered our requests for adjustable video quality a few years ago, the platform has yet to offer the same control for audio.
However, this could be on the horizon for YouTube, as new hints point to a forthcoming feature that would allow you to control audio quality when watching videos.
Thanks to Android Authority, which spotted new strings in the YouTube beta app, there's fresh evidence hinting at YouTube's next big upgrade: the ability to adjust the audio quality of whatever video you're watching.
The setting could come in three options: Normal, YouTube's standard audio; High, an improved bitrate option; and Auto, which would pick a quality level automatically based on your internet speed. It seems too good to be true, doesn't it? Well, with YouTube, there's always a catch.
Video quality settings are free to adjust for all users, but audio control may only be available to YouTube Premium subscribers. (Image credit: YouTube)
According to Android Authority's findings, YouTube's audio quality feature will only be available to YouTube Premium subscribers and, even then, it may only apply to certain videos in the platform's endless library of content.
It’s hard to pinpoint when YouTube will launch this feature since it only exists as a few lines of coding at the moment, but if YouTube decides to proceed with it, it could be one of the platform’s most notable upgrades of the past few years.
It seems as though YouTube will do almost anything to get more people signed up for its YouTube Premium service, and these attempts to lure you in have been cropping up quite frequently. A few weeks ago, YouTube launched its cheaper YouTube Premium Lite tier in the US, packing ad-free content on ‘most videos’ but excluding offline or background video playback.
For as long as I can remember, adjustable video quality settings have been part of YouTube’s array of video enhancements, but they have had no effect on audio playback. The audio quality of YouTube videos has always depended on the uploader, so if the audio control rumors are true, it could do wonders to get more audiophiles to jump on the YouTube Premium bandwagon.
Giving eyesight to AI is becoming increasingly common as tools like ChatGPT, Microsoft Copilot, and Google Gemini find their way into camera-equipped devices such as smart glasses. Hugging Face has just dropped its own spin on the idea with a new iOS app called HuggingSnap that offers to look at the world through your iPhone’s camera and describe what it sees without ever connecting to the cloud.
Think of it like having a personal tour guide who knows how to keep their mouth shut. HuggingSnap runs entirely offline using Hugging Face’s in-house vision model, smolVLM2, to enable instant object recognition, scene descriptions, text reading, and general observations about your surroundings without any of your data being sent off into the internet void.
That offline capability makes HuggingSnap particularly useful in situations where connectivity is spotty. If you’re hiking in the wilderness, traveling abroad without reliable internet, or simply in one of those grocery store aisles where cell service mysteriously disappears, having that capability on your phone is a real boon. Plus, the app claims to be super efficient, meaning it won’t drain your battery the way cloud-based AI models can.
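For the technically curious: smolVLM2 is openly available on Hugging Face, so you can experiment with the same family of models on a desktop. Below is a minimal sketch of asking a smolVLM2 checkpoint to describe an image via the transformers library – the model ID, prompt wording, and example image URL are my assumptions based on the public model cards, this is not HuggingSnap's actual code, and unlike the app it fetches the sample image from the web rather than running offline.

```python
# A minimal sketch (not HuggingSnap's code) of prompting SmolVLM2 with
# Hugging Face's transformers library. Requires a recent transformers
# release; the checkpoint name below is an assumption from the model cards.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

MODEL_ID = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # assumed checkpoint name

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID).to(device)

# A chat-style message pairing an image with a text instruction.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg",
            },
            {"type": "text", "text": "Describe what you see in this scene."},
        ],
    }
]

# The processor turns the chat into model-ready tensors (pixels + tokens).
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])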
HuggingSnap looks at my world
I decided to give the app a whirl. First, I pointed it at my laptop screen while my browser was open on my TechRadar biography. The app did a solid job of transcribing the text and explaining what it saw. However, it drifted from reality when it reached the headlines and other details around my bio: HuggingSnap took references to new computer chips in a headline as an indicator of what's powering my laptop, and seemed to think some of the names in headlines were other people who use my laptop.
(Image credit: Hugging Snap Screenshot)
I then pointed my camera at my son's playpen full of toys I hadn't cleaned up yet. Again, the AI did a great job with the broad strokes in describing the play area and the toys inside. It got the colors and even the textures right when identifying stuffed toys versus blocks. It also fell down in some of the details. For instance, it called a bear a dog and seemed to think a stacking ring was a ball. Overall, I'd call HuggingSnap's AI great for describing a scene to a friend but not quite good enough for a police report.
(Image credit: Hugging Snap Screenshot)
See the future
HuggingSnap’s on-device approach stands out from your iPhone's built-in abilities. While the device can identify plants, copy text from images, and tell you whether that spider on your wall is the kind that should make you relocate, it almost always has to send some information to the cloud.
HuggingSnap is notable in a world where most apps want to track everything short of your blood type. That said, Apple is heavily investing in on-device AI for its future iPhones. But for now, if you want privacy with your AI vision, HuggingSnap might be perfect for you.
Following hot on the heels of the revived Renault 5 and its madcap R5 Turbo 3E big brother, Citroen is the latest brand to state that it plans to revive more of its history.
Speaking to Autocar, Citroen's chief executive, Thierry Koskas, said the company would draw on “one of the richest histories in the world” among automakers and that the 2CV was one of the most widely recognized cars it had produced.
While stopping short of confirming a release date, Koskas said that, in the future, Citroen needs more iconic models that will “surprise.”
Earlier this year, Autocar also revealed that design work had already begun on a retro-futuristic 2CV. At the same time, Citroen’s brand chief said that we can expect a new concept car to arrive later this year – but it won’t necessarily preview an electric Tin Snail, as the 2CV was affectionately known.
Old school is the new school of car design (Image credit: Renault)
It is no coincidence that several European carmakers are delving into the history books for EV inspiration.
Currently, the threat from cheaper and more technologically accomplished Chinese competition is causing concern that buyers will be tempted to opt for the best value options as the cost of living continues to rise.
“But buyers still want good design,” Renault’s design chief, Laurens van den Acker, told me at the recent Dacia Bigster launch. “Design and heritage, or having a story to tell,” he added.
It’s also no coincidence that the same European manufacturers are rebooting vehicles that were once regarded as practical, affordable people movers with the power to carry entire nations through hard times.
The Renault 5, for example, was born during the oil crisis of the 1970s, when people were crying out for a cheap and efficient set of wheels to use daily.
Similarly, the 2CV was designed to coax farmers away from horses, haul eggs over rough terrain, and generally act as the ultimate do-it-all vehicle. Arguably the world’s first SUV.
With combustion engine cars being phased out in many countries, customers are clamoring for similarly affordable and practical options in the EV space.