As businesses realized the potential of artificial intelligence (AI), the race began to incorporate machine learning operations (MLOps) into their commercial strategies. But integrating machine learning (ML) into the real world proved challenging, and the vast gap between development and deployment was made clear. In fact, research from Gartner tells us 85% of AI and ML projects fail to reach production.
In this piece, we’ll discuss the importance of blending DevOps best practices with MLOps, bridging the gap between traditional software development and ML to enhance an enterprise’s competitive edge and improve decision-making with data-driven insights. We’ll expose the challenges of separate DevOps and MLOps pipelines and outline a case for integration.
Challenges of Separate Pipelines

Traditionally, DevOps and MLOps teams operate with separate workflows, tools, and objectives. Unfortunately, this trend of maintaining distinct DevOps and MLOps pipelines leads to numerous inefficiencies and redundancies that negatively impact software delivery.
1. Inefficiencies in Workflow Integration

DevOps pipelines are designed to optimize the software development lifecycle (SDLC), focusing on continuous integration, continuous delivery (CI/CD), and operational reliability.
While there are certainly overlaps between the traditional SDLC and that of model development, MLOps pipelines involve unique stages like data preprocessing, model training, experimentation, and deployment, which require specialized tools and workflows. This distinct separation creates bottlenecks when integrating ML models into traditional software applications.
For example, data scientists may work on Jupyter notebooks, while software engineers use CI/CD tools like Jenkins or GitLab CI. Integrating ML models into the overall application often requires a manual and error-prone process, as models need to be converted, validated, and deployed in a manner that fits within the existing DevOps framework.
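One way to make that hand-off less manual and error-prone is to package the trained model with a machine-readable manifest that the existing CI/CD pipeline can verify before deployment. Below is a minimal sketch in Python; the manifest fields and model name are invented for illustration, not a standard format:

```python
import hashlib


def package_model(model_bytes: bytes, name: str, version: str) -> dict:
    """Wrap serialized model weights in a manifest so a CI/CD stage can
    validate and deploy the model like any other build artifact."""
    return {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(model_bytes).hexdigest(),
        "size_bytes": len(model_bytes),
    }


def validate_artifact(manifest: dict, model_bytes: bytes) -> bool:
    """The deployment stage re-checks the digest before promoting the
    model, mirroring the checksum gate used for ordinary binaries."""
    return manifest["sha256"] == hashlib.sha256(model_bytes).hexdigest()


weights = b"\x00\x01\x02"  # stand-in for real serialized weights
manifest = package_model(weights, "churn-model", "1.4.0")
print(validate_artifact(manifest, weights))         # intact artifact passes
print(validate_artifact(manifest, weights + b"x"))  # tampered artifact fails
```

A real pipeline would store the manifest in an artifact repository and run this check as a deployment gate, but the principle, verifying the model bytes against a recorded digest before promotion, is the same.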
2. Redundancies in Tooling and Resources

DevOps and MLOps have similar automation, versioning, and deployment goals, but they rely on separate tools and processes. DevOps commonly leverages tools such as Docker, Kubernetes, and Terraform, while MLOps may use ML-specific tools like MLflow, Kubeflow, and TensorFlow Serving.
This lack of unified tooling means teams often duplicate efforts to achieve the same outcomes.
For instance, versioning in DevOps is typically done using source control systems like Git, while MLOps may use additional versioning for datasets and models. This redundancy leads to unnecessary overhead in terms of infrastructure, management, and cost, as both teams need to maintain different systems for essentially similar purposes—version control, reproducibility, and tracking.
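One way to reduce that redundancy is to record a content fingerprint of the dataset in the same Git repository as the code, so a single source control system tracks both. A hedged sketch follows; the lockfile shape and dataset name are assumptions made for illustration:

```python
import hashlib
import json


def dataset_fingerprint(rows: list) -> str:
    """Serialize the dataset deterministically, then hash it, so the same
    data always yields the same version identifier regardless of dict
    key order."""
    canonical = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


train_rows = [{"user": 1, "churned": 0}, {"user": 2, "churned": 1}]

# A small lockfile like this can be committed next to the application
# code, giving code and data a single, auditable version history.
lockfile = {"dataset": "train-2024-10", "sha256": dataset_fingerprint(train_rows)}
print(json.dumps(lockfile, indent=2))
```

Tools like DVC formalize this pattern, but even a hand-rolled fingerprint removes the need for a second, parallel versioning system for reproducibility.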
3. Lack of Synergy Between Teams

The lack of integration between DevOps and MLOps pipelines also creates silos between engineering, data science, and operations teams. These silos result in poor communication, misaligned objectives, and delayed deployments. Data scientists may struggle to get their models production-ready due to the absence of consistent collaboration with software engineers and DevOps teams.
Moreover, because the ML models are not treated as standard software artifacts, they may bypass crucial steps of testing, security scanning, and quality assurance that are typical in a DevOps pipeline. This absence of consistency can lead to quality issues, unexpected model behavior in production, and a lack of trust between teams.
4. Deployment Challenges and Slower Iteration Cycles

The disjointed state of DevOps and MLOps also affects deployment speed and flexibility. In a traditional DevOps setting, CI/CD ensures frequent and reliable software updates. However, with ML, model deployment requires retraining, validation, and sometimes even re-architecting the integration. This mismatch results in slower iteration cycles, as each pipeline operates independently, with distinct sets of validation checks and approvals.
For instance, an engineering team might be ready to release a new feature, but if an updated ML model is needed, it might delay the release due to the separate MLOps workflow, which involves retraining and extensive testing. This leads to slower time-to-market for features that rely on machine learning components. Our State of the Union Report found organizations using our platform brought over 7 million new packages into their software supply chains in 2024, highlighting the scale and speed of development.
5. Difficulty in Maintaining Consistency and Traceability

Having separate DevOps and MLOps configurations makes it difficult to maintain a consistent approach to versioning, auditing, and traceability across the entire software system. In a typical DevOps pipeline, code changes are tracked and easily audited. In contrast, ML models have additional complexities like training data, hyperparameters, and experimentation, which often reside in separate systems with different logging mechanisms.
This lack of end-to-end traceability makes troubleshooting issues in production more complicated. For example, if a model behaves unexpectedly, tracking down whether the issue lies in the training data, model version, or a specific part of the codebase can become cumbersome without a unified pipeline.
The Case for Integration: Why Merge DevOps and MLOps?

As you can see, maintaining siloed DevOps and MLOps pipelines results in inefficiencies, redundancies, and a lack of collaboration between teams, leading to slower releases and inconsistent practices. Integrating these pipelines into a single, cohesive Software Supply Chain would help address these challenges by bringing consistency, reducing redundant work, and fostering better cross-team collaboration.
Shared End Goals of DevOps and MLOps

DevOps and MLOps share the same overarching goals: rapid delivery, automation, and reliability. Although their areas of focus differ—DevOps concentrates on traditional software development while MLOps focuses on machine learning workflows—their core objectives align in the following ways:
1. Rapid Delivery
2. Automation
3. Reliability
In traditional DevOps, the concept of treating all software components as artifacts, such as binaries, libraries, and configuration files, is well-established. These artifacts are versioned, tested, and promoted through different environments (e.g., staging, production) as part of a cohesive software supply chain. Applying the same approach to ML models can significantly streamline workflows and improve cross-functional collaboration. Here are four key benefits of treating ML models as artifacts:
1. Creates a Unified View of All Artifacts

Treating ML models as artifacts means integrating them into the same systems used for other software components, such as artifact repositories and CI/CD pipelines. This approach allows models to be versioned, tracked, and managed in the same way as code, binaries, and configurations. A unified view of all artifacts creates consistency, enhances traceability, and makes it easier to maintain control over the entire software supply chain.
For instance, versioning models alongside code means that when a new feature is released, the corresponding model version used for the feature is well-documented and reproducible. This reduces confusion, eliminates miscommunication, and allows teams to identify which versions of models and code work together seamlessly.
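In practice, this pairing can be as simple as a release manifest, committed alongside the code, that names the exact model version a release was built against. A minimal illustration follows; the service and version numbers are hypothetical:

```python
# A release manifest committed with the code, pairing a code version
# with the exact model version it was tested against.
RELEASE_MANIFEST = {
    "release": "2024.10.2",
    "code_version": "2.3.1",
    "model": {"name": "ranker", "version": "1.4.0"},
}


def versions_match(deployed_code: str, deployed_model: str,
                   manifest: dict = RELEASE_MANIFEST) -> bool:
    """A deploy gate can refuse promotion when the code and model
    versions in an environment drift apart from the tested pairing."""
    return (deployed_code == manifest["code_version"]
            and deployed_model == manifest["model"]["version"])


print(versions_match("2.3.1", "1.4.0"))  # tested pairing: allowed
print(versions_match("2.3.1", "1.3.9"))  # stale model: blocked
```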
2. Streamlines Workflow Automation

Integrating ML models into the larger software supply chain ensures that the automation benefits seen in DevOps extend to MLOps as well. By automating the processes of training, validating, and deploying models, ML artifacts can move through a series of automated steps—from data preprocessing to final deployment—similar to the CI/CD pipelines used in traditional software delivery.
This integration means that when software engineers push a code change that affects the ML model, the same CI/CD system can trigger retraining, validation, and deployment of the model. By leveraging the existing automation infrastructure, organizations can achieve end-to-end delivery that includes all components—software and models—without adding unnecessary manual steps.
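Conceptually, the trigger logic is a path-based rule: a commit that only touches application code runs the usual stages, while one that touches model or data files also schedules retraining and model validation before deployment. The sketch below is a simplification in Python, not a real CI syntax, and the directory names are assumptions:

```python
def plan_stages(changed_paths: list, base_stages=("build", "unit_test")) -> list:
    """Return the CI stages to run for a commit: code-only changes skip
    the ML stages, while changes under models/ or data/ add retraining
    and model validation before deploy."""
    stages = list(base_stages)
    if any(p.startswith(("models/", "data/")) for p in changed_paths):
        stages += ["retrain", "validate_model"]
    stages.append("deploy")
    return stages


print(plan_stages(["src/api/routes.py"]))
print(plan_stages(["models/ranker/train.py", "src/api/routes.py"]))
```

Real systems express the same idea declaratively, for example with path-filtered pipeline rules, but the effect is identical: one pipeline, with ML stages activated only when they are needed.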
3. Enhances Collaboration Between Teams

A major challenge of maintaining separate DevOps and MLOps pipelines is the lack of cohesion between data science, engineering, and DevOps teams. Treating ML models as artifacts within the larger software supply chain fosters greater collaboration by standardizing processes and using shared tooling. When everyone uses the same infrastructure, communication improves, as there is a common understanding of how components move through development, testing, and deployment.
For example, data scientists can focus on developing high-quality models without worrying about the nuances of deployment, as the integrated pipeline will automatically take care of packaging and releasing the model artifact. Engineers, on the other hand, can treat the model as a component of the broader application, version-controlled and tested just like other parts of the software. This shared perspective enables more efficient handoffs, reduces friction between teams, and ensures alignment on project goals.
4. Improves Compliance, Security, and Governance

When models are treated as standard artifacts in the software supply chain, they can undergo the same security checks, compliance reviews, and governance protocols as other software components. DevSecOps principles—embedding security into every part of the software lifecycle—can now be extended to ML models, ensuring that they are verified, tested, and deployed in compliance with organizational security policies.
This is particularly important as models become increasingly integral to business operations. By ensuring that models are scanned for vulnerabilities, validated for quality, and governed for compliance, organizations can mitigate risks associated with deploying AI/ML in production environments.
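Such a promotion gate can mirror the checks other artifacts already pass, with model-specific additions like an accuracy floor. The sketch below assumes a hypothetical scan-report format; the specific checks and thresholds would come from an organization's own policies:

```python
def model_gate(report: dict, accuracy_floor: float = 0.90):
    """Apply the same kind of promotion checks a binary would face,
    plus model-specific ones. Returns (passed, list_of_failed_checks)."""
    checks = {
        "no_known_vulnerabilities": report["vulnerabilities"] == 0,
        "meets_accuracy_floor": report["accuracy"] >= accuracy_floor,
        "artifact_signed": report["signed"],
    }
    failed = [name for name, ok in checks.items() if not ok]
    return len(failed) == 0, failed


ok, failed = model_gate({"vulnerabilities": 0, "accuracy": 0.93, "signed": True})
print(ok, failed)   # healthy artifact is promoted

ok, failed = model_gate({"vulnerabilities": 2, "accuracy": 0.88, "signed": True})
print(ok, failed)   # vulnerable, under-performing model is blocked
```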
Conclusion

Treating ML models as artifacts within the larger software supply chain transforms the traditional approach of separating DevOps and MLOps into a unified, cohesive process. This integration streamlines workflows by leveraging existing CI/CD pipelines for all artifacts, enhances collaboration by standardizing processes and infrastructure, and ensures that both code and models meet the same standards for quality, reliability, and security. As organizations race to deploy more software and models, we need holistic governance.
Currently, only 60% of companies have full visibility into software provenance in production. By combining DevOps and MLOps into a single Software Supply Chain, organizations can better achieve their shared goals of rapid delivery, automation, and reliability, creating an efficient and secure environment for building, testing, and deploying the entire spectrum of software, from application code to machine learning models.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
The Last of Us season 2 episode 7 is out now – and, with it, the incredibly popular show's latest installment has come to an end.
Like its predecessor, season 2 of HBO's TV adaptation has been appointment viewing for all of us over the past seven weeks. And, as the dust settles on its near-50-minute finale, I imagine you've got some big questions about what happened and the show's future.
So, how does The Last of Us season 2 end? Are there any end credits scenes? And when do we think season 3 will arrive worldwide? I'll aim to answer those questions below, but bear in mind that full spoilers immediately follow for The Last of Us' season 2 finale. Make sure you've watched it before you proceed.
Who dies in The Last of Us season 2 episode 7?

RIP, Jesse (Image credit: HBO)

The Last of Us TV show's latest episode contains three big character deaths.
The most unexpected of those, and arguably the most shocking one since Joel's demise in season 2 episode 2, is Jesse's. The close friend of Ellie and Dina's ex-boyfriend (and father of Dina's unborn child) is killed by Abby when she single-handedly storms the Seattle theater that's been Ellie and Dina's base of operations since this season's fourth episode.
Jesse's death probably won't shock those who have played The Last of Us Part II, aka the Naughty Dog video game season 2 is based on. And if you'd been paying attention to the foreshadowing throughout season 2's final episode, such as Jesse constantly expressing his wish to get out of Seattle in one piece, I doubt you would've been stunned by his passing, either.
Mel and Owen are two of three big casualties in The Last of Us season 2 finale (Image credit: HBO)

But why does Abby kill him? The reason is simple: Ellie accidentally killed Owen and Mel, two members of Abby's party who helped her track down and murder Joel in episode 2. Abby, then, wants revenge for the deaths of two of her closest friends.
Having learned of Abby's location from Nora in episode 5 – that being Seattle's aquarium, not far from the city's unmissable Ferris wheel – Ellie infiltrates the building and encounters Owen and Mel while searching for Abby.
Still traumatized by how brutally she tortured Nora two episodes ago, Ellie says she won't shoot Owen and Mel if they tell her where Abby is now. Owen initially refuses, but to buy himself and Mel some time, he eventually agrees to show Ellie where she can find Abby on a map.
However, as Owen approaches the map on a table, he makes a move to grab a handgun and shoot Ellie. Unfortunately for Owen, Ellie's survival instincts kick in and she fires first.
Three down, two to go, eh Ellie? (Image credit: HBO)

The bullet passes through Owen's neck, killing him instantly. After exiting the back of Owen's throat, it hits Mel, who's standing behind him. The bullet slices her neck, nicking an artery in the process, which results in Mel collapsing and bleeding out.
Ordinarily, this would be a tragic accident in its own right – after all, Mel was unarmed and made no attempt to harm Ellie. However, Mel makes things even worse for Ellie (and, by proxy, us as viewers) before she dies by revealing she's heavily pregnant.
If Ellie felt incredible guilt and shame over what she'd done to Nora, she feels 50 times worse over not only taking Mel's life, but also that of her innocent unborn child. It's a moment that hits home even harder when you consider how much danger Ellie has put a pregnant Dina in since the pair left Jackson, Wyoming, too.
Abby tracks down Ellie and company to get revenge for Mel and Owen's deaths (Image credit: HBO)

Jesse, Owen, and Mel aren't the only casualties of season 2 episode 7 – well, that's what The Last of Us wants you to think. One of the finale's last shots shows Abby pointing her sidearm at an unarmed Ellie, who shouts "no no no!" before the screen cuts to black as a shot is fired.
There's no way that the hit Max show just bumped off another of its main characters in Ellie, right? In short: no, she doesn't die. Ellie is the protagonist of this TV series and The Last of Us Part II. Spoilers notwithstanding, her story is far from over in HBO's live-action adaptation.
So, who fired the shot that we hear? I'm not going to ruin that now. You'll just have to wait for season 3 (more on this later) to arrive. Or, you know, you could watch a playthrough of The Last of Us 2 on YouTube if you want an answer ASAP.
Is there a mid-credits scene in The Last of Us season 2 episode 7?

As of season 2 episode 7, Dina is still alive (Image credit: Liane Hentscher/HBO)

There's no mid-credits scene to stick around for.
This season's final scene doesn't count as one, either. Sure, it drops a big hint about how season 3 will begin (more on this shortly), but it's a brief scene that takes place before the end credits start to roll. So, it can't be classed as a traditional mid-credits stinger.
Does The Last of Us season 2's final episode have a post-credits scene?

Expect to see more of Isaac in The Last of Us' third season (Image credit: Liane Hentscher/HBO)

Nope. The Last of Us season 2 doesn't have a post-credits scene, either. Based on how the show's latest episode ends, it doesn't need one.
When will The Last of Us season 3 be released?

Trying to get word on when season 3 will make its worldwide debut like... (Image credit: Liane Hentscher/HBO)

We don't know. HBO only confirmed that The Last of Us season 2 wouldn't be the hit series' final chapter in April, so it'll be a few years before one of the best Max shows' third season is released.
It's likely that work has been going on behind the scenes on season 3 for some time. Indeed, I'd be surprised if the show's chief creative team hasn't been penning its scripts, location scouting, and conducting other pre-production elements for months at this point.
Nevertheless, with filming yet to begin on The Last of Us season 3, I suspect it'll be mid-2027 at the earliest before it launches worldwide.
What does The Last of Us' season 2 finale tell us about the plot of season 3?

Season 3's first few episodes will jump back in time to depict events from Abby's viewpoint (Image credit: HBO)

Season 2 episode 7's final scene suggests that next season will give us an entirely different perspective on the events that play out during Ellie and Dina's first 72 hours in Seattle.
After the screen cuts to black in this season's finale, many viewers might have expected the credits to roll, thereby leaving us on a cliffhanger.
Instead, a new scene begins seconds later, reuniting us with Abby as she's woken up by Manny. He tells her that "they" won't be happy if she keeps them waiting, to which Abby replies she'll be there in five minutes.
Once she's fully come to, Abby steps out onto a balcony overlooking a football stadium that's been repurposed as a headquarters for the Isaac-led antagonistic faction known as the Washington Liberation Front (WLF). After she surveys the scene, Abby heads back inside as the words 'Seattle, Day One' appear in the bottom left-hand corner of the screen.
We'll witness Ellie's first 72 hours in Seattle from Abby's perspective next season (Image credit: HBO)

This is the same location and time stamp that appeared in season 2 episode 4 when Ellie and Dina first arrive in Seattle. So, The Last of Us season 3's first few episodes, if not the entirety of next season, will travel back in time and cover the same three-day period in the US Pacific Northwest city through Abby's eyes.
That won't be a surprise to those who have played The Last of Us Part II. As the deuteragonist of the aforementioned video game, Abby was a playable character for half of the story depicted in the second entry of Naughty Dog's acclaimed and multi-award-winning game franchise. That means her side of the Seattle-based story, which runs concurrently to Ellie's, will be brought to life in season 3 of HBO's TV adaptation.
There's a lot of ground to cover in the Abby-centric part of the story, too. What were Owen and Mel planning to do before Ellie interrupted them? Who's the father of Mel's baby? How did Abby know where to find Ellie and co. in Seattle? What convinced Isaac to choose Abby as the WLF's new leader? Why does Isaac believe the WLF's current leadership is set to perish during the assault on the Seraphites' main headquarters? And does Manny meet the same fate as Owen, Mel, and Nora at Ellie's or someone else's hands, or is he still alive somewhere?
These questions will need answering in season 3 and beyond if The Last of Us officially ends with its rumored four-season plan. I could provide more details now, but again, I don't want to spoil anything significant about Ellie and Abby's journeys from this point on in the story. So, unless you scour the internet for answers now, you'll have to wait until season 3 arrives for them.
At Computex 2025, Maxsun unveiled a striking new entry in the AI hardware space: the Intel Arc Pro B60 Dual GPU, a graphics card pairing two 24GB B60 chips for a combined 48GB of memory.
ServeTheHome claims Maxsun envisions these cards powering dense workstation builds with up to four per system, yielding as much as 192GB of GPU memory in a desktop-class machine.
This development appears to have Intel's implicit approval, suggesting the company is looking to gain traction in the AI GPU market.
A dual-GPU card built for AI memory demands

The Arc Pro B60 Dual GPU is not designed for gaming. Instead, it focuses on AI, graphics, and virtualization tasks, offering a power-efficient profile.
Each card draws between 240W and 300W, keeping power and thermal demands within reach for standard workstation setups.
Unlike some alternatives, this card uses a blower-style cooler rather than a passive solution, helping it remain compatible with conventional workstation designs. That matters for users who want high-end performance without building custom cases or cooling systems.
Still, the architecture has trade-offs. The card relies on x8 PCIe lanes per GPU, bifurcated from a x16 connector. This simplifies design and installation but limits bandwidth compared to full x16 cards.
Each GPU also includes just one DisplayPort and one HDMI output. That design choice keeps multi-GPU setups manageable and avoids hitting OS-level limits; older Windows versions, for example, may have trouble handling more than 32 active display outputs in a single system.
The card’s most intriguing feature may be its pricing. With single-GPU B60 cards reportedly starting around $375 MSRP, the dual-GPU version could land near $1,000.
If that estimate holds, Maxsun’s card would represent a major shift in value. For comparison, Nvidia’s RTX 6000 Ada, with the same 48GB of VRAM, sells for over $5,500. Two of those cards can push costs north of $18,000.
Even so, Intel’s performance in professional applications remains an open question. Many creative professionals still favor Nvidia for its mature drivers and better software optimization.
After many months of speculation, Google finally showed off its still-early-days Android XR smart glasses prototype. It was an impressive live demo, with a live translation portion that went off well, though not without hitches. Still, it got the crowd at Google I/O going, and right after that opening keynote wrapped, I strolled around the Shoreline Amphitheater to find a pair to try.
Much like my time with Project Moohan, the prototype Android XR headset that Google and Samsung are working on, I only spent about five minutes with these prototype glasses. And no, it wasn’t a sleek frame made by Warby Parker or a wild one from Gentle Monster – instead, it was the pair Google demoed on-stage, the prototype Android XR glasses made by Samsung.
As you can see above, much like Meta Ray-Bans and unlike the first-gen Snapchat Spectacles, these prototypes look like standard black frames. They're a bit thicker along the left and right stems, but they’re also loaded with tech – though not in a way that screams it from the outside.
It was a short, pretty rushed demo, but certainly a compelling one.
(Image credit: Jacob Krol/Future)

The tech here is mostly hidden – there is a screen baked into the lens, which, when worn, appears as a little box when it’s showing something larger. Otherwise, when I first turned the glasses on, I saw the time and the weather hovering at the top of my field of vision.
When I pressed the button on the right stem to capture a photo, the shot flashed up, semi-transparent, in my field of vision – a neat and slightly more present way of capturing than on the screen-less Meta Ray-Bans.
These are both cool, and during the keynote, Google also shared that the screens could be used for messaging, calls, and translating as well, but I didn’t get to try that. While I couldn’t ask for directions myself, a Google rep within my demo was able to toss up what navigation would look like, and this feature has me more excited about smart glasses with a screen built in.
Why? Well, the experience of navigating doesn’t get in the way of my field of view – I can still look straight ahead and see at the top that in 500 feet or 50 feet I need to make a right onto a specific avenue. I don’t need to look down at my phone or glance at my wrist; it’s all housed in just one device.
If I need more details or want to see my route, I could glance down to see a mini version of the map, which moved as I moved my head. If I wore these in NYC, I could walk normally and glance at the top to see directions, but when safely stopped and not in the way of others, I could look down to see my full route. That’s pretty neat to me.
(Image credit: Jacob Krol/Future)

The projected screen itself had good-enough quality, though I’m not sure how it performs in direct sunlight, as I tested these in a little room that Google had constructed. It’s important to remember that this is still a prototype – Google has several brands onboard to produce these, but there isn’t an exact timeframe. Developers will be able to start developing and testing by the end of the year, though.
This year, the Project Moohan headset, which also runs Android XR, will arrive. Samsung will ship the headset in a to-be-revealed final version, which could build support from third parties and let Google get more feedback on the platform.
Gemini, Google’s very wise AI assistant, blew me away on Project Moohan and was equally compelling on the Android XR glasses. I asked it for the weather, and got it to give me an audio report of the next few days, had it analyze a replica of a painting, and even look at a book, tell me the reviews, and where I could purchase it.
That power of having Gemini in my frame has me really excited for the future of the category – it’s the audio responses, the connection to the Google ecosystem, and how it plays with the onboard screen. It remains to be seen how Samsung’s final design might look, but it will likely sit alongside several other Android XR-powered smart glasses from the likes of Warby Parker, X-Real, and Gentle Monster, among others.
I’ve long worn Meta Ray-Bans and enjoy those for snapping unique shots or recording POVs like walking my dog Rosie or riding an attraction at a Disney Park. Similarly, I really enjoyed the original version of the Snapchat Spectacles, but the appeal wore off. Those both did only a short – or in the case of the Spectacles, very short – list of functions, but Android XR as a platform feels a heck of a lot more powerful, even from a short five-minute window.
While the design didn’t sell me on Samsung’s prototype, I have high hopes for the Warby Parker ones. Seeing how Gemini’s smarts can fit into such a small frame and how a screen can be genuinely useful but not overly distracting really has me excited. I have a feeling not all of the Android XR glasses will appeal to everyone, but with enough entries, I’m sure one of them will pair form with function in a correct balance.
Gemini in glasses makes the category feel less like the future and more like the present, and considering this new entry, my eyes are set on what Meta does next and what Apple's much-rumored entry into the world of smart glasses will look like.
Twopan has launched the Nano SSD, a compact USB-C storage device with a built-in fingerprint reader, a feature we’d love to see more storage makers offer.
Weighing just 5g and the size of a stick of gum, the Twopan Nano SSD measures 20 x 13 x 5mm and offers 512GB of high-speed storage in a keychain-friendly design.
The product’s main appeal is, naturally, the biometric security it offers. The device supports up to 20 fingerprints and doesn't require apps or software. Twopan says you just plug it in and it works.
Broad compatibility

The Nano SSD connects via a USB-C 3.1 Gen 1 port and will work with devices like iPhone 15/16 Pro, MacBook Pro, iPad Air, Steam Deck, PS5, and Canon and Sony cameras that support USB-C file transfer.
It supports direct 4K and HD recording on the newest iPhone Pro models using HEVC (H.265) at 60fps, making it a good choice for content creators working in high-resolution formats.
It is also compatible with Android phones from Samsung and Google, offering wide usability without the need for adapters or extra cables.
Twopan says it fits into phone cases that are 3mm thick or less, making it even easier to use on the go without removing protection.
Despite its tiny size, the Nano SSD delivers up to 450MB/s read and write speeds. It’s water and dust resistant with an IP65 rating and is drop-tested from heights of up to 10 meters. The casing is made from aluminum and shockproof plastic, offering additional durability for users who travel or work outdoors.
Twopan Nano SSD is currently live on Kickstarter with a retail price around $99. The creators were seeking $1,277 in funding and managed to pull in over $197,000 from more than 1,600 backers. Shipping is expected in August 2025.
Like most crowdfunded hardware, there’s always a chance for delays or changes. But if it delivers on promises, this could well be one of the most secure portable drives around.
Another Netflix library reshuffle is about to happen and, while we're excited to see the return of Squid Game season 3, aka one of the best Netflix shows, we mustn't forget about the movies being removed from the streamer's back catalog.
Christopher Nolan's The Dark Knight trilogy stands out like a sore thumb among the films leaving Netflix this June, and you don't have too long left to catch them as they'll vanish come June 1. The same goes for these three movies with 94% on Rotten Tomatoes, so catch some of the best Netflix movies (from a third-party perspective, anyway) before they depart.
TV shows are usually up for the chopping block, too, but seasons 1 to 3 of The Equalizer are the only casualties of the best streaming service's June 2025 culling. So, TV buffs can sit back without worrying that their favorite shows will be axed.
Everything leaving Netflix in June 2025

Leaving on June 1
Batman Begins (movie)
Beginners (movie)
Burlesque (movie)
Closer (movie)
Cult of Chucky (movie)
Daddy Day Care (movie)
The Dark Knight (movie)
The Dark Knight Rises (movie)
Den of Thieves (movie)
From Prada to Nada (movie)
GoodFellas (movie)
Ma (movie)
Magic Mike XXL (movie)
Pride & Prejudice (movie)
Ted (movie)
Ted 2 (movie)
Two Weeks Notice (movie)
Leaving on June 11
Gran Turismo: Based on a True Story (movie)
Trap (movie)
Leaving on June 14
Godzilla x Kong: The New Empire (movie)
Leaving on June 16
The Equalizer seasons 1-3 (TV show)
Won't You Be My Neighbor? (movie)
Leaving on June 17
Carol (movie)
Leaving on June 19
Migration (movie)
Leaving on June 21
American Sniper (movie)
Leaving on June 22
Brain on Fire (movie)
Leaving on June 26
Ordinary People (movie)
In a market where storage capacities and speeds are constantly evolving to meet the needs of AI and cloud infrastructure, another player has stepped forward with a bold offering.
TeamGroup has announced its entry into the 64TB SSD space with the T-CREATE MASTER Ai I5U U.2 PCIe 5.0 SSD, a high-capacity solid-state drive built with enterprise workloads in mind.
This launch comes about a year after Western Digital teased a similar PCIe Gen5 model for AI applications, and five years after Nimbus Data introduced the first 64TB SSD, the ExaDrive NL series.
Enterprise-first design with next-gen performance specs

Unlike consumer SSDs competing for a spot among the best portable drives, TeamGroup’s latest entry is aimed squarely at enterprise environments.
With support for the U.2 PCIe 5.0 interface and storage capacity maxing out at 64TB, the I5U is positioned as a tool for cloud-based databases and edge computing.
According to TeamGroup, it is “designed specifically for cloud infrastructure and database applications” and optimized for the demands of “large language models” and intensive AI-driven workloads.
PCIe Gen5 has become the benchmark for future-proof performance in both consumer and enterprise sectors, but claims such as “ultra-fast PCIe Gen5 speeds with enterprise-grade endurance” should be treated with caution.
Until third-party benchmarks emerge, it’s difficult to evaluate the drive’s real-world reliability and performance.
Past efforts to identify the best SSDs based purely on theoretical throughput have often ignored key factors like thermal performance, latency under load, and sustained write consistency, all of which are critical in large-scale deployments.
TeamGroup’s entry also arrives amid a broader trend of high-capacity SSDs hitting the market. From Solidigm’s 61.44TB D5-P5336 to Micron’s 61.44TB 6550 ION SSD, competition in the ultra-high-capacity segment is heating up.
One element that remains unclear for TeamGroup’s I5U is pricing. Enterprise-grade drives at this scale rarely come cheap, but TeamGroup is known for value-oriented options.
This raises speculation that its 64TB SSD might come closer to affordability than previous alternatives.
While it's unlikely to ever replace the best external HDDs in terms of raw cost per gigabyte, it signals that ultra-high-capacity SSDs are edging closer to broader adoption.
While One UI 7 has only just been pushed out to the masses by Samsung, it looks as though One UI 8 will be following it very shortly – and the software upgrade could well come with a new Running Coach feature included.
As spotted by tipster @tarunvats33 (via Android Central), a message sent through the Samsung Members app on Galaxy devices gives instructions for joining the One UI 8 beta program, suggesting it's going to be opened up in the near future.
One UI 8 is Samsung's take on Android 16, and it makes sense for Samsung to try to get as close to Google's software update cycle as possible. Google has hinted at a June launch for Android 16, with rumors pointing to Tuesday, June 3 as the big day.
Samsung hasn't said anything officially about dates or availability, but it seems likely that the Samsung Galaxy S25 series will be the first devices eligible to be signed up for the beta program, for those who want to try it ahead of the full release.
Getting running coaching
Samsung Running Coach #OneUI8 #Samsung #OneUI pic.twitter.com/EPF2ZiP4hw — May 23, 2025
As the One UI 8 beta program gets closer to opening up, we've also got a tip about a new Running Coach feature, which was spotted by @GerwinvGiessen (via SamMobile). It's possible the feature will be part of the Samsung Health app, or a standalone app.
Based on screenshots posted to social media, the coach uses the tracking capabilities of your phone or smartwatch to analyze your current running level, and then makes personalized recommendations about improvements.
"Running Coach uses 'level up' assessments to determine your fitness level and adjust your running program accordingly," one of the information screens says. "This helps track your progress and keep the risk of injury low as you gradually improve your fitness."
There's been no announcement about any of this from Samsung, but we might get one in July sometime: that's when the Samsung Galaxy Z Fold 7 and Samsung Galaxy Z Flip 7 are expected to arrive, and they're rumored to be coming with One UI 8 on board.
Pistachio is the hottest flavor of 2025, and if you've walked past a high street coffee shop recently, you'll almost certainly have seen ads for green-tinted lattes in the window. The trend started last year, when pistachio-filled Dubai chocolate (initially created to satisfy pregnancy cravings) began appearing in social media videos, and now Nespresso has got in on the act with a vanilla and pistachio-flavored coffee pod.
I spotted the green and cream-colored Nespresso Vertuo Vanilla Pistachio capsules on the Nespresso website a couple of weeks ago, and couldn't resist adding a pack to my order. So what are they like, and can they compete with the likes of Starbucks?
First, a word on serving. Each Vertuo pod produces a double shot of flavored espresso, and is intended to be enjoyed as a long drink over ice. I wouldn't recommend drinking the shot straight – although it's based on sweet arabica beans that would probably be delicious alone, the natural vanilla and pistachio flavor is powerful enough that it's best diluted.
(Image credit: Future)
I inserted a pod into my trusty Nespresso Vertuo Pop (one of the best Nespresso machines around if you have a small kitchen), attached the drip tray platform to raise my measuring cup to the appropriate height, locked the lid and hit the brew button.
A few seconds later, I had a sweet-smelling double espresso ready to be poured over a generous helping of ice. I'm fond of an iced latte, so I finished my drink with cold foam made using the De'Longhi Primadonna Aromatic, which I'm currently testing.
If you don't have a milk frother for your Nespresso machine, take a look at the Nespresso Aeroccino 4, which is a compact standalone device that can produce hot or cold foam using dairy or plant-based milk.
(Image credit: Future)
I tried the Vanilla Pistachio Nespresso coffee as an iced caffe latte and an iced latte macchiato. Overall, I'd recommend the first option, as the potent flavor benefits from being combined thoroughly with the milk. There's no need to worry about watering down the taste.
When you get the balance right, the result is one of the tastiest Nespresso drinks I've tried so far. Sometimes flavored coffees can have a bitter or artificial-tasting edge, but that's not the case here. With plenty of ice and cold milk, you get a refreshing and well-rounded flavor that's tasty but not excessively sweet.
If you feel like something more dessert-like, Nespresso has a recipe for an iced pistachio vanilla oat latte using the capsules, which includes marshmallows for extra indulgence.
(Image credit: Future)
It's delicious, but I also picked up a pack of Nespresso's Coconut Vanilla pods with my order, so I'm interested to see how the two compare. These are also intended to be served cold over ice, but unlike the pistachio pods, they brew a long black coffee to be savored like cold brew. Will they be as refreshing? I'll find out soon.
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Sunday's puzzle instead then click here: NYT Strands hints and answers for Sunday, May 25 (game #448).
Strands is the NYT's latest word game after the likes of Wordle, Spelling Bee and Connections – and it's great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc's Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don't read on if you don't want to know the answers.
NYT Strands today (game #449) - hint #1 - today's theme
What is the theme of today's NYT Strands?
• Today's NYT Strands theme is… Body language
NYT Strands today (game #449) - hint #2 - clue words
Play any of these words to unlock the in-game hints system.
• Spangram has 8 letters
NYT Strands today (game #449) - hint #4 - spangram position
What are two sides of the board that today's spangram touches?
First side: left, 4th row
Last side: right, 5th row
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Strands today (game #449) - the answers
(Image credit: New York Times)
The answers to today's Strands, game #449, are…
I struggled with today’s Strands, despite a great start: I saw HANDSHAKE and SALUTE immediately, and then the GESTURES spangram.
A hint gave me SHRUG, but I struggled to see NAMASTE among the seven letters before me. I think this may be due to thinking that namaste was just a greeting rather than a gesture – although thinking about it, I realize it’s something that's never said without the palms coming together at the chest and a slight bow of the head.
Meanwhile, KOWTOW is a word that really should be used more commonly to describe political discourse in various countries around the globe – once a prominent part of Chinese rituals where underlings would suffer the indignity of submission, now kowtowing goes on everywhere.
How did you do today? Let me know in the comments below.
Yesterday's NYT Strands answers (Sunday, May 25, game #448)
Strands is the NYT's not-so-new-any-more word game, following Wordle and Connections. Now a fully fledged member of the NYT's games stable, it has been running for a year and can be played on the NYT Games site on desktop or mobile.
I've got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you're struggling to beat it each day.
A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Sunday's puzzle instead then click here: NYT Connections hints and answers for Sunday, May 25 (game #714).
Good morning! Let's play Connections, the NYT's clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.
What should you do once you've finished? Why, play some more word games of course. I've also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc's Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don't read on if you don't want to know the answers.
NYT Connections today (game #715) - today's words
(Image credit: New York Times)
Today's NYT Connections words are…
What are some clues for today's NYT Connections groups?
Need more clues?
We're firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today's NYT Connections puzzles…
NYT Connections today (game #715) - hint #2 - group answers
What are the answers for today's NYT Connections groups?
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Connections today (game #715) - the answers
(Image credit: New York Times)
The answers to today's Connections, game #715, are…
I immediately thought that JOKER must have something to do with cards – and I wasn’t wrong. However, I didn’t see CARD GAMES WITH FIRST LETTER CHANGED; kudos if you’re one of those clever people who did.
ITEMS IN A LINEN CLOSET was easier to spot, although I had “white wash load” in my mind, as that’s when I tend to see them together rather than a designated area.
I made my mistake with DIAMETRIC, thinking that POLAR didn’t fit. I took a stab in the dark with SLIP, thinking the group had something to do with evasion or trickery before seeing sense.
How did you do today? Let me know in the comments below.
Yesterday's NYT Connections answers (Sunday, May 25, game #714)
NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: green is easy, yellow a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don't technically need to solve the final one, as you'll be able to answer that one by a process of elimination. What's more, you can make up to four mistakes, which gives you a little bit of breathing room.
It's a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up with tricks. For instance, watch out for homophones and other wordplay that could disguise the answers.
It's playable for free via the NYT Games site on desktop or mobile.
A new Quordle puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Sunday's puzzle instead then click here: Quordle hints and answers for Sunday, May 25 (game #1217).
Quordle was one of the original Wordle alternatives and is still going strong now more than 1,100 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.
Enjoy playing word games? You can also check out my NYT Connections today and NYT Strands today pages for hints and answers for those puzzles, while Marc's Wordle today column covers the original viral word game.
SPOILER WARNING: Information about Quordle today is below, so don't read on if you don't want to know the answers.
Quordle today (game #1218) - hint #1 - vowels
How many different vowels are in Quordle today?
• The number of different vowels in Quordle today is 4*.
* Note that by vowel we mean the five standard vowels (A, E, I, O, U), not Y (which is sometimes counted as a vowel too).
Quordle today (game #1218) - hint #2 - repeated letters
Do any of today's Quordle answers contain repeated letters?
• The number of Quordle answers containing a repeated letter today is 1.
Quordle today (game #1218) - hint #3 - uncommon letters
Do the letters Q, Z, X or J appear in Quordle today?
• No. None of Q, Z, X or J appear among today's Quordle answers.
Quordle today (game #1218) - hint #4 - starting letters (1)
Do any of today's Quordle puzzles start with the same letter?
• The number of today's Quordle answers starting with the same letter is 0.
If you just want to know the answers at this stage, simply scroll down. If you're not ready yet then here's one more clue to make things a lot easier:
Quordle today (game #1218) - hint #5 - starting letters (2)
What letters do today's Quordle answers start with?
• B
• H
• A
• S
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
Quordle today (game #1218) - the answers
(Image credit: Merriam-Webster)
The answers to today's Quordle, game #1218, are…
A terrible round for me today, with one bad guess – LEAPT instead of BLEAT – and one unlucky one – HOODY instead of HOWDY.
The pair of errors took me close to the edge, something I’m not used to since finally discovering the three-starter-word technique. Ah, memories.
How did you do today? Let me know in the comments below.
Daily Sequence today (game #1218) - the answers
(Image credit: Merriam-Webster)
The answers to today's Quordle Daily Sequence, game #1218, are…
Minisforum has announced what it calls a game-changer for AI deployment in compact computing environments: the MS-S1 Max, a 2U rackmount system powered by AMD’s Ryzen AI Max+ 395.
Minisforum says this system is designed to "revolutionize your AI workflow," but it marks an unusual departure from established norms.
While its 3.2-liter form factor and all-in-one design are drawing praise for efficiency, the core configuration raises uncomfortable questions for AMD.
Minisforum’s approach threatens AMD’s server ecosystemMinisforum has opted not to use AMD’s EPYC processors, designed explicitly for server tasks, and instead fitted what is effectively a mobile-class chip into a server chassis.
Although the MS-S1 Max is marketed as the best SMB server and even hints at broader enterprise ambitions, it’s difficult to ignore that this is a repurposing of hardware intended for a different context.
The Ryzen AI Max+ 395 is not a server CPU by design. It’s optimized for client workloads, featuring integrated Radeon graphics and an AI NPU.
What it offers, however, is a cost-effective and power-efficient solution for companies seeking local AI inference or the ability to run large models like DeepSeek 70B without the overhead of traditional infrastructure.
That edge makes it appealing to universities, labs, and AI startups, but it also turns the system into a wildcard in AMD’s carefully managed product segmentation. This unconventional use could complicate AMD’s broader strategy. EPYC chips are built for reliability, scalability, and intensive server workloads, and they command higher margins.
A surge of mini PC makers embedding consumer-grade Ryzen chips into rackmount systems might blur the line between consumer and enterprise offerings.
Still, the MS-S1 Max’s value proposition is hard to ignore. By delivering strong on-chip graphics and directing substantial memory bandwidth to its GPU, it offers a local AI engine at a fraction of the cost of traditional server gear.
That said, the catch lies in support, reliability, and long-term performance. Ryzen chips, while powerful, lack ECC memory support and validated server-grade features.
This makes them a questionable fit for mission-critical deployments, and puts AMD in a tough position. If demand grows, AMD may be forced to either restrict such uses or embrace them, potentially undermining its EPYC business.
The 2U system is expected to launch in the second half of the year.
In a tech landscape where external drives often blur into a sea of similar features and designs, TeamGroup’s new portable SSD takes a sharp detour into espionage territory.
The T-Create Expert P35S Destroyed Portable SSD introduces something previously unheard of in the mainstream consumer market: a one-click data destruction mechanism.
While the concept may sound like something pulled straight from a spy thriller, TeamGroup says the device is intended for professionals who handle sensitive or classified information.
A self-destruct SSD that promises true data erasure
This external SSD stands out thanks to its patented “physical chip destruction circuit.”
Unlike standard data wipes, this feature claims to electrically destroy the data stored on the drive, making it completely irretrievable.
The wipe is triggered by a two-step process TeamGroup calls an “anti-mistouch” system: users must both click and slide to activate it.
It’s not exactly a big red button, but the dramatic undertone is part of the appeal.
The P35S, which weighs just 42 grams and measures 90 x 40 x 18 mm, offers 1,000MB/s transfer speeds via a USB 3.2 Gen 2 Type-C port.
This led to the bold “transfer 10GB in just 10 seconds - ready for anything” slogan on TeamGroup’s display at Computex 2025.
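The slogan is at least internally consistent: at the rated speed, 10GB does take roughly ten seconds, as this back-of-the-envelope check shows (decimal units assumed; sustained real-world speeds are usually lower than rated peaks).

```python
# Sanity-check the "transfer 10GB in just 10 seconds" slogan against the
# rated 1,000MB/s USB 3.2 Gen 2 speed. Assumes decimal units (1GB = 1,000MB);
# real transfers slow down with small files and thermal throttling.
rated_speed_mb_s = 1_000   # rated sequential speed, MB/s
payload_mb = 10 * 1_000    # 10GB payload expressed in MB

transfer_time_s = payload_mb / rated_speed_mb_s
print(transfer_time_s)     # 10.0 seconds at the rated peak
```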
While it won’t top charts for the best SSD in terms of performance alone, it offers enough throughput for on-the-go file handling.
What’s most notable is the P35S’s target audience. TeamGroup references users such as journalists, corporate executives, and government officials, people who might need to dispose of confidential data instantly.
“Designed for end-users who carry highly confidential documents, the SSD prevents data breaches and ensures that personal and confidential information remains protected under all circumstances,” the company says.
There’s even a nod toward “defense use,” which, depending on your level of cynicism, could either suggest genuine intent or feel like a PR stretch, especially in light of recent high-profile data mishandling cases.
The SSD’s compact size and data wipe feature make it easy to picture in the hands of an undercover agent or whistleblower.
But in practical terms, it may also raise concerns about accidental erasure, especially for users prone to fidgeting. That’s one reason I’d love to test this device myself.
This isn't likely to be a top choice for gamers or media creators, but for users who prioritize security over speed, it may offer real value.
Dell has unveiled an AI PC with a never-before-seen feature it hopes will spur the next level of productivity.
Revealed at Dell Technologies World 2025, the new Dell Pro Max Plus laptop is the first to feature an enterprise-grade discrete NPU, offering the opportunity to carry out high-intensity AI tasks even on the move.
The mobile workstation features a Qualcomm AI 100 PC Inference Card with 32 AI-cores and 64GB memory, which Dell says should be more than enough to handle the needs of AI engineers and data scientists deploying large models for edge inferencing.
Dell Pro Max Plus (Image credit: Dell Technologies)
Speaking at the event, company CEO Michael Dell addressed the upcoming Windows 10 end of life, hinting that for many users, the ideal solution is to buy an AI PC such as the Dell Pro Max Plus.
“Personal productivity is being reinvented by AI,” Dell said. “The install base of a billion and a half PCs is ageing, and it’s being replaced with AI innovation.”
“The Windows 10 end of life is coming, and we are ready - Dell is the leader in commercial AI PCs, and we’re further distancing ourselves from the competition.”
The CEO highlighted the new Dell Pro Max device during his keynote address, noting it would be ideal for developers and scientists, offering up to 20 petaflops of performance due to embedded Nvidia GB300 hardware, and up to 800GB of memory - enough to run and train models with a trillion parameters.
“Today’s PCs are becoming AI workstations - blazing fast, all-day battery life powered by NPU and GPU innovation," Dell declared.
SanDisk’s new WD Black SN8100 PCIe Gen5 SSD is fast, efficient, and engineered to meet the demands of gamers and power users alike.
The drive uses a PCIe Gen5 x4 interface and is available in 1TB, 2TB, and 4TB capacities. Built around SanDisk's in-house 8-channel controller and BiCS 3D TLC NAND, it supports read speeds of up to 14.5 GB/s and write speeds up to 12.7 GB/s, placing it among the fastest Gen5 drives currently available.
However, despite the SN8100’s cutting-edge design and impressive benchmarks, Intel’s now-defunct, four-year-old Optane P5800X still holds the crown as the fastest SSD in real-world use.
Benchmarks suggest top speeds - but not across the board
In synthetic benchmarks like CrystalDiskMark and ATTO, the SN8100 breaks lab records for sequential throughput and random reads, reaching up to 2.3 million IOPS.
According to TweakTown, “this SSD is like none other; it’s at least 20% more powerful than any flash-based SSD we’ve ever encountered.”
It also demonstrates notable efficiency, consuming just 7 watts under load and requiring no active cooling, making it a serious contender for best SSD or the best portable SSD for enthusiast builds.
Still, synthetic benchmarks don’t always reflect real-world performance. In practical transfer tests, the SN8100 ranked ninth overall, indicating that while it's extremely fast, it's not without limitations, and it doesn't dethrone the Intel Optane P5800X.
Launched in 2021, the P5800X remains unmatched in real-world responsiveness and latency. While its sequential read speeds top out at 7.2 GB/s - slower than the SN8100 - its random read/write IOPS exceed 4.5 million, and latency frequently drops below 10 microseconds. That’s where it truly shines.
Flash-based SSDs like the SN8100 still rely on garbage collection and page-level management, leading to occasional latency spikes during small, random workloads. In contrast, the P5800X maintains consistent performance under heavy load, with no significant dips, a key reason why it’s still regarded as the fastest SSD ever made.
That said, the SN8100 is an impressive drive in its own right. Its controller is a customized version of Silicon Motion’s SM2508, enhanced with proprietary technologies like nCache 4.0 and WD Black Gaming Mode.
It also fits into the Sony PlayStation 5’s expansion slot, achieving read speeds of 6,550 MB/s in that setup, well above the console’s minimum requirement. However, with a price tag of $280 for the 2TB model, it clearly belongs in the premium tier.
HighPoint Technologies has unveiled a portable NVMe storage solution offering nearly a petabyte of capacity.
The new system features eight of Solidigm’s D5-P5336 122TB SSDs housed in the HighPoint RocketStor 6542AW NVMe RAID Enclosure. Together, these deliver 976TB of storage in a design compact enough for mobile or space-constrained environments.
The RocketStor 6542AW supports all eight SSDs through a single PCIe connection. HighPoint’s PCIe Switching technology enables high-speed data transfer rates, addressing the performance needs of data-heavy industries.
High-capacity NVMe storage, seamless scalability
“This collaboration between HighPoint and Solidigm is a game-changer in enterprise storage,” said May Hwang, VP at HighPoint Technologies.
“By qualifying the Solidigm D5-P5336 SSDs in our RocketStor 6542AW, we’ve created an unprecedented solution that combines high-capacity NVMe storage with seamless scalability."
The device itself is just under five inches tall and a little over nine inches long. Despite its small size, it offers full PCIe x16 connectivity, making it suitable for professionals on the move, small studios, or enterprise environments needing powerful storage in a limited space.
This setup supports applications such as artificial intelligence, big data analytics, and media production, where both speed and storage capacity are essential.
In AI and machine learning, model training often depends on fast access to large datasets.
With its Solidigm SSDs, the RocketStor 6542AW supports quicker training cycles. This helps researchers and developers manage workloads with improved efficiency.
For enterprise backup and HPC workloads, RAID support and high-speed connections offer secure, fast backups and low-latency data access.
HighPoint says the enclosure is well-suited for complex tasks such as engineering simulations and scientific research, where high throughput is necessary.
In media production, especially with 4K and 8K content, fast storage is key. The RocketStor 6542AW offers 28GB/s transfer bandwidth and ample room for large video files. This helps smooth editing and rendering workflows in film, animation, and design.
“As Hardware RAID adoption in the AI ecosystem is becoming more prevalent, this collaboration is significant using Solidigm industry-leading, high-capacity SSDs and HighPoint’s HW RAID enclosure,” said Mike Mamo, Senior Principal Engineer at Solidigm.
Solidigm’s D5-P5336 122TB SSDs have just gone on sale, priced at around $12,400. Eight of the mighty beasts will set you back a cool $99,200. The enclosure itself is $1,799 over at Amazon.
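Using the prices quoted above, the totals work out as follows (list prices only; street and volume pricing will differ):

```python
# Cost of a fully populated RocketStor 6542AW, at the article's quoted prices.
drive_price_usd = 12_400     # one Solidigm D5-P5336 122TB SSD
enclosure_price_usd = 1_799  # RocketStor 6542AW enclosure at Amazon

drives_total = 8 * drive_price_usd
print(drives_total)                         # 99200 for the eight drives
print(drives_total + enclosure_price_usd)   # 100999 for the full setup
```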
What do you think of the iPhone’s Dynamic Island? Apple’s pill-shaped cutout seems to be rather contentious, but I’m here to throw my hat into the ring firmly on the side of Team Island. Because unlike my colleague Lance Ulanoff, I absolutely love the Dynamic Island.
As I wrote recently, I’ve not been entirely convinced by Apple’s Action button in the six months that I’ve had my iPhone 16 Pro. But the Dynamic Island is a feature that I really can’t help but admire for both its functionality and its aesthetics.
Yet start browsing social media and you’ll quickly get the feeling that it has received a lot of negative feedback since it arrived with the iPhone 14 Pro. Even the opinions that aren’t negative seem to be closer to an apathetic shrug.
Perhaps part of that is driven by reviewers and enthusiasts who get new iPhones every year. For them, the novelty has probably worn off. But for someone like me who stepped up from an iPhone 12 Pro to an iPhone 16 Pro, the Dynamic Island has been brilliant.
What’s the big deal?
(Image credit: Apple)
Ultimately, my appreciation for the Dynamic Island comes down to a few factors.
For one thing, it lets me tweak timers and fiddle with podcast playback without having to open the apps themselves. Before, I was constantly switching back and forth between apps when I only needed to make the slightest adjustment. That quickly grew old, but with the Dynamic Island, there are far fewer breaks in my workflow.
As well as that, it handles AirDrop requests without needing to open another window. It gives extra controls, like adjusting my iPhone’s flashlight strength and beam dimensions. There are even mini games from third-party developers, like Pixel Pals from the creator of Apollo (previously one of my favorite iOS apps before it was shut down).
Add to that the fact that the Dynamic Island is animated beautifully with smooth transitions and effects that just look sumptuous. That means that when I have to use it, I enjoy it.
All this is far better than the ugly notch on my old iPhone 12 Pro. That was purely functional and simply existed to hide the front-facing cameras and Face ID sensor array. The Dynamic Island still does that job, but it actually makes your iPhone’s cutout useful. Instead of taking away from your phone, it adds to it.
Apple didn’t just throw its hands up and admit it couldn’t hide these cameras and sensors – the company thought up a truly elegant solution, and I’m really glad that it did.
A beautiful halfway measure
(Image credit: Future | Alex Walker-Todd)
That’s not to say that the Dynamic Island is a flawless masterpiece – it’s absolutely not, and there are plenty of criticisms to be made of it.
The big one is that it can obscure things on your screen, including both movies and games. That’s obviously far from ideal, and because Apple hasn’t yet managed to secrete the front-facing cameras and sensors under the display, there’s no way around it.
And what about if you have more than one item in the Dynamic Island? In that case, the items can get shrunk down – or simply not appear at all. Apple’s cutout can hold two apps at a time (one large and one small), and while you can swipe across to expand or minimize its contents, you can’t have any more than that. Anything else simply isn’t shown.
While I can understand these frustrations, they’re not enough to put me off the Dynamic Island. For one thing, I don’t watch movies on my phone very often, so the obstruction isn’t particularly noticeable day to day.
For another, I know that the Dynamic Island is necessary right now. Apple hasn’t been able to hide things like the Face ID sensor array under the display – while it supposedly will soon, the feature is evidently not ready for prime time yet.
(Image credit: Future | Alex Walker-Todd)
Some of the best Android phones have tiny cutouts, sure, but their facial recognition tech is either inferior to Apple’s or missing altogether. I’d rather have Face ID and a Dynamic Island than no facial recognition and a less secure device.
In the future, it looks like Apple is going to eliminate the Dynamic Island to provide a smoother, less obstructed display. I’m sure this will look amazing, going on what Android manufacturers have managed so far at least.
When that happens, it will be fascinating to see what happens to the Dynamic Island’s functionality. I’m sure Apple will think up something intriguing, just as it did with the Dynamic Island itself.
Perhaps Tim Cook and friends will give us a device that works as an all-screen phone most of the time, but that adds a Dynamic Island-like pill to store active app features such as timers.
But as long as the Dynamic Island remains on my iPhone, I’m super happy with it. It’s a halfway measure, sure, but a functional and beautiful one nonetheless.
The Wheel of Time has spun its last narrative thread following its cancellation by Amazon after three seasons.
Per Deadline, Amazon pulled the plug on the high fantasy series yesterday (May 23) after it determined it would cost too much money to produce more seasons, particularly in light of the show's declining viewership.
Prime Video's top brass reportedly deliberated hard over this decision because executives enjoyed what the program offered. However, a significant drop-off in its viewership during The Wheel of Time season 3's run and its production costs were cited as the primary reasons for scrapping the entire series.
The official confirmation comes over a month after one of the best Prime Video shows' third season drew to a close. Before it did, stars Josha Stradowski and Daniel Henney, who play Rand and Lan, exclusively told me that they were "confident" about a season 4 renewal. Unfortunately, the pair, alongside the rest of the show's cast and crew, won't be back for more outings.
We won't see Elayne, Egwene, and Nynaeve again following The Wheel of Time's cancellation (Image credit: Amazon MGM Studios)
The Wheel of Time's demise is even tougher to take in light of its most recent eight-part installment being considered its best entry yet.
In my review of The Wheel of Time season 3, I called it a "spellbinding return to form for Prime Video's high fantasy TV show underdog" and handed it a four out of five stars rating.
Many critics agreed with me, too. Over on popular review aggregation website Rotten Tomatoes, season 3 is far and away the show's highest-rated chapter – its 97% critical rating outscoring season 2 (86%) and season 1 (81%) by some distance.
"I can’t believe they decided to cancel the show. I really had big hopes for another season, but I guess we won’t get another one. Truly one of the best shows out there, with such an amazing fandom, but I guess that does not matter. #TheWheelOfTime #savetwot" (pic.twitter.com/vK72LSXYQs, May 23, 2025)
Nevertheless, the writing has seemingly been on the wall for the Amazon TV Original for some time.
As the weeks ticked by following its latest season's finale, fans became increasingly concerned over the lack of updates about the show's future. Some grew so worried, in fact, that they banded together and formed a fan campaign calling on Amazon to finish The Wheel of Time's story.
When Prime Video's Upfront 2025 presentation in mid-May passed without a single mention, it became a case of when, not if, Amazon would bring the ax down on its second biggest high fantasy show. The biggest, of course, is The Rings of Power and, following its fantasy sibling's axing, Amazon's Lord of the Rings prequel series will single-handedly bear the weight of the fantasy genre on one of the world's best streaming services.
There'll be some fans who'll be glad to see the back of Amazon's live-action adaptation of Robert Jordan's book series namesake. Indeed, some have expressed frustration over what they considered to be unnecessary diversions from the source material – changes showrunner Rafe Judkins defended ahead of The Wheel of Time season 3's launch.
Still, this is a sad day for all associated with The Wheel of Time. I didn't expect my season 3 ending explainer to be one of the last major articles I'd write on it, but I suppose all good things must come to an end. I just wish the wheel had weaved for a bit longer...
It seems as though there's a serious problem with the new Whoop MG (Medical Grade) fitness bands that were launched just a couple of weeks ago, with many users reporting that their devices have crashed and stopped working.
As reported by Tech Issues Today, there are complaints all over forum boards and social media. The issue is the same: the device sensors stop working just hours after the Whoop MG is set up, with no warning signs.
The tracker stops displaying any LED lights, won't sync with the mobile app, and isn't responsive even when it's fully charged. All the user complaints say the Whoop MG stopped responding within the first day of use.
This appears to be a widespread problem too: you don't have to look for long on Reddit, X, or Whoop's official community forums to find frustrated users. These Whoop MG owners report that they're following standard troubleshooting steps, to no avail.
What to do if you're affected
"@WHOOP wow guys I was so excited to get my new MG sensor from you. After only 5 days it failed. The worst part is the absolute runaround I’ve been getting the last 24 hours trying to deal with I can only imagine is AI customer support. Way to make a bad experience WAY worse!" (May 21, 2025)
An official response from the Whoop team recommends fully charging the battery and then forcing a reset by rapidly tapping the top of the sensor multiple times until the side LED pulses blue. You might then find you can pair it again in the app.
These steps don't seem to be enough for some users, however. Affected Whoop MG owners are also being asked to contact Whoop support directly, and it seems some people are being sent replacements without asking for them, as bugs have been detected remotely.
If you have been affected, try the reset process described above, and then get in touch with Whoop – you should be sent a free replacement if your Whoop MG has suddenly stopped working and can't be recovered.
And it's worth saying that this isn't affecting everyone, with some users reporting flawless operation with their Whoop MG. In the days that the TechRadar team has spent with the wearable, we haven't noticed any issues with it either.