A surprising announcement at Nvidia GTC 2025 was the launch of the DGX Station, a powerful supercomputer-class workstation PC that looks a lot like a traditional tower computer but with an Arm-based CPU inside.
This is not the first workstation Nvidia has launched; it famously partnered with AMD on the precursor to the 2025 DGX Station, the DGX Station A100.
That one didn't have an Nvidia Arm CPU and needed separate PCIe AI accelerators (A100s); the 2025 iteration integrates its CPU and GPU and needs neither. The A100 model also carried a price of more than $100,000 at launch.
Nvidia DGX Station
Nvidia confirmed that Asus, Boxx, Dell, HP, Lambda and Supermicro will sell their own versions of the DGX Station; the big name missing is Lenovo.
The GB300 Grace Blackwell Ultra Desktop Superchip that powers it delivers up to “20 PFlops of AI performance”, which is likely measured using FP4 with sparsity.
That would also imply it offers half the performance of the GB200 Grace Blackwell Superchip, so something’s not clear here, and I wonder whether there’s more than one version of the GB300.
Nvidia hasn’t said how many or what type of (Arm) CPU cores the GB300 uses; ditto for the GPU subsystem.
Its predecessor, the GH200, had 72 Arm Neoverse V2 CPU cores clocked at 3.1GHz, up to 144GB of HBM3e memory and 480GB of LPDDR5X memory.
What we do know is that it has 784GB of unified system memory, which one can assume means HBM (288GB of HBM3e) plus what Nvidia calls Fast Memory (most probably 496GB of LPDDR5X).
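That assumed split is at least internally consistent with the one figure Nvidia has confirmed, as a quick check shows (the 288GB/496GB breakdown is our assumption, not an official spec):

```python
# Sanity check on the assumed memory split for the DGX Station's GB300.
# Only the 784GB unified total is confirmed by Nvidia; the breakdown
# below is the article's educated guess.
hbm3e_gb = 288    # GPU-attached HBM3e (assumed)
lpddr5x_gb = 496  # CPU-attached "Fast Memory", most probably LPDDR5X (assumed)

unified_gb = hbm3e_gb + lpddr5x_gb
print(unified_gb)  # 784 - matches the advertised 784GB of unified memory
```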
The original DGX Station (Image credit: Nvidia)

Fastest NIC in a computer
Nvidia also disclosed that the DGX Station will use its proprietary ConnectX-8 SuperNIC, a networking technology that can deliver a staggering, data center-class 800Gb/s of connectivity.
A close-up of the opened chassis shows the workstation has three forward-facing 120mm fans, a motherboard with three PCIe slots, an Nvidia-branded soldered chip (perhaps the SuperNic), and two large uncovered dies.
One is the Grace CPU; the other, with eight distinct tiles, is the Blackwell GPU.
No details about expansion or storage capabilities, the PSU capacity, the cooling solution used or the price have been revealed.
Additionally, we do not know whether you will be able to plug in accelerator cards like the H200 NVL (or a theoretical B300 NVL) to significantly improve the performance of the DGX Station.
The DGX Station is expected to compete with the likes of the Camino Grando, a tower workstation that packs two AMD EPYC CPUs and up to eight GPUs.
We’ll strive to update this article when further details of this workstation PC (including pricing and availability) are published.
Nvidia’s GTC Keynote also saw the formal launch of DGX Spark, formerly known as Project Digits, 12 new professional GPUs and Blackwell Ultra (or GB300), Nvidia’s most powerful GPU ever.
We're at that stage in the month where we're having to prepare ourselves for the next wave of movies that Hulu will be removing from its catalog.
But it looks like April is shaping up to be frightfully different compared to Hulu's previous lists. It's hard to say for sure, but I think this is the shortest 'leaving' list I've ever seen – and that's across all the best streaming services, not just Hulu.
A mere nine titles are getting the chop from Hulu this month, including seven movies and two documentaries, so it's a pleasure for me to say that Hulu's best movies and best TV shows have gained an extra life with their time on the platform. Have I heard of any of them? Never in my life – that's when you know you're not missing out on much.
But what's more exciting are the brand new movies and shows that are being added to Hulu in April – indeed, we're most excited for the arrival of the sixth and final season of The Handmaid's Tale, even if we must say our final goodbyes.
Everything leaving Hulu in April 2025
Leaving on April 6
Agnes (movie)
Leaving on April 13
She Will (movie)
Leaving on April 16
Toni Morrison: The Pieces I Am (documentary)
Leaving on April 20
Totally Under Control (documentary)
Leaving on April 24
The Good Neighbor (movie)
Leaving on April 27
Resurrection (movie)
Leaving on April 30
After Everything (movie)
Code Name Banshee (movie)
Stars Fell Again (movie)
Nvidia has taken the wraps off its latest Blackwell Ultra flagship GPU hardware as it looks to assert its place as the global leader in AI computing.
Revealed at its Nvidia GTC event in San Jose, the new Blackwell Ultra hardware offers more power and efficiency in the data center than previous generations as the company looks to establish itself as the backbone of future AI development and deployment.
Declaring it to be "built for the age of AI reasoning", Nvidia says Blackwell Ultra can help democratize AI adoption across the world, making it possible for more organizations to enjoy the benefits such compute can bring.
Nvidia Blackwell Ultra
Nvidia noted that the rapid growth of AI use cases around the world in the past few years has led to huge demand for compute, as businesses and consumers alike clamour for more.
“AI has made a giant leap — reasoning and agentic AI demand orders of magnitude more computing performance,” said Jensen Huang, founder and CEO of NVIDIA.
“We designed Blackwell Ultra for this moment — it’s a single versatile platform that can easily and efficiently do pretraining, post-training and reasoning AI inference.”
The increase in reasoning models in particular has driven a boom in demand for a full-stack offering that is cost-effective but also gets the job done.
Built on the initial Blackwell architecture unveiled at GTC 2024, Blackwell Ultra will offer 1.5x more FP4 inference performance, and is set to be available in devices built by Nvidia partners in the second half of 2025.
It will be present in a host of new offerings from Nvidia, including the upgraded GB300 NVL72 rack, which it says offers improved energy efficiency and serviceability, with bandwidth speeds of up to 130Tb/s.
Nvidia RTX PRO 6000 Blackwell Server Edition
However, that wasn't all when it comes to new Blackwell hardware: the company also revealed the Nvidia RTX PRO 6000 Blackwell Server Edition, designed for enterprise workloads such as multimodal AI inference, immersive content creation and scientific computing.
Nvidia says the new release offers huge advances on the previous-generation Ada Lovelace architecture L40S GPU, providing up to 5x higher LLM inference throughput for agentic AI applications, nearly 7x faster genomics sequencing, 3.3x speedups for text-to-video generation, nearly 2x faster inference for recommender systems and over 2x speedups for rendering.
(Image credit: Nvidia)

Each RTX PRO 6000 comes with 96GB of ultrafast GDDR7 memory and supports multi-instance GPU, allowing it to be partitioned into as many as four fully isolated instances of 24GB each to run simultaneous AI and graphics workloads.
They can also be configured in high-density accelerated computing platforms for distributed inference workloads — or used to deliver virtual workstations using Nvidia vGPU software, for graphics-intensive applications or to power AI development.
"With the RTX PRO 6000 Blackwell Server Edition, enterprises across various sectors – including architecture, automotive, cloud services, financial services, game development, healthcare, manufacturing, media and entertainment and retail – can enable breakthrough performance for workloads such as multimodal generative AI, data analytics, engineering simulation, and visual computing," noted Nvidia's Sandeep Gupte.
Nvidia has taken the world a step closer to smart, humanoid robots with the launch of its latest advanced AI model.
At its Nvidia GTC 2025 event, the company revealed Isaac GR00T N1, which it says is, "the world’s first open Humanoid Robot foundation model", alongside several other important development tools.
Nvidia says its tools, which are available now, will make developing smarter and more functional robots easier than ever, along with allowing them to have more humanoid reasoning and skills - which doesn't sound terrifying at all.
Isaac GR00T N1
“The age of generalist robotics is here,” said Jensen Huang, founder and CEO of NVIDIA. “With NVIDIA Isaac GR00T N1 and new data-generation and robot-learning frameworks, robotics developers everywhere will open the next frontier in the age of AI.”
The company says its robotics work can help fill a shortfall of more than 50 million workers caused by a global labor shortage.
Nvidia says Isaac GR00T N1, which can be trained on real or synthetic data, can "easily" master tasks such as grasping, moving objects with one or both arms, and passing items from one arm to the other – but it can also carry out multi-step tasks that combine a number of general skills.
The model is built on a dual-system architecture inspired by the principles of human cognition: “System 1” is a fast-thinking action model, mirroring human reflexes or intuition, while “System 2” is a slow-thinking model for "deliberate, methodical decision-making."
Powered by a vision language model, System 2 is able to consider and analyze its environment, and the instructions it was given, to plan actions - which are then translated by System 1 into precise, continuous robot movements.
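Nvidia hasn't published code for this design, but the division of labor it describes can be sketched as a simple control loop: a slow System 2 planner that turns observations and instructions into high-level steps, and a fast System 1 policy that turns each step into motor commands. Every name below is illustrative – none of it comes from the actual Isaac GR00T N1 SDK.

```python
# Illustrative sketch of the dual-system split Nvidia describes.
# Class/function names and the toy outputs are assumptions, not GR00T APIs.

def system2_plan(observation: str, instruction: str) -> list[str]:
    """Slow, deliberate planner (a vision-language model in GR00T N1).
    Here it just returns a canned plan for demonstration."""
    return ["locate object", "grasp object", "move object to target"]

def system1_act(step: str) -> list[float]:
    """Fast, reflexive action model: maps one high-level step to a
    (toy) vector of continuous joint commands."""
    return [0.0] * 6  # placeholder 6-DOF motor command

def control_loop(observation: str, instruction: str) -> list[list[float]]:
    plan = system2_plan(observation, instruction)  # System 2: plan once
    return [system1_act(step) for step in plan]    # System 1: act per step

commands = control_loop("camera frame", "put the cup on the shelf")
print(len(commands))  # one motor command vector per planned step
```

The point of the split is that the expensive reasoning model runs rarely, while the cheap reactive policy runs at control-loop rates.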
Among the other tools being released are a range of simulation frameworks and blueprints, such as the NVIDIA Isaac GR00T Blueprint for generating synthetic data. These help produce the large, detailed synthetic data sets needed for robot development, which would be prohibitively expensive to gather in real life.
There is also Newton, an open source physics engine, created alongside Google DeepMind and Disney Research, which Nvidia says is purpose-built for developing robots.
Huang was joined on stage by Star Wars-inspired BDX droids during his GTC keynote, showing the possibilities of the technology in theme parks or other entertainment locations.
Nvidia first launched Project GR00T ("Generalist Robot 00 Technology") at GTC 2024, primarily focusing on industrial use cases. The aim was robots that could learn and become smarter by watching human behaviour, understanding natural language and emulating movements, allowing them to quickly pick up coordination, dexterity and other skills in order to navigate, adapt and interact with the real world.
Amazon is turning off the ability to process voice requests locally. It's a seemingly major privacy pivot and one that some Alexa users might not appreciate. However, this change affects exactly three Echo devices and only if you actively enabled Do Not Send Voice Recordings in the Alexa app settings.
Right. It's potentially not that big of a deal and, to be fair, the level of artificial intelligence Alexa+ is promising, let alone the models it'll be using, all but precludes local processing. It's pretty much what Daniel Rausch, Amazon's VP of Alexa and Echo, told us when he explained that these queries would be encrypted, sent to the cloud, and then processed by Amazon's and partner Anthropic's AI models at servers far, far away.
That's what's happening, but let's unpack the general freakout.
After Amazon sent an email to customers – seemingly only those who own an Echo Dot 4, Echo Show 10 (3rd Gen), or Echo Show 15 – saying that the option to have Alexa voice queries processed on device would end on March 28, some in the media cried foul.
They had a point: Amazon didn't have the best track record when it comes to protecting your privacy. In 2019, there were reports of Amazon employees listening to customer recordings. Later, there were concerns that Amazon might hold onto recordings of, say, you yelling at Alexa because it didn't play the right song.
Amazon has since cleaned up its data act with encryption and, with this latest update, promises to delete your recordings from its servers.
A change for the few (Image credit: Future)

This latest change, though, sounded like a step back because it takes away a consumer control, one that some might've been using to keep their voice data off Amazon's servers.
However, the vast majority of Echo devices out there aren't even capable of on-device voice processing, which is why most of them didn't even have this control.
A few years ago, Amazon published a technical paper on its efforts to bring "On-device speech processing" to Echo devices. They were doing so to put "processing on the edge," and reduce latency and bandwidth consumption.
Turns out it wasn't easy – Amazon described it as a massive undertaking. The goal was to put automatic speech recognition, whisper detection, and speech identification locally on a tiny, relatively low-powered smart speaker system. Quite a trick, considering that in the cloud, each process ran "on separate server nodes with their own powerful processors."
The paper goes into significant detail, but suffice it to say that Amazon developers used a lot of compression to get Alexa's relatively small AI models to work on local hardware.
It was always the cloud
In the end, the on-device audio processing was only available on those three Echo models, but there is a wrinkle here.
The specific feature Amazon is disabling, "Do Not Send Voice Recordings," never precluded your prompts from being handled in the Amazon cloud.
The processing power in these few Echos was never meant to handle the full Alexa query locally. Instead, the silicon was used to recognize the wake word ("Alexa"), record the voice prompt, use speech recognition to make a text transcription of the prompt, and send that text to Amazon's cloud, where the AI acts on it and sends a response.
The local audio is then deleted.
Big models need cloud-based power (Image credit: Amazon)

Granted, this is likely how everyone would want their Echo and Alexa experience to work. Amazon gets the text it needs but not the audio.
But that's not how the Alexa experience works for most Echo owners. I don't know how many people own those particular Echo models, but there are almost two dozen different Echo devices, and this affects just three of them.
Even if those are the most popular Echos, the change only affects people who dug into Alexa settings to enable "Do Not Send Voice Recordings." Most consumers are not making those kinds of adjustments.
This brings us back to why Amazon is doing this. Alexa+ is a far smarter and more powerful AI with generative, conversational capabilities. Its ability to understand your intentions may hinge not only on what you say, but your tone of voice.
It's true that even though your voice data will be encrypted in transit, it surely has to be decrypted in the cloud for Alexa's various models to interpret and act on it. Amazon is promising safety and security, and to be fair, when you talk to ChatGPT Voice and Gemini Live, their cloud systems are listening to your voice, too.
When we asked Amazon about the change, here's what they told us:
“The Alexa experience is designed to protect our customers’ privacy and keep their data secure, and that’s not changing. We’re focusing on the privacy tools and controls that our customers use most and work well with generative AI experiences that rely on the processing power of Amazon’s secure cloud. Customers can continue to choose from a robust set of tools and controls, including the option to not save their voice recordings at all. We’ll continue learning from customer feedback, and building privacy features on their behalf.”
For as long as the most impactful models remain too big for local hardware, this will be the reality of our Generative AI experience. Amazon is simply falling into line in preparation for Alexa+.
It's not great news, but also not the disaster and privacy and data safety nightmare it's made out to be.
CDs and DVDs are (or rather were) great for storage, but they can be affected by disc rot, a form of physical deterioration that affects optical discs, causing them to become unreadable over time due to corrosion or damage to the reflective layer. If you have a lot of important data stored on discs and are worried about losing it, TeamGroup has a solution to your fears – the D500R ISD WORM SD card.
Shown off at Embedded World 2025 (where it won the Embedded Vision category's top honor and the 2025 Community Choice Award), the D500R WORM card uses Write Once, Read Many technology for non-erasable, tamper-proof, long-term data retention storage.
The D500R ISD uses MLC NAND Flash, which offers better durability and steady performance, and promises read and write speeds of up to 70MB/s and 65MB/s. It supports capacities from 8GB to 256GB. That largest card is big enough to store data from around 374 CDs, so it can likely back up all your discs easily.
Seals the data in place
As you can tell from the description, the card can be read many times, but only written to once. Via a combination of hardware design and firmware control, the D500R ISD seals the data in place and ensures it can’t be deleted, overwritten, or updated. This makes it useful for users who require lasting data protection without the risk of accidental changes or even malicious tampering from ransomware.
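TeamGroup hasn't detailed how its firmware enforces this, but write-once-read-many semantics are easy to illustrate: once a block is written, any further write to it is rejected. A minimal, hypothetical model:

```python
# Minimal model of WORM (Write Once, Read Many) semantics as described
# above. This is an illustration of the behavior, not TeamGroup's firmware.

class WormStore:
    def __init__(self) -> None:
        self._blocks: dict[int, bytes] = {}

    def write(self, block_id: int, data: bytes) -> None:
        if block_id in self._blocks:
            # Firmware-enforced rule: a written block is sealed forever.
            raise PermissionError(f"block {block_id} is already written")
        self._blocks[block_id] = data

    def read(self, block_id: int) -> bytes:
        return self._blocks[block_id]  # reads are always allowed

card = WormStore()
card.write(0, b"family photos")
print(card.read(0))  # b'family photos' - readable as many times as you like
try:
    card.write(0, b"ransomware payload")
except PermissionError as err:
    print("rejected:", err)  # the overwrite attempt is refused
```

Because the rejection happens below the filesystem, even malware with full OS privileges can't alter sealed data – which is the ransomware-resistance claim in a nutshell.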
The card includes features like Power Fail Management, which helps preserve data in the event of an unexpected power cut, and Bad Block Management, which detects and isolates faulty storage sectors, helping extend the product’s lifespan.
It also has a built-in S.M.A.R.T. system that tracks usage and sends alerts if an attempt is made to alter or delete the data in some way.
Built to withstand demanding environments, it meets multiple military-grade shock and vibration standards and is rated IP58 for dust and water resistance.
TeamGroup suggests it could be useful for law enforcement footage, medical records, or financial archives, but it could be used for backing up family photos and videos or company data.
While it’s clearly a great solution for long-term archival storage, it’s probably worth noting that the D500R WORM card is a lot easier to lose than a box of CDs or DVDs, so you might want to think about where you’ll keep it. There’s also no word on pricing yet, but it's fair to assume it won’t be cheap.
Saturday Night Live is one of the most iconic pieces of programming in TV history, and it’s currently in its 50th season. To celebrate that achievement, NBC and showrunners including Lorne Michaels planned, produced, and executed – all live – the SNL 50th Anniversary Special, which aired a few weeks back on February 16, 2025.
Rather than firing up a classic TV antenna or opting for my Hulu live TV subscription, I decided to open up Peacock, using an Apple TV 4K – one of the best streaming devices – to ultimately watch the anniversary special. It’s not the first time I’ve turned to the NBC-owned streaming service to watch a live event, that honor goes to the Summer 2024 Paris Olympics, in which I also used the multiview functionality to watch multiple competitions at once.
This time around, I was greeted with an SNL 50 overlay and could jump right into the live broadcast of the show with a single click. For those wondering, I hopped in about midway through the opening with Paul Simon and Sabrina Carpenter.
(Image credit: Peacock)

The stream began without a hitch and looked pretty sharp, if I do say so myself. I also didn’t encounter stutters or a stoppage in the stream, and I didn’t even see commercials – rather, I was presented with text along the lines of “the SNL 50th Anniversary will resume soon”, with the theme song playing on a loop and various glam shots of the acclaimed 8H studio where the show tapes.
It was a nice experience for a live-stream event, an easy way to watch the 50th special without the need for linear TV, and it went off pretty much without a hitch – something the linear broadcast, for some viewers, didn’t, as the special ran over its time slot. Even so, 14.8 million viewers across Peacock and NBC (linear TV) watched the special either live or within the same day. That makes it NBC’s most-watched primetime event in the last five years, besting the Golden Globes.
Peacock has a sort of special sauce in the streaming service sector – yes, it’s a relatively new offering, having premiered in 2020, but it’s backed by NBC, which has been in the business of live TV and real-time events for years. The streaming service also knows how to dress up its backlog of content with other elements and live events. Back at the Paris Olympics, we saw the ability to watch each individual category, a few of them together in multiview, and a trove of new content, including Snoop Dogg at the games and a show hosted by Alex Cooper.
The SNL 50 takeover was a similar feat, which had a goal of letting fans reach the content they wanted – be it all the episodes from the 50th season or previous ones, specially-released documentaries, or even the SNL 50 Homecoming Concert, which also streamed live on Peacock. We’ll also see a similar takeover come March 21, 2025, when Wicked hits Peacock and begins streaming for all subscribers of the service.
Building a platform that supports the highest levels of performance
Speaking to TechRadar, Patrick Miceli, CTO of global streaming and the NBCU Media Group, said that since Peacock’s inception, “we’ve invested in building a platform and technical infrastructure that supports the highest levels of performance, security, and reliability to deliver a seamless and incomparable experience for our customers.”
(Image credit: Theo Wargo/NBC)

Miceli further noted that even with the complexities of live-streaming, Peacock has “achieved a 100% success rate in video quality and reliability on our NFL and Olympics coverage last year”. This comes as other services, including Netflix with the Logan Paul vs Mike Tyson fight, have struggled to keep streams afloat under heavy audience demand.
Suffice it to say, Peacock and NBC at large felt confident going into the weekend, with the Olympics, NFL games, and other events, including the Macy’s Thanksgiving Day Parade, already in the history books. Still, Miceli shared, “we do still spend a lot of time preparing for each event, accounting for a variety of different scenarios to ensure we’re ready for anything”.
I was impressed with Peacock’s performance, and countless others on social media said the same using the hashtag #SNL50. Given the performance here, I’m keen to see what kind of package Peacock curates for future specials, live events like the next Olympics, and NBA games, which will hit the streaming service in 2025.
Of course, even though it’s not live, I’ll still head to Peacock whenever I need my Bravo fix like Summer House, reruns of Vanderpump Rules, or a rewatch of a timeless classic from an SNL alum, like Amy Poehler's Parks and Recreation.
We're nearly there, folks. The final episode of Severance season 2 is almost upon us – and, to celebrate (or should that be commiserate?), Apple has released an incredibly brief teaser for the forthcoming finale.
And when I say brief, I really mean it. The teaser, if it can even be called that, is a five-second clip that shows a white goat being wheeled down one of Lumon's sterile hallways on a trolley. The little guy bleats a couple of times while he's being taken... somewhere, and they don't sound like happy bleats to me. If anything happens to him, Lumon, my 'innie' and my 'outie' will come for you!
"Season finale Friday. #Severance" (via X, March 18, 2025)
The 10th and final installment of Severance's second season is one of the most hotly anticipated episodes of the year so far. Indeed, the sci-fi mystery thriller series is now considered to be an even more popular Apple TV Original than its sibling Ted Lasso. Understandably, then, excitement is slowly building ahead of its debut on Apple TV+ later this week.
Many fans, myself included, will be hoping that season 2 episode 10 finally answers some of our biggest questions about the show, too. I've spent the last nine weeks poring over every one of season 2's episodes, coming up with new theories about this engrossing alternate reality to our own, and speculating on where the story will go next.
While I don't agree with some viewers that season 2's pacing has been off, I am ready for the show's creative team to pull back the veil on some of Severance's biggest mysteries. That includes why Lumon's Mammalian Nurturables division has been hand-rearing goats and what the nefarious biotech company is going to use them for. Fans have come up with plenty of theories about the goats, and I'm curious to see whether any of these five intriguing theories will be proven correct.
Our last check-in with Lumon's goats took place in season 2 episode 3 (Image credit: Apple TV+)

Anyway, with the next episode of one of the best Apple TV+ shows set to arrive on March 20 (US) and March 21 (UK and Australia), we've got a few more days – at the time of publication – to wait for its release.
While we do so, read up on my latest Severance theories in my season 2 episode 9 recap piece, or read the section below for more of my coverage around the show in general.
Quantum computing is widely seen as the next major leap in technology, but it also poses a security threat, as it could break the encryption systems that protect everything from online banking to government data.
The idea that vast amounts of sensitive information could be cracked in seconds by a future quantum machine is understandably a big concern, and even printers could be at risk. To address this issue, HP has announced what it calls the world’s first printers designed to protect against such attacks.
The new 8000 Series of A3 printers, including the HP Color LaserJet Enterprise MFP 8801, Mono MFP 8601, and LaserJet Pro Mono SFP 8501 (which can output up to 70ppm), include updated ASIC chips designed with quantum-resistant cryptography, which HP says also allow digital signature verification for firmware protection.
Updated ASIC chips
“Without quantum resilience, a printer facing a quantum attack at the firmware level would be fully exposed through malicious firmware updates, giving the attacker stealthy, persistent and total control of the device,” the company said.
HP also claims the hardware is designed to secure the BIOS and early-stage firmware, limiting the risk of manipulation through fake updates. The new models take a zero trust approach to security too, helping companies manage their print fleets more securely.
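HP hasn't said which algorithms its new ASICs use, but hash-based signatures – the family standardized for exactly this firmware-signing use case in NIST SP 800-208 – are a common quantum-resistant choice, because their security rests only on hash preimage resistance, which quantum computers don't meaningfully break. As a sketch of the idea (not HP's implementation), here is a minimal Lamport one-time signature, the simplest member of that family:

```python
# Minimal Lamport one-time signature: an illustrative example of
# hash-based (quantum-resistant) firmware signing, NOT HP's actual scheme.
import hashlib
import os

def keygen():
    # Two random secrets per bit of a 256-bit digest; pubkey = their hashes.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def bits(msg: bytes) -> list[int]:
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one of the two secrets per bit, chosen by the message digest.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    # Each revealed secret must hash to the matching public-key value.
    return all(hashlib.sha256(s).digest() == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
firmware = b"printer firmware image v1.2"
sig = sign(sk, firmware)
print(verify(pk, firmware, sig))           # True: genuine image accepted
print(verify(pk, b"tampered image", sig))  # False: modified image rejected
```

A malicious firmware update fails verification because its digest selects different secrets than the ones the signer revealed. (Real deployments use stateful many-time variants like LMS/XMSS rather than raw one-time Lamport keys.)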
There's no word on pricing or availability of the new printers yet, but this move follows HP’s earlier introduction of business PCs with firmware protection against quantum threats. The company said it plans to apply quantum-resistant cryptographic algorithms across both its PC and printer product lines.
HP also notes because many print contracts run for several years, businesses should consider this timeline in their next purchasing decisions to avoid future compliance gaps.
While quantum computing is still some way off achieving its full potential, the likes of Google and Microsoft have made a number of breakthroughs in the field.
Most recently, Microsoft took the wraps off Majorana 1, its first-of-its-kind quantum chip with topological core architecture.
After years of leaks, occasional singles and rare live appearances, rap phenomenon Playboi Carti has released his third studio album. Simply titled MUSIC, the record set the internet and fan circles ablaze when it appeared on streaming services on March 14 – but less than a week on, keen-eared fans suspect something’s not quite right.
To say the impact of MUSIC has been massive so far would be an understatement. Opening track POP OUT has already racked up more than 10 million streams on Spotify, and the album ropes in collaborators from the scene’s highest echelon – legends from Carti’s hometown of Atlanta including Future and Young Thug, pop superstars such as The Weeknd, plus of course rap heavyweight Kendrick Lamar, who checks in for not one, but three guest appearances.
This didn’t stop fans from scrutinizing every second of the album, though – quite the opposite. With so few releases and appearances in the past few years, Carti’s massive fanbase took to the new record like fresh water in the desert, ripping through its 30 tracks and immediately posting reactions and discussions to online platforms like Reddit.
Carti – real name Jordan Carter – is no stranger to controversy, but the release of MUSIC sparked an altogether new debate. Fans began to suspect the presence of AI vocals throughout the long-awaited album; specifically alleging Carti used AI to mimic his own voice, with some suggesting that The Weeknd’s feature verses may have been altered too.
To clarify, I mostly found Reddit threads debating whether two tracks on the album (RATHER LIE and FINE ****) utilize AI, with commenters on both sides of the argument. “Genuinely pathetic,” says user dat_grue, while user whatsongisdat says “who cares it’s mid either way”.
Whatever you make of it, using AI is a pretty stark allegation for an artist whose only instrument is his voice. With so much of rap music’s history based in the authenticity and honesty of individual MCs and with Carti taking so long to supposedly work on this new record, any suggestion of using AI to take a shortcut was bound to cause an uproar.
Playboi Carti has been a mainstream staple for almost a decade (Image credit: Wojciech Pędzich / Wikimedia Commons / Creative Commons 4.0 Attribution / Cropped and resized)

RATHER LIE?
Now, I’ve been pretty vocal about my distaste for AI, especially generative AI. I find it totally backwards to hand creative work – perhaps the defining ability of humanity – to totally automated processes.
I’m not saying AI can’t be used as a tool for legitimate artistry. Stem separation, where songs are split into their instrument parts with extremely reactive and sophisticated EQ and volume changes for more streamlined sampling, is one great example. But as a replacement? No thanks. If these rumors turn out to be true, I’d feel let down that one of the most popular artists in the world, one with major label backing, has been allowed to take the easy route.
With that all said, it’s not really the potential inclusion of AI that truly unsettles me in this case – it’s that I don’t think I’d be able to tell the difference.
And right now, no artist (or producer) is obliged to disclose whether what we're hearing was ever actually uttered from our beloved artist's mouths, or played by their hands.
AI text, image, and video generators have worked their way into public view over the past few years – Google Gemini, ChatGPT, Stable Diffusion and DALL·E are fairly well known examples. However, the presence of AI in music is increasingly part of the conversation – and part of production – even if the technologies behind it are substantially less well known.
In August 2024, I came across a new single from dance music producer Dan Snaith under his Caribou alias titled Volume, a rework of the iconic house track Pump Up The Volume by MARRS. What I didn’t know beforehand, or indeed realise until I read it elsewhere, was that the female vocal that glides through the background of Caribou’s version is, in fact, Dan Snaith singing into an AI processor.
I was stunned – firstly by how natural everything sounded, and then by the fact that, despite considering myself a tech-savvy music lover, I hadn’t sensed something was even different about this vocal.
It’s a similar story with MUSIC. When I first came across these AI accusations, I did a mental scan of the album and found… nothing. Even now, nothing seems particularly out of place against Carti’s characteristically chaotic delivery and already machinic production. I just wouldn’t be able to tell.
Caribou's Dan Snaith on stage with the project's live band (Image credit: Shutterstock / Christian Bertrand)

GOOD CREDIT
At this point, you may be asking why people should care – whether or not AI is used on MUSIC, the end result probably sounds the same. However, I think forgiving its use so easily could set a dangerous precedent.
As my TechRadar colleague Rowan Davies recently reported, a coalition of more than 1,000 musicians released an album consisting of silence and studio ambience to protest the growing threat AI poses to music and the music industry.
The silent album, titled Is This What We Want?, protests proposed UK legislation that would allow AI developers to appropriate copyrighted music for the sake of training AI, effectively bypassing the rights of the musicians to be reimbursed and recognized for their contribution to the eventual output.
Enmeshing AI with the music itself would make it a whole lot harder for the music industry to resist the rising tide of consumption and redistribution that happens when AI takes from existing material. If you ask me, AI cannot be inspired – it lacks the human capacity to create and therefore cannot be considered to make anything legitimately new. As such, involving AI in the creation of music is damaging not only to musicians’ already tight pockets, but to their role in society too.
At the very least, I’d like to see some kind of mandatory warning label on streaming services, like the “E” icon that marks songs with explicit language, to denote the use of AI in a track. This would still require self-reporting from artists and labels, but it’s the best I can imagine platforms like Spotify and Apple Music having the capacity to implement.
I’m expecting Playboi Carti to scoop up plenty of great reviews and possibly some awards for MUSIC. To me and others raised on SoundCloud rap and the mid-pandemic hyperpop explosion, it’s the first genuinely monolithic record of the year. I just hope these rumors of AI vocals are proven to be just that – rumors and nothing more – for the sake of the music industry at large.
Zoom has announced the next step in its agentic AI strategy with a range of new skills, agents and models coming to Zoom Workplace and Zoom Business Services.
The company said the new releases will help users be more productive and strengthen their customer relationships with the help of more autonomous AI tools.
With the bold claim that over 45 new innovations have made it to Zoom’s portfolio, its agentic AI will cover apps like Zoom Meetings, Zoom Phone, Zoom Team Chat, Zoom Docs and Zoom Contact Center.
Zoom AI Companion gets an agentic boost

The platform’s AI Companion, which gets a 2.0 version launch, will continue to serve as the vehicle for Zoom to implement agentic AI across its portfolio, just with helpful enhancements like reasoning and memory for decision-making, problem solving and learning.
Zoom CPO Smita Hashim summarized: “AI Companion is evolving from a personal assistant to being truly agentic.”
Besides the usual task action and orchestration functionalities of agentic AI, Zoom will also help workers manage their calendars and generate content.
Later this spring, users will also be able to create and deploy customizable virtual agents that can hold more contextual customer conversations and take action. Third-party agents such as the ServiceNow Now Assist agent and custom agents are also set to be supported “soon.”
Agentic AI promises major boosts for things like detecting action items in meeting summaries with Zoom Tasks and extracting tasks from calls using the Zoom for Microsoft Teams app. Coming later in May, we’re also promised agentic AI for creating meeting agendas and also real-time summarization of meetings and phone calls.
It’s not just worker productivity that’s set for improvements, though, because AI Companion for Workspace Reservation will recommend which days employees should go to the office based on scheduled meetings and teammates’ scheduled in-office days, with the agentic portion of Zoom’s AI proactively booking desks or Zoom Rooms.
“We’re delivering value for our customers through AI agents and agentic skills that solve real customer problems, helping them connect, collaborate, and get more done, all within the Zoom platform our users trust and love,” Hashim added.
It's a great time to be a Hulu subscriber, and though the weather may be getting better as spring starts to blossom, that doesn't mean you have to spend every waking minute outside. I'd say you're entitled to a few evenings spent indoors bingeing the best titles on the best streaming services, particularly on Hulu.
Although, it has to be said that April looks to be a bittersweet month for Hulu's new arrivals. On the one hand, the new roster of fresh movies packs yet another powerful punch of personal favorites that rank among the best Hulu movies already on the platform, including Darren Aronofsky's Black Swan (2010), Wes Anderson's The Royal Tenenbaums (2001) and Steven Spielberg's Jurassic Park (1993).
However, on the other hand, April marks the end of one of the best Hulu shows as we will unfortunately have to bid farewell to Elisabeth Moss in The Handmaid's Tale season 6 when the final season premieres on April 8. It's been a long ride, but even the best TV shows have to come to an end at some point – at least its sequel series The Testaments is in the works!
So while you prepare yourself for a climactic final season, take this as your sign to scour through Hulu's April schedule for your next TV obsession. Take it from me, there's plenty to choose from – you just have to know where to look.
Everything new on Hulu in April 2025Arriving on April 1
Arrival (movie)
Arrival (En Espanol) (movie)
The Best Exotic Marigold Hotel (movie)
Black Swan (movie)
Boys on the Side (movie)
Concussion (movie)
Concussion (En Espanol) (movie)
Copycat (movie)
Enough Said (movie)
The Equalizer (movie)
The Equalizer (En Espanol) (movie)
Gifted (movie)
The Good Thief (movie)
Gone Girl (movie)
Gulliver's Travels (movie)
The History of the World Part I (movie)
I Heart Huckabees (movie)
Interstellar (movie)
Interstellar (En Espanol) (movie)
Jumanji (movie)
Jumanji (En Espanol) (movie)
Jurassic Park (movie)
Jurassic Park III (movie)
The Karate Kid (movie)
The Karate Kid (En Espanol) (movie)
The Karate Kid Part II (movie)
The Karate Kid Part II (En Espanol) (movie)
The Karate Kid Part III (movie)
The Karate Kid Part III (En Espanol) (movie)
Little Man (movie)
Little Man (En Espanol) (movie)
The Lost World: Jurassic Park (movie)
Made in America (movie)
Me, Myself and Irene (movie)
Mrs. Doubtfire (movie)
Oddity (movie)
Red Sparrow (movie)
The Revenant (movie)
Runaway Jury (movie)
Sexy Beast (movie)
Shark Tale (movie)
The Spy Who Dumped Me (movie)
Superbad (movie)
Superbad (En Espanol) (movie)
Tombstone (movie)
True Story (movie)
21 Jump Street (movie)
22 Jump Street (movie)
Wall Street (movie)
Wall Street: Money Never Sleeps (movie)
War of the Worlds (movie)
Widows (movie)
Wild (movie)
The Wolf of Wall Street (movie)
The Wolf Of Wall Street (En Espanol) (movie)
Year One (movie)
You Will Meet A Tall Dark Stranger (movie)
Arriving on April 2
Beyblade X season 1 (TV show)
Arriving on April 3
Oklahoma City Bombing: One Day in America (TV show)
Arriving on April 4
Dying for Sex (TV show)
Fire Force season 3 (TV show)
Classified (movie)
The Darjeeling Limited (movie)
Fantastic Mr. Fox (movie)
The Life Aquatic With Steve Zissou (movie)
The Royal Tenenbaums (movie)
Rushmore (movie)
Arriving on April 5
American Monster season 3 (TV show)
Bering Sea Gold season 3 (TV show)
Diners, Drive-Ins, and Dives seasons 1-2 (TV show)
I Love A Mama's Boy season 2 (TV show)
The World According to Allee Willis (documentary)
Arriving on April 6
Witch Watch (TV show)
Arriving on April 8
The Handmaid's Tale season 6 (TV show)
Small Things Like These (movie)
Arriving on April 9
Angels & Demons (movie)
The Da Vinci Code (movie)
Arriving on April 10
Court Cam season 7 (TV show)
Houses of Horror: Secrets of College Greek Life season 1 (TV show)
Ca$h (movie)
Hesher (movie)
Niko: Beyond the Northern Lights (movie)
Red Dog (movie)
So Undercover (movie)
Spun (movie)
Arriving on April 11
Got to Get Out (TV show)
Garfield (movie)
Garfield: A Tail Of Two Kitties (movie)
Magpie (movie)
Arriving on April 12
Fixer Upper season 5 (TV show)
MythBusters season 5 (TV show)
The Family Chantel season 4 (TV show)
Arriving on April 15
Lake George (movie)
Arriving on April 16
No Man's Land season 2 (TV show)
Synduality Noir season 1 (TV show)
The Curious Case of Natalia Grace season 3 (TV show)
Arriving on April 17
The Stolen Girl (TV show)
Bible Secrets Revealed season 1 (TV show)
Gangland Chronicles season 1 (TV show)
Leah Remini: Scientology and the Aftermath seasons 1 & 2 (TV show)
Martin Short season 1 (TV show)
The Girl Who Wasn't Dead (movie)
Arriving on April 18
The Order (movie)
Arriving on April 19
Breaking Amish season 4 (TV show)
Disappeared season 6 (TV show)
Gypsy Sisters season 3 (TV show)
Moonshiners season 13 (TV show)
Arriving on April 21
Secrets of the Penguins (TV show)
No Hard Feelings (movie)
No Hard Feelings (En Espanol) (movie)
Arriving on April 22
In a Violent Nature (movie)
Arriving on April 24
Airline Wars season 1 (TV show)
Customer Wars season 4 (TV show)
Tell Me How I Died season 1 (TV show)
Tiny House World season 1 (TV show)
Husband, Father, Killer: The Alyssa Pladl Story (movie)
Arriving on April 25
Jessica Kirson: I'm the Man (TV show)
Azrael (movie)
Arriving on April 26
Chopped season 60 (TV show)
Four Weddings season 9 (TV show)
House Hunters Renovation season 16 (TV show)
Jessica Chambers: An ID Murder Mystery season 1 (TV show)
Arriving on April 29
Ernest Cole: Lost and Found (documentary)
Canon's twin launch of the EOS R1 and EOS R5 Mark II stole 2024's headlines, with the latter camera winning TechRadar's camera of the year award. However, it's believed that the mid-range (and more affordable) EOS R6 Mark II remains Canon's most popular full-frame mirrorless camera, and that it could be updated soon with the Canon EOS R6 Mark III.
Canon Rumors initially touted an announcement for the end of 2024, which would have tallied with Canon's two-year cycle for EOS R6 cameras so far. However, here we are months later in March 2025 and there's still no sign of the third model, the potential EOS R6 Mark III, nor a rival Sony A7 IV successor for that matter – the rumored Sony A7 V, which is also taking longer than expected.
That leaves the Nikon Z6 III as the best full-frame mirrorless camera for most people. It might not hold that crown for long, however, if the latest EOS R6 Mark III launch rumors are accurate this time.
I've no doubt that a third model is in the pipeline, but I'm less convinced that it will come as soon as May – it's telling that there are so few leaked EOS R6 Mark III features. Still, that doesn't stop us from speculating about what features the next model could have, and what it needs to have to be a worthy EOS R6 Mark II upgrade and Nikon Z6 III rival.
Fully stacked: the latest stacked sensor type could be the single biggest upgrade for the EOS R6 Mark III. (Image credit: Sharmishta Sarkar / Future)

1. A new stacked sensor

While a number of EOS R6 / EOS R6 Mark II users would hope for a higher-resolution sensor in a third model, the more likely scenario outlined by Canon Rumors is that the sensor will remain a 24MP unit. However, it will be a 'stacked' sensor type, like the one we saw in the EOS R3 – Canon's previous flagship before the EOS R1.
A stacked sensor delivers faster readout speeds, which can in turn improve a camera's overall performance for burst shooting and autofocus, plus its handling of rolling shutter distortion. The Nikon Z6 III features a partially stacked 24MP sensor, whereas the Z6 II has a regular 24MP sensor.
These sensors don't come cheap, and if Canon decides to put one in the R6 Mark III it will likely have an impact on the camera's price. However, it's the logical upgrade for Canon to start with; and keeping resolution to 24MP would leave sensible breathing space between the EOS R6 Mark III and the next model up – the higher-resolution EOS R5 Mark II, which has a 45MP stacked sensor.
The twin card slots of the EOS R5 Mark II. Expect much the same in the EOS R6 Mark III. (Image credit: Future | Tim Coleman)

2. Improved speed

To fully utilize a faster stacked sensor, the EOS R6 Mark III is also going to need a new processor. The EOS R1 / EOS R5 Mark II introduced a DIGIC accelerator – it's much like a second processor that streamlines how files are processed, easing bottlenecking – and that same secondary processor will likely find its way into an EOS R6 Mark III.
A DIGIC accelerator could enable faster and longer burst-shooting sequences, plus better pre-capture shooting and various other speed-dependent features.
However, it'll be of no use to have a stacked sensor and a second processor if the camera still relies on SD cards to store files. No, the EOS R6 Mark III will need to accept the faster CFexpress Type B cards too, and I expect one slot for each of those card types.
I posed this basketball player before a match to register them in the EOS R1. Following that, the camera prioritized this player as the point of focus throughout the game. This tech could find its way into the EOS R6 Mark III. (Image credit: Future | Tim Coleman)

3. The latest autofocus

Again, it's pretty typical for Canon's latest autofocus system to trickle down from its flagship models all the way to its mid-range models, such as the EOS R6 series. To that end, I expect the EOS R6 Mark III to feature the same autofocus system as the EOS R1 / EOS R5 Mark II, which is pretty much the best in the business, especially if you photograph certain sports.
We've already covered just how sophisticated Canon's autofocus is in our in-depth reviews – some standout features include subject priority and sports priority modes. For the former, you can take a picture of a person and store it in the camera as a priority subject for the camera to focus on, such as the bride at a wedding. For the latter, users can select one of a number of sports, and the camera can assess where the key points of interest are based on the action, say the player kicking a ball.
What is unlikely to be inherited from Canon's flagship models is Eye Control AF, which works using a sensor in the viewfinder that knows where your eye is looking in the frame, and automatically adjusts the focus area to what has your attention.
Might we see an all-new screen in the EOS R6 Mark III that outdoes the one in the EOS R5 Mark II? (Image credit: Future | Tim Coleman)

4. A new multi-angle screen

I don't expect many improvements with regard to the body and handling of the EOS R6 Mark III. However, Canon Rumors says the latest camera could feature a multi-angle LCD screen much like the one on the Sony A9 III, which would be a first for Canon.
A multi-angle screen effectively has twin hinges, meaning you can flip the screen out from the body, and then spin it again. This allows for easy viewing from awkward angles when you're shooting in horizontal and vertical formats, where a single-hinge type would be limited to horizontal tilting.
Apparently, the screen itself could be a fancier OLED type too, rather than LCD. That would make for easier viewing in bright light, although the tech would further increase the cost. File that one in the unlikely category.
An example of how much bigger Canon's in-camera AI upscale editor makes images. (Image credit: Future | Tim Coleman)

5. In-camera AI editing

Another feature that debuted in last year's flagship models was in-camera AI editing. Such AI tools aren't just gimmicks – they can prove genuinely useful on the go. One such feature is upscaling – I wrote about my experience getting 400% bigger images with the EOS R5 Mark II. Another is denoising, to improve detail in low-light / high-ISO images.
I see no reason why these tools can't make their way into an EOS R6 Mark III if it features the upgrades already mentioned above. And a feature like upscaling, which could quadruple the camera's image size from 24MP to 96MP, arguably makes even more sense. The increasing sophistication of the tech really could put the megapixel race to bed.
Summary

I don't think we'll see a lot of completely new tech in the EOS R6 Mark III, whenever it finally launches. However, by inheriting a lot of EOS R3 / EOS R5 Mark II tech, it will be a healthy update of the EOS R6 Mark II, especially for users who need a speedier camera and improved autofocus accuracy.
We could see a faster stacked sensor, a twin-processor setup that includes the DIGIC accelerator, Canon's best-ever autofocus, CFexpress Type B card compatibility, a new screen, plus a host of other hidden features, wrapped in much the same body as before.
If all those upgrades are delivered, I struggle to see Canon pitching the EOS R6 Mark III for anything less than the EOS R6 Mark II's launch price, which puts it north of $3,000 / £3,000. With the Nikon Z6 III and Canon EOS R6 Mark II already heavily discounted to around 50% less than that, the consequence of EOS R6 Mark III delays is clear.
Cybercriminals are using CSS in emails to track their victims, learn more about them, and redirect them to phishing pages, experts have warned.
Cybersecurity researchers at Cisco Talos outlined how CSS (Cascading Style Sheets) is used in emails to control the design, layout, and formatting of email content. Businesses use it not only to make the emails look better, but also to keep the layout consistent across different email clients. There is nothing inherently malicious about CSS but, as is the case with many other legitimate tools, it is being abused in attacks.
"The features available in CSS allow attackers and spammers to track users' actions and preferences, even though several features related to dynamic content (e.g., JavaScript) are restricted in email clients compared to web browsers," a Cisco Talos researcher said in a report.
Advanced filtering techniques

Through CSS, cybercriminals can hide content in plain sight, thus bypassing email security solutions. They can also use it to redirect people to phishing pages, it was said. The tool can be used to monitor user behavior which, in turn, can lead to spear-phishing or fingerprinting attacks.
"This abuse can range from identifying recipients' font and color scheme preferences and client language to even tracking their actions (e.g., viewing or printing emails)," they said. "CSS provides a wide range of rules and properties that can help spammers and threat actors fingerprint users, their webmail or email client, and their system. For example, the media at-rule can detect certain attributes of a user's environment, including screen size, resolution, and color depth."
Cisco Talos said the new campaign builds upon a “hidden text ‘salting’” one they uncovered in late January 2025.
To tackle this threat, the researchers suggested IT teams adopt advanced filtering techniques that scan the structure of HTML emails, rather than just their contents. An email security solution could, thus, look for extreme use of inline styles or CSS properties such as “visibility: hidden”. Deploying AI-powered defenses is also recommended.
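To give a sense of what that kind of structural scan might look like, here is a minimal, hypothetical Python sketch (the property list and function names are my own, not Cisco Talos code) that parses an email's HTML and flags inline styles commonly used to hide "salted" text:

```python
from html.parser import HTMLParser

# Hypothetical watchlist of CSS declarations often abused to hide
# text in email bodies; a real filter would be far more extensive.
SUSPICIOUS_PROPERTIES = (
    "visibility: hidden",
    "display: none",
    "font-size: 0",
    "opacity: 0",
)

class CSSAbuseScanner(HTMLParser):
    """Collects (tag, property) pairs for suspicious inline styles."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "style" and value:
                # Normalize whitespace/case so "VISIBILITY:  hidden" matches
                style = value.lower().replace(" ", "")
                for prop in SUSPICIOUS_PROPERTIES:
                    if prop.replace(" ", "") in style:
                        self.findings.append((tag, prop))

def scan_email_html(html: str):
    scanner = CSSAbuseScanner()
    scanner.feed(html)
    return scanner.findings

email = '<p>Invoice attached.</p><span style="visibility: hidden">salted text</span>'
print(scan_email_html(email))  # [('span', 'visibility: hidden')]
```

A production filter would also need to inspect `<style>` blocks, linked stylesheets and media at-rules, but even a simple structural pass like this catches the "hidden text salting" trick described above.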
Via The Hacker News
HP has been shaking things up at its Amplify 2025 event, unveiling new AI laptops and changing brand names in a bid to make them easier for would-be buyers to understand. The company offers "i" or "a" versions of its AI laptops, so you can tell at a glance whether a device has an Intel or an AMD processor.
However, these aren’t the only players in town - and HP knows it. If you’re a highly mobile professional who needs to stay constantly connected and wants to offload time-consuming tasks to AI, HP has launched the EliteBook 6 G1 series laptop, which includes a 14-inch model with a 40–60 TOPS NPU that’s "purpose-built for knowledge creators to do more."
There are two EliteBook 6 Next Gen AI models to choose from: one with an AMD processor (G1a) and one with a Qualcomm processor (G1q), both in a 14-inch size. While Intel is offered in the standard AI version (less than 40 TOPS NPU), and in the higher-end EliteBook 8 G1 models, it’s not offered at all in the EliteBook 6 Next Gen AI lineup.
Choice of Snapdragons

The 14-inch EliteBook 6 G1q Next Gen AI PC is powered by a 45 TOPS Snapdragon X, X Plus, or X Elite processor. Memory configurations go up to 64GB LPDDR5X, with storage options ranging from 256GB to 1TB PCIe NVMe M.2 SSD. The 14-inch display comes in multiple options, including WUXGA (1920 x 1200) and WQXGA (2560 x 1600).
Other features of note include an FHD camera, with optional 5MP+IR or AI-enhanced cameras. Audio is delivered through dual stereo speakers by Poly Studio and dual microphones with AI noise reduction.
The laptop also includes a spill-resistant keyboard (optionally backlit with Durakeys) and a Microsoft Precision touchpad. Connectivity options include two USB4 Type-C ports, two USB-A ports, HDMI 2.1, audio jack, RJ45, and an optional Nano SIM. Wireless support includes Wi-Fi 7 or Wi-Fi 6E with Bluetooth 5.4 or 5.3, along with optional 5G WWAN.
Battery options are 48 or 56Wh. The laptop weighs around 1.45 kg and comes in any color you like, as long as it’s silver. Bundled software includes MyHP, HP AI Companion, Microsoft Copilot, Poly Camera Pro, and more.
There’s no word yet on pricing or availability for HP’s sole Qualcomm AI laptop.
We’re big fans of Framework’s modular laptops, which let you choose the components you want, replace or upgrade parts, and even add third-party custom modules, such as this drone destroyer.
It’s almost a surprise that other major laptop manufacturers haven’t followed a similar path, but HP appears to have cottoned on to this approach with its new EliteBook, unveiled today at Amplify 2025.
The enterprise-ready EliteBook 8 G1, with an AMD or Intel processor, is designed to be easily repaired or upgraded, with HP saying the battery, fans, SSD storage and SODIMM memory can be swapped out in under 10 minutes, and the wireless LAN and mobile broadband M.2 cards are also fully accessible and quickly replaced. If that’s not enough, the modular keyboard can also be removed and switched, and the self-aligning display does not require single-use jigs for replacement.
Redesigned inside and out

HP says the new EliteBook 8 G1 has been "redesigned inside and out," offering up to 224% better power efficiency. The Series 8 G1 PCs are also made using a diverse range of recycled materials, including glass, cooking oil, rare earth magnets, magnesium, aluminum and ocean-bound plastic.
There is a choice between next-gen AI PCs (with 40–60 TOPS NPUs) and regular AI PCs (with less than 40 TOPS NPUs). They come in G1a (AMD) or G1i (Intel) models, available in 13-inch (with soldered-in memory), 14-inch, and 16-inch sizes.
All feature a WUXGA (1920x1200) display, a redesigned thinner chassis, new Glacier Silver color, larger trackpad, fingerprint reader in the power button, dual Thunderbolt 4 ports, USB-C and USB-A options, HDMI 2.1, and support for Wi-Fi 6E, Bluetooth 5.3 or 5.4, and optional NFC and Smartcard reader. They support up to 64GB of RAM and up to 2TB of SSD storage.
There’s no word on pricing or availability for the new models yet, but we should know soon.
HP has launched a range of new AI laptops at its Amplify 2025 event with a host of new brand names.
The company's 800 Series is now EliteBook 8, and the 600 Series is now EliteBook 6 (both systems come in 13, 14, and 16-inch sizes).
The 400 Series is now ProBook 4 (in 14 and 16-inch sizes). ZBook Firefly is now the ZBook 8 G1i (Intel) and G1a (AMD), ZBook Power is now the ZBook X G1i, Elite Mini/SFF/Tower is now EliteDesk and Elite AiO is now EliteStudio. Clear?
Easy to understand

At the start of the year, Dell decided that the arrival of the AI era meant it was time for a fresh start, and in debuting a new Pro range of laptops, it waved goodbye to the beloved Latitude brand, which was first introduced back in 1994.
The ax swinging didn’t end there, though. The company also debuted a new Pro Max mobile workstation family, which replaced the 32-year-old Precision brand. Intel, which finally has a new CEO, has also been playing the name game recently to try to make things clearer for customers.
While HP’s rebranding isn’t anywhere near as major as Dell’s (it’s also worth noting that more people will be familiar with Dell’s Latitude and Precision brands than any of HP’s brands), it’s still another big change for consumers to get their heads around. At least it’s straightforward enough.
If you want an EliteBook 8 G1 Series laptop, you can differentiate between the models by size (13, 14, 16), processor (G1i for Intel or G1a for AMD), and Next Gen and regular AI models (which offer different TOPS). Once you know what’s what, you can quickly spot the difference between the HP EliteBook 8 G1a 13” Next Gen AI PC and the HP EliteBook 8 G1i 16” AI PC.
Is the rebranding totally necessary? That’s a matter of debate.
“We take a very conventional and pragmatic approach," Tom Butler, Executive Director of Commercial Portfolio and Product Management at Lenovo, recently told NoteBookCheck.
"There is a series name for our ThinkPad products, like T series, L series or E series, with screen sizes in the product name for clarity - T14, T16 and so on. In order to keep it logical and help people keep track, we also put a generational name after. At the moment, we are not making any changes in our direction. Lenovo has solid brand equity, as do our sub-brands like ThinkPad, ThinkBook or Lenovo Yoga for consumers.”
Nearly 10 years after the iconic Pebble smartwatch was discontinued, the watch is back with two new “Pebble-like” models sporting low-power screens and packing a 30-day battery life – designed by Pebble founder Eric Migicovsky.
The limited-run smartwatches will be available from July, running the open-source Pebble OS.
Core 2 Duo

(Image credit: Core Devices)

The first of the two watches is the Core 2 Duo, a device very similar to the old discontinued Pebble 2, with some improvements.
It has four buttons and an ‘ultra-crisp’ 1.26-inch e-paper MIP screen, similar to those on some of the best Garmin watches; this low-power display allows the Core 2 Duo to maintain an impressive battery life of 30+ days.
The smartwatch, which comes in white or black polycarbonate frames and matching synthetic straps, also features a speaker and microphone array, step tracking, and sleep tracking.
Interestingly, the sleep tracking feature doesn’t use a heart rate monitor (the Core 2 Duo doesn’t have one) but an accelerometer, which analyzes movement during sleep. A barometer and compass round out the hardware features.
“This is my dream watch," Migicovsky told TechRadar in an exclusive interview. "It is similar to the Pebble Time 2, which we announced in 2016 but never shipped, much to the chagrin of many people who emailed or texted me over the last eight years.
“The core reason why we’re making these is that the market is not meeting the needs of people who want exactly this feature set. There are plenty of options in the AMOLED space… but no one’s making something like this.”
Software-wise, both Pebble OS and its accompanying phone app are completely open-source, making the watch eminently hackable. Migicovsky told us there are already 10,000 apps available for the device thanks to the old Pebble OS infrastructure, and 12,000 developers have signed up to potentially create new ones since the initial announcement.
The new hardware allows developers to use the watch in new ways, such as basic ChatGPT integration using the watch’s speaker and microphone.
“The smartwatch is a great form factor for people being able to do a quick ChatGPT query,” says Migicovsky. “I’m just putting it out there so developers know we’ve got a speaker for something… We’re keeping our options open here.”
The Core 2 Duo will cost $149 (around £115 / AU$235) and will ship in July, available exclusively from Pebble's online store.
Core Time 2

(Image credit: Core Devices)

As well as the Core 2 Duo, Migicovsky and his two-person Core Devices company are also debuting the Core Time 2, a premium version with a color screen and heart rate monitor.
Slightly bigger, the Core Time 2 sports a 64-color 1.5-inch e-paper display, comprising 88% more pixels than the Core 2 Duo display. The display is also a touchscreen, primarily because Migicovsky wanted to add complications reminiscent of the best Apple Watches.
“We’re keeping all our buttons, and they will be the primary interface, but I wanted to add a touchscreen – again, mostly to keep our options open – but one key use case is Complications,” says Migicovsky. “Otherwise on Pebble, you have to dig through a menu to get to an app, and at that point, I might as well pull out my phone.
“I love complications on the Apple Watch where you can have a little widget which displays a little information, then you tap on that widget for a larger display.”
The Core Time 2 still packs all the features listed above on the Core 2 Duo, including 30 days of estimated battery life, but has a metal frame instead of the polycarbonate one. The watch will retail for $225 (around £175 / AU$355) and is also shipping in July.
Both watches are available to pre-order now at store.rePebble.com.
Gaming peripheral maker HyperX has revealed two new gaming headsets as part of the HP Amplify Conference 2025.
The most significant is the HyperX Cloud III S, a new version of the popular HyperX Cloud III wireless gaming headset. It features both low latency 2.4GHz and Bluetooth connectivity modes, offering support for both PC and console. It weighs just 0.74lbs / 340g but boasts seemingly incredible battery life.
According to the manufacturer, you're looking at up to 200 hours of playtime on Bluetooth, or 120 hours in the 2.4GHz mode. That's an awful lot of juice per charge and should mean you spend less time charging your headset, and more time gaming.
It is worth bearing in mind, however, that the headset will be quite slow to charge when the time does come to top it up. HyperX states that it can take up to five hours, which could be a source of annoyance when you just want to dive into a match with friends.
Elsewhere, it features a stainless steel and aluminum frame, plus support for 3D-printed magnetic earcup plates. These are sold by HyperX in certain regions and cost roughly $39.99 to $49.99 depending on the design.
Like other models in its family, the headset also has a detachable boom microphone. It has a uni-directional pickup pattern and should be more than sufficient for chatting with friends.
There's no word on the price at the time of writing, but if it's priced right, this could very well have a chance at being one of the best gaming headsets right now.
(Image credit: HyperX)

The other model revealed is the HyperX Cloud Jet Dual wireless gaming headset. This comes in either a sleek blue and white or a more plain black and, again, has both 2.4GHz and Bluetooth connectivity. The frame in this model is plastic, suggesting that it could be a more budget-oriented pick.
That impression is reinforced by the lower battery life, which delivers up to 20 hours in the 2.4GHz mode or 25 hours via Bluetooth.
Google is today launching a new upgrade for Gemini called Canvas that allows you to refine documents and code straight from within its AI chatbot.
Canvas is a 'new interactive space' that is 'designed to make creating, refining, and sharing work easy'. Think of Canvas as a writing tool akin to ChatGPT Canvas or Apple Intelligence Writing Tools but built into Gemini with easy exporting to Google Docs.
Canvas can generate written drafts, change the tone of voice, and suggest edits directly from within Gemini. The tool can also streamline the coding process by quickly 'transforming your coding ideas into working prototypes for web apps, Python scripts, games, simulations and other interactive apps.'
That might not sound like the most exciting AI upgrade for most of us, but it opens up even more possibilities with Gemini, which is only a good thing, and not even a week on from Google's last major AI updates.
Just last week, Google added Search history to Gemini, allowing users to get even more personalized AI responses based on how they've previously used Google Search. Additionally, Deep Research, Gemini's data analysis and reporting tool, was made free alongside Gems, a custom chatbot builder that's perfect for creating specific use cases like an AI counseling bot.
Gemini updates are coming thick and fast, ChatGPT should be worried (Image credit: Google)

Google continues to add huge Gemini upgrades almost weekly, with the AI chatbot quickly overtaking ChatGPT as my favorite AI chatbot. Last week's Deep Research upgrade to 2.0 Flash, which also included free access without a premium plan, is fantastic, and I've used Deep Research multiple times this week without paying a dime. It's an excellent tool for getting in-depth info, perfect for work or for a sports nerd like me who wants to know about the best fantasy football assets.
I don't use AI writing tools so Canvas isn't that appealing to me, but I'm excited by the cadence of Gemini updates and how focused Google is on building the best AI chatbot possible.
Last week's Search history upgrade could make Gemini the best AI tool on the market, and while it hasn't rolled out to me yet, I'm looking forward to seeing how it improves the Google AI experience.
Not only has Google announced Gemini Canvas today, but it's also upgrading Deep Research to add Audio Overview functionality from NotebookLM, allowing users to create podcasts from the research reports.
While Google's Gemini updates might not always grab the headlines, the constant push to improve the AI tool is worth writing home about. Gemini is one of the best AI chatbots on the market, and it just keeps getting better.