This is not a drill! The 8BitDo Ultimate 2 - an upgraded version of the gamepad that tops our list of the best Nintendo Switch controllers - is launching very soon.
The 8BitDo Ultimate's successor has some key upgrades over the original model, including a pair of TMR thumbsticks. These achieve the same objective as Hall effect thumbsticks in combating stick drift, but are more energy efficient, which should lead to a slight increase in battery life. That said, the new RGB lighting rings could certainly eat into those gains. Thankfully, as with the previous model, this one also comes with a charging dock.
It also provides what the manufacturer calls '8Speed' technology, which aims to deliver ultra-responsive 2.4GHz wireless connectivity with latency of under 1ms. If true, that'll be a very impressive upgrade.
The Ultimate 2 also adds toggles for its Hall effect triggers, allowing players to swap between instant and non-linear trigger presses. As before, you're getting two remappable rear buttons as well as two new bumper buttons for additional secondary inputs.
8BitDo's proprietary software looks like it's getting an upgrade too. The 8BitDo Ultimate Software V2 will let players adjust RGB patterns and strength, as well as button mapping, stick and gyro aiming sensitivity, trigger press distance and much more.
As for price and availability, the 8BitDo Ultimate 2 can be pre-ordered now for $59.99 / £49.99 at 8BitDo's Amazon store. It appears to be shipping from March 8 in the US, but UK folks will have to wait a bit longer, until April 25. Three colorways are available, too - White, Black and a lovely Purple.
This initial version of the controller is also only compatible with Windows and Android devices. But, with the Nintendo Switch 2 appearing over the horizon, it's reasonable to expect 8BitDo to release a version that's compatible with the Switch family of systems.
As data privacy laws evolve and the demand for transparency grows, privacy offices are increasingly burdened with the rising cost of processing Data Subject Access Requests (DSARs). In fact, a 2024 survey indicated a staggering 246% increase in DSARs over the past two years. And they’re costing companies big time – to the tune of $1.5k per request. For offices that handle these privacy requests manually, the costs compound with every request. What began as a regulatory obligation to grant individuals access to their personal data has ballooned into a costly and resource-draining task for privacy teams.
From labor-intensive manual reviews to the complexity of identifying, retrieving, and securely delivering data, DSARs require significant investments in both technology and personnel. The challenge lies not only in complying with these legal requirements but also in maintaining the balance between operational efficiency and safeguarding the personal data they are entrusted with.
But what actually is a DSAR – and why are they causing such a stir? Let’s dive in.
Why should businesses care about rising DSARs, anyway?
A DSAR is a legal right granted to individuals under data privacy regulations – such as the GDPR in the EU or CCPA in California – that allows them to request access to their personal data held by an organization. Essentially, it’s a way for people to understand what data is being collected about them, how it’s being used, and to ensure their privacy rights are respected.
When someone submits a DSAR, an organization must provide a comprehensive report on all the data they hold on that individual. This could include everything from personal details to browsing history, transaction records, or even interactions with customer service.
For privacy teams (especially those that process these requests manually), DSARs can become complex and resource-intensive. The challenge is not just identifying and retrieving the right data, but also ensuring it’s done securely, within the required timeframes, and in compliance with the law – which becomes more and more challenging as new regulations appear across the globe.
In some jurisdictions with few legacy protections, like Chile, new laws are being created to provide additional individual rights. Meanwhile, the United States continues to multiply the number of data subjects with DSAR rights and to add to the list of available rights. Still other authorities have increased enforcement of existing laws, including on topics related to DSAR handling.
Public awareness is also a driving force behind this trend. With data breaches on the rise (up 78% in 2023 alone), consumers are more informed about the risks their personal data faces. Increasing media attention, stricter breach notification laws, and high-profile enforcement actions are making consumers more cautious and proactive.
Compliance isn’t just ethical, it’s economical
Meeting DSAR requirements can set your business apart by reinforcing your reputation as an ethical, customer-centric organization. Customers are more likely to trust companies that take their privacy seriously. Being proactive in addressing DSARs and offering users easy access to their data builds credibility and strengthens brand loyalty. What’s more, businesses that excel in DSAR compliance not only minimize the risk of fines and legal penalties, but they also foster a culture of transparency that can lead to higher customer satisfaction and retention rates.
To turn DSAR compliance into a strategic advantage, here are three actionable tips businesses can implement to safeguard customer data and stay ahead of the competition:
Adopt Data Minimization and Secure Storage Practices: One of the best ways to reduce the burden of responding to DSARs is to minimize the amount of personal data collected in the first place. By adopting data minimization principles (that is, only collecting the data that’s necessary and for the minimum amount of time) businesses limit the scope of DSARs and reduce the risks associated with data breaches. Additionally, secure storage practices, such as encrypting sensitive data and using access controls, can help prevent unauthorized access while DSARs are being processed.
Create Clear, User-Friendly DSAR Processes: Make it as easy as possible for customers to submit DSARs by offering easy-to-read instructions and multiple channels for requests. Whether it’s through an online portal, customer service team, or dedicated privacy email address, ensuring that the process is simple and transparent encourages individuals to take advantage of their rights. Timely and clear responses, coupled with transparency about how their data is being used, can further cement your organization as a trusted entity in the eyes of your customers.
Implement Automated Data Mapping and Retrieval Systems: Manually processing DSARs can be inefficient, error prone, and difficult to scale. By investing in automated tools that help map out where personal data resides within an organization, businesses can dramatically speed up the process of retrieving that data when a request is made. Not only does this streamline compliance, but it also helps ensure that the data you provide is complete and accurate — critical for building trust.
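To make the data-mapping tip concrete, here is a minimal, hypothetical sketch of the idea: a central registry of the systems that hold personal data, each paired with a retrieval function, so a single request fans out to every store. All system names and fields below are invented for illustration, not tied to any real product.

```python
# Hypothetical data map for DSAR fulfilment: each registered system knows
# how to look up the personal data it holds for a given subject.
from typing import Callable

# Registry mapping a system name to its lookup function.
DATA_MAP: dict[str, Callable[[str], dict]] = {}

def register(system: str):
    """Decorator that adds a retrieval function to the data map."""
    def wrap(fn):
        DATA_MAP[system] = fn
        return fn
    return wrap

@register("crm")
def crm_lookup(email: str) -> dict:
    # Stand-in for a real CRM query.
    return {"email": email, "segment": "newsletter"}

@register("support")
def support_lookup(email: str) -> dict:
    # Stand-in for a real ticketing-system query.
    return {"email": email, "tickets": 2}

def fulfil_dsar(email: str) -> dict:
    """Assemble one report covering every registered system."""
    return {system: lookup(email) for system, lookup in DATA_MAP.items()}

report = fulfil_dsar("jane@example.com")
```

The point of the registry pattern is that adding a new data store is one decorated function, so the DSAR report stays complete as the organization's systems grow.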
By embracing DSAR compliance not just as a regulatory requirement but as a business opportunity, companies can position themselves as leaders in privacy and data ethics. Because the reality is: the organizations that are ethical, responsible and accountable for their customers' personal information are the organizations who are likely to differentiate their brand from the competition.
We've compiled a list of the best data loss prevention services.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Amazon is set to host its first Amazon Devices event since 2023, scheduled for 10am EST / 3pm GMT / 2am ACT on Wednesday, February 26, with many expecting this to be the launching pad for a new, AI-enhanced Alexa, alongside a handful of rumored Echo and Fire TV devices.
After announcing Alexa 2.0 alongside an array of hardware launches back in 2023, all has been relatively quiet from Amazon regarding the future of the LLM (large language model)-boosted smart assistant. Elsewhere, however, rumors have spelled a challenging road to release for Alexa 2.0, including recent concerns that there may be further delays even with its unveiling on the horizon.
It could end up being a divisive event, however, and not because of AI-related concerns; Amazon could be about to fill out its "trends of the 2020s" bingo card by also announcing a subscription plan for the newly smartened Alexa.
We're on the ground in New York City attending the event, which is not publicly available to live stream, and we'll be sharing all the news live as it happens. Stay tuned!
The latest news
Welcome to our live blog coverage of Amazon’s Devices and Services event! Our very own Lance Ulanoff and Jacob Krol are on the ground in New York City to attend the event in person, and I’ll be covering all the news as we learn more from Amazon about its 2025 products.
Stay tuned, because this could be a big moment for Amazon!
Instead of a Devices and Services event in 2024, Amazon opted to trickle various product releases throughout the year, including a surprise launch for the all-new Echo Show 21, as well as second generations of the Echo Spot and Echo Show 15.
We’ve reviewed them all - check them out!
With rumors flying about Alexa 2.0’s subscription fee, we’ll hopefully be learning more later today on what that means for the original Alexa. Logic would dictate that Amazon wouldn’t fully ditch it and would leave it as the ‘basic’ option in all Echo devices, right? Right?
Anyway, for a refresher on what Alexa can do, check out our list of the Best Alexa Skills and commands.
At release especially, Alexa was a real game-changer, but with the passage of time comes new technology and new demands on aging software. That’s certainly the case with Alexa; the voice assistant has seen many quality-of-life updates and new features, but it’s starting to show its age.
There’s plenty that could be improved, but I wrote yesterday about five specific features that would make Alexa 2.0 genuinely worthwhile. Check it out!
February was a busy month for Disney+, but it seems like March is going to be even more eventful for one of the world's best streaming services.
Indeed, from the arrival of Daredevil's standalone Marvel Cinematic Universe (MCU) TV series – it's about time! – in Daredevil: Born Again, to the release of Stranger Things star Sadie Sink's dystopian punk-rock opera film O'Dessa and more besides, you won't struggle to find something worth streaming between March 1 and 31. So, without further ado, here's everything that's coming to Disney+ in the weeks ahead.
March 1
For more Disney-based coverage, read our guides on the best Disney+ shows, best Disney+ movies, how to watch the Marvel movies in order, and Marvel Phase 5.
One in four UK businesses lack a documented strategy to address generative AI (GenAI) threats, according to research from Ivanti. Let that sink in for a moment. Would we accept the same casual approach to, say, workplace health and safety? Likely not. Yet here we are, watching a technological revolution unfold while many organizations take a dangerously passive stance toward securing it.
The speed of GenAI's evolution has caught many security teams flat-footed. While 47% of security professionals in the UK view GenAI as a net positive for cybersecurity — and they're right to see its potential — this optimism sometimes masks a troubling lack of preparation.
Consider this eyebrow-raising reality check: Nearly half of UK IT and security professionals (49%) believe phishing will become a greater threat due to GenAI. And I’d argue they’re right to be concerned. The problem is that their concern isn’t translating into action. A quarter of organizations haven't documented any strategy to address these risks. We're seeing unprecedented technological advancement coupled with unprecedented organizational inertia. It's not great.
The data silo trap
The challenge goes deeper than just keeping pace with GenAI's evolution. A remarkable 72% of organizations report that their IT and security data are siloed across systems. These fragments of critical security information might as well be locked in separate vaults. And 63% say these silos actively slow their security response times.
Think about that. In an era where AI-powered threats can evolve and spread at machine speed, many security teams are still piecing together threat data from disparate systems like a jigsaw puzzle. That's not just inefficient — it's downright dangerous.
The training paradox
Most security teams recognize that human error is still a prime vulnerability. That's why 57% have turned to anti-phishing training as their first line of defense against sophisticated social-engineering attacks. It's currently the most popular protective measure against AI-driven threats.
I’m the first to assert that anti-phishing training is critical, particularly given how often well-meaning employees unintentionally create pathways for exploitation by falling for increasingly sophisticated phishing schemes.
But strong employee training is far from sufficient. It means using yesterday’s tools to fight today’s threats. Emphasizing best practices to combat AI threats is sort of like wearing a personal flotation device to stay safe while lounging in shark-infested waters. Should you wear the personal flotation device? Certainly. But it won’t save you from the real threat.
The good news is that cybersecurity professionals are aware of the gaps left by traditional anti-phishing defenses. Only 32% believe this training is "very effective" against AI-powered social engineering attacks. However, and I risk sounding like a broken record here, the concern and awareness aren’t translating into action.
Beyond traditional defenses
As GenAI capabilities expand, they create new attack surfaces faster than traditional security measures can adapt. As I’ve argued, the old playbook of reactive security measures and siloed defenses simply won't cut it anymore. What will cut it? In short, a holistic approach to exposure management that addresses both immediate threats and systemic vulnerabilities.
What does this mean in practice? Security teams need to rethink their approach altogether, and that means addressing key elements such as the following:
Continuous monitoring and assessment
Traditional periodic security assessments can't keep pace with AI-driven threats. Organizations need real-time visibility across their entire attack surface, from traditional assets to new AI tools. This means moving beyond scheduled vulnerability scans to implement continuous monitoring that can detect and respond to threats as they emerge.
Breaking down data silos
Those fragmented security and IT data stores? They're not just an inconvenience—they're a liability. With 63% of organizations reporting slower security responses due to siloed data, the need for unified visibility isn't just a nice-to-have—it's a critical security requirement when facing sophisticated AI-powered threats that can exploit gaps between systems.
Evolving beyond basic training
Remember — security awareness training is important, but it can't be your only defense. We need to augment human awareness with sophisticated detection and response capabilities. Fight fire with fire.
Data-driven security responses
When facing AI-powered threats, gut instinct and experience aren't enough. Security teams need comprehensive data visibility to spot patterns and anomalies that signal emerging threats. This means breaking down those data silos that 72% of organizations currently struggle with and implementing systems that can provide unified threat visibility.
What are you waiting for?
GenAI isn't just another technology trend to monitor — it's actively reshaping the threat landscape. While 47% of security professionals view GenAI positively, this optimism must be matched with concrete action.
Organizations can't afford to take a wait-and-see approach to GenAI security. The technology's rapid evolution, combined with existing challenges like data silos and training limitations, necessitates an intentional, comprehensive, layered and proactive stance.
Those who delay implementing comprehensive security strategies are already falling behind, and since GenAI continues to shapeshift and grow in sophistication by the day, falling even a little bit behind makes it prohibitively difficult to catch up.
The time for documented strategies, unified security visibility and enhanced threat detection isn't coming — it's here. It’s time to stop wondering whether your organization will need to adapt to AI-driven security challenges, and start focusing on how quickly and effectively you can do it.
A final plea: don’t wait until after you face a serious breach. In this case, “wait and see” translates to “wait and pay the price.”
We've compiled a list of the best firewall software.
Version 2.00 has arrived for the popular fighting game Granblue Fantasy Versus: Rising, and it might just be the largest content update the game has seen since its late 2023 launch.
Headlining the Version 2.00 update is the addition of a new playable character: Sandalphon. He's easily one of the Granblue Fantasy mobile game's most popular characters, so it's great to see him finally arrive in Rising.
Sandalphon is the first of five characters to be added as part of Character Season Pass 2, which players can purchase now. He's set to be followed by Galleon (Spring 2025), Wilnas (Summer 2025), Meg (Fall 2025), and Ilsa (Early 2026).
Battle Pass Round 7 has also been added, offering a huge amount of free and premium rewards including new character colors, titles, music, and an all-new 'Arbitrator of the Shore' skin for playable character Zooey. Zooey's also my main in the game, so this is a pass I'll definitely be grabbing (as well as hoping for some much-needed buffs for the character).
Beyond the new character, Version 2.00 brings plenty of new content to Granblue Fantasy Versus: Rising. There's a new Survival mode, which tasks players with progressing through as much of a 100-fight gauntlet as possible, earning rewards along the way. Hopefully, it'll be a chunky offline timesink, especially as buffs unique to Survival mode should give it some nifty roguelite elements.
A new online training mode feature has also been added, letting you hop online to practice combos or fundamentals with a friend or coaching buddy. This is a feature that's quickly becoming standard in many of the best fighting games, so I'm happy to see it finally arrive in Rising.
Of course, it wouldn't be a major new version without significant system and character balance adjustments, and developers Cygames and Arc System Works have managed just that. Full balance changes are available to view now over at the game's official website. Key takeaways here include the rebalancing of powerful universal skills such as Brave Counters as well as invincible and counter skills.
Lastly, both the Standard and Deluxe versions of Granblue Fantasy Versus: Rising have received their biggest discount yet at 61% off for a limited time. It's a fantastic time to start playing the game yourself, or as a means of picking up Character Pass Season 1 at a significantly reduced price. Not a bad way to save cash on the game if you've recently splashed out on one of the best fight sticks.
Artificial intelligence and large language models are trained on vast troves of online information, including songs, articles, comments, books, drawings, pictures, and more - so if you’ve ever commented on an Instagram post, posted a photo to Twitter, or uploaded a video to YouTube, the likelihood is your work has been used to train a model at some point or another.
These models don’t ask for permission, nor do they notify creators - and the companies behind them make millions from the content. OpenAI reportedly used over a million hours of YouTube video data to train GPT-4, and Meta uses public posts from Instagram and Facebook to train its AI models - but British creatives are coming together to fight back.
Artists, singers, authors, journalists, scriptwriters and more - who collectively generate over £120 billion per year for the nation's economy - have come together to urge the UK government to apply British copyright law to AI companies, and to ensure ‘content theft’ is not legitimised by leaving this issue unchecked.
Make It Fair
The ‘Make it Fair’ campaign comes at the end of the British government’s AI and copyright consultation period, in which it is reviewing ways to boost trust and transparency between sectors, and “ensuring AI developers have access to high-quality material to train leading AI models in the UK and support innovation across the UK AI sector”.
Owen Meredith, the CEO of the News Media Association, which launched the campaign, said the UK's “gold-standard” copyright laws have underpinned growth and job creation in the British economy, and that without the content creatives produce, AI innovation would not exist.
“And for a healthy democratic society, copyright is fundamental to publishers’ ability to invest in trusted quality journalism,” Meredith said.
“The only thing which needs affirming is that these laws also apply to AI, and transparency requirements should be introduced to allow creators to understand when their content is being used. Instead, the government proposes to weaken the law and essentially make it legal to steal content.”
AI is at the forefront of productivity discussions in the UK right now, as the PM released plans to ‘turbocharge AI’ into the public sector, including the idea to ‘unlock’ public data by handing it over to ‘researchers and innovators’ to train AI models.
Comino made headlines with the launch of Grando, its water-cooled AMD-based workstation with eight Nvidia RTX 5090 GPUs. During the extensive email exchange I had with its CTO/co-founder and commercial director, I found out Grando is far more versatile than I’d come to expect.
Dig into its configurator and you'll notice that you can configure the system with up to eight RX 7900 XTX GPUs because, well, why not?
“Yes, we can pack 8x 7900XTX, with an increased lead time though. In fact, we can pack any 8 GPUs + EPYC in a single system”, Alexey Chistov, the CTO of Comino, told me when I queried further.
Indeed, while it doesn’t currently offer Intel’s promising Arc GPU, it will if the market demands such solutions.
“We can design a waterblock for any GPU, it takes around a month” Chistov highlighted, “But we don't go for all possible GPUs, we choose specific models and brands. We only go for high-end GPUs to justify the extra price for liquid-cooling, because if it could properly work air-cooled - why bother? We try to stick with 1 or 2 different models per generation not to have multiple SKUs (stock keeping units) of waterblocks. You can have an RTX 4090, H200, L40S or any other GPU that we have a waterblock for in a single system if your workflow will benefit from such a combination.”
The Rimac of HPC
So how can Comino achieve such flexibility? The company pitches itself as an engineering company, with its slogan proudly saying "Engineered, not just assembled". Think of Comino as the Rimac of HPC: obscenely powerful, nimble, agile and expensive. Like Rimac, it focuses on the apex of its line of business and absolute performance.
Its flagship product, Grando, is liquid-cooled and was designed to accommodate up to eight GPUs from the outset, which means it will very likely be futureproof for multiple Nvidia generations; more on that in a bit.
One of their main targets, Chistov told me, “is to always fit a single PCI slot, that's how we can populate all the PCIe slots on the motherboard and fit eight GPUs in a GRANDO Server. The chassis is also designed by the Comino team so everything works as one”. That’s how a triple-slot GPU like the RTX 5090 can be modified to fit into a single slot.
With that in mind, it is preparing a “solution capable of operating on a coolant temperature of 50C without throttling, so if you drop the coolant temperature to 20C and set the coolant flow to 3-4 l/m the waterblock can remove around 1800W of heat from the 5090 chip with the chip temperature around 80-90C”.
That’s right, one single Comino GPU waterblock could remove 1800W of heat from a single "hypothetical 5090" that could generate that amount of heat IF the coolant temperature on the inlet is around 20 degrees Celsius AND if the coolant flow is not less than 3-4 liters per minute.
Packing eight such "hypothetical GPUs" plus other components could lead to a total system power draw of 15 kW, and if such a system at full load maintained a constant coolant temperature of 20C AND a coolant flow per waterblock of not less than 3-4 liters per minute, it would indeed operate "normally".
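As a rough sanity check on those figures (my own back-of-the-envelope arithmetic, not Comino's, assuming a water-like coolant and the midpoint of the quoted flow range), the coolant temperature rise implied by Q = ṁ·c·ΔT turns out to be modest:

```python
# How much does the coolant warm up while carrying 1800W away from one GPU?
# Q = m_dot * c_p * dT  =>  dT = Q / (m_dot * c_p)

HEAT_W = 1800.0          # quoted heat load from one "hypothetical 5090", watts
FLOW_L_PER_MIN = 3.5     # midpoint of the quoted 3-4 l/min per waterblock
C_P = 4186.0             # specific heat of water, J/(kg*K)
DENSITY = 1.0            # kg per litre (water-like coolant assumed)

mass_flow = FLOW_L_PER_MIN * DENSITY / 60.0   # kg/s
delta_t = HEAT_W / (mass_flow * C_P)          # kelvin

print(f"coolant temperature rise ≈ {delta_t:.1f} °C")
```

A rise of roughly 7°C means coolant entering at 20C leaves the block at about 27C, which is consistent with the company's claim that the chip itself can be held to 80-90C under that load.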
Who will need that sort of performance?
So what sort of user splashes out on multi-GPU systems? Chistov, again: “There is no benefit to adding an additional 5090 if you are a gamer; this won't affect performance, because games can't utilize multiple GPUs like they used to with SLI, or even DirectX at some point in time. There are several applications we are focused on for multi-GPU systems:
Specifically for the RTX 5090, the most important improvement for AI workloads is the 33% increase in memory capacity (from 24GB to 32GB), which means Nvidia’s new flagship is better suited for inference, as you can fit a far bigger AI model in memory. Then there’s the far higher memory bandwidth, which helps as well.
In his review of the RTX 5090, TechRadar’s John Loeffler calls it the supercar of graphics cards, and asks whether it was simply too powerful, suggesting that it is an absolute glutton for wattage.
“It's overkill”, he quips, “especially if you only want it for gaming, since monitors that can truly handle the frames this GPU can put out are likely years away.”
Anthropic just released a new model called Claude 3.7 Sonnet, and while I'm always interested in the latest AI capabilities, it was the new "extended" mode that really drew my eye. It reminded me of how OpenAI first debuted its o1 model for ChatGPT, which offered a way of accessing o1 without leaving a window using the ChatGPT 4o model: you could type "/reason," and the AI chatbot would use o1 instead. That shortcut is superfluous now, though it still works in the app. Regardless, the deeper, more structured reasoning promised by both made me want to see how they would do against one another.
Claude 3.7’s Extended mode is designed to be a hybrid reasoning tool, giving users the option to toggle between quick, conversational responses and in-depth, step-by-step problem-solving. It takes time to analyze your prompt before delivering its answer. That makes it great for math, coding, and logic. You can even fine-tune the balance between speed and depth, giving it a time limit to think about its response. Anthropic positions this as a way to make AI more useful for real-world applications that require layered, methodical problem-solving, as opposed to just surface-level responses.
Accessing Claude 3.7 requires a subscription to Claude Pro, so I decided to use the demonstration in the video below as my test instead. To challenge the Extended thinking mode, Anthropic asked the AI to analyze and explain the popular, vintage probability puzzle known as the Monty Hall Problem. It’s a deceptively tricky question that stumps a lot of people, even those who consider themselves good at math.
The setup is simple: you're on a game show and asked to pick one of three doors. Behind one is a car; behind the others, goats. On a whim, Anthropic decided to go with crabs instead of goats, but the principle is the same. After you make your choice, the host, who knows what’s behind each door, opens one of the remaining two to reveal a goat (or crab). Now you have a choice: stick with your original pick or switch to the last unopened door. Most people assume it doesn’t matter, but counterintuitively, switching actually gives you a 2/3 chance of winning, while sticking with your first choice leaves you with just a 1/3 probability.
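If you want to verify that counterintuitive split yourself, a short Monte Carlo simulation (my own illustration, separate from anything either chatbot produced) confirms it in a few lines:

```python
# Monte Carlo check of the Monty Hall result: switching wins ~2/3 of the time.
import random

def play(switch: bool, rng: random.Random) -> bool:
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither your pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Move to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
trials = 100_000
switch_wins = sum(play(True, rng) for _ in range(trials)) / trials
stay_wins = sum(play(False, rng) for _ in range(trials)) / trials
print(f"switch: {switch_wins:.3f}, stay: {stay_wins:.3f}")
```

Over 100,000 trials the switch strategy converges to about 0.667 and staying to about 0.333, which is exactly the split both chatbots were asked to explain.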
Crabby Choices
With Extended Thinking enabled, Claude 3.7 took a measured, almost academic approach to explaining the problem. Instead of just stating the correct answer, it carefully laid out the underlying logic in multiple steps, emphasizing why the probabilities shift after the host reveals a crab. It didn’t just explain in dry math terms, either. Claude ran through hypothetical scenarios, demonstrating how the probabilities played out over repeated trials, making it much easier to grasp why switching is always the better move. The response wasn’t rushed; it felt like having a professor walk me through it in a slow, deliberate manner, ensuring I truly understood why the common intuition was wrong.
ChatGPT o1 offered just as much of a breakdown, and explained the issue well. In fact, it explained it in multiple forms and styles. Along with the basic probability, it also went through game theory, narrative views, the psychological experience, and even an economic breakdown. If anything, it was a little overwhelming.
Gameplay
That's not all Claude's Extended thinking could do, though. As you can see in the video, Claude was even able to turn the Monty Hall Problem into a game you could play right in the window. Attempting the same prompt with ChatGPT o1 didn't yield quite the same result. Instead, ChatGPT wrote an HTML script for a simulation of the problem that I could save and open in my browser. It worked, as you can see below, but took a few extra steps.
While there are almost certainly small differences in quality depending on what kind of code or math you're working on, both Claude's Extended thinking and ChatGPT's o1 model offer solid, analytical approaches to logical problems. I can see the advantage of adjusting the time and depth of reasoning that Claude offers. That said, unless you're really in a hurry or demand an unusually heavy bit of analysis, ChatGPT doesn't take up too much time and produces quite a lot of content from its pondering.
The ability to render the problem as a simulation within the chat is much more notable. It makes Claude feel more flexible and powerful, even if the actual simulation likely uses very similar code to the HTML written by ChatGPT.
You might also likeLuma Labs has added a score to the AI videos produced on its Dream Machine platform. The new feature brings audio to your video, custom-generated to match a written prompt or created by the AI, and is based solely on what's happening in the video. That could mean chirping birds at the sunrise scene, a spaceship’s distant hum for your sci-fi animation, the chaotic clatter of a busy café, or anything else you care to hear.
The new feature is free in beta for all users. After generating a video with Dream Machine, you’ll see a new “Audio” button along the row at the bottom of the video next to the existing "Extend" and "Enhance" buttons. Click it, and you get two choices: let the AI decide the best fitting sounds on its own, or take the wheel and provide a text prompt describing exactly what you want. Maybe you’ve got a dreamy nature scene and want to hear a distant waterfall, or maybe you want to hear how the AI does it; either way, it works.
Sound Idea

This update is big because AI-generated videos, while sometimes visually stunning, have always felt incomplete without sound. It's a lot of work to painstakingly add audio yourself. Even some of the biggest names in AI video don't have audio as an option yet, including OpenAI's Sora.
Of course, AI sound generation on its own isn't unique. There are plenty of AI music makers, and even full voice and song generators. But producing the audio within the same platform, tied directly to the video you've just generated, makes Dream Machine a real standout. That said, it isn't perfect. You can tell from the way the motion and sound don't quite match with this dog as it swims.
On the other hand, when prompted correctly, this crackling fire and laughter of people around it sounds pretty good.
But, I wouldn't rely on Dream Machine to create sound on its own without any guidance in a prompt. With a blank audio prompt, the AI took the same short clip of people around a fire and came up with something a lot spookier.
SanDisk has launched an 8TB version of its popular E61 portable SSD, expanding its offerings for users who require extensive storage capacity.
The new model aims to cater to professionals such as video editors, photographers, and data analysts, who often handle large files and need reliable storage solutions.
However, despite the excitement around the 8TB capacity, concerns linger about the reliability of SanDisk's SSDs due to a major data corruption issue that surfaced in 2023.
Bigger - but also better?

The 8TB SanDisk E61 has a compact, lightweight form factor measuring 100.8 x 52.55 x 9.6mm, making it highly portable. A silicone shell offers protection from drops of up to 3 meters, and an IP45 rating provides resistance against dust and water, making it well suited to outdoor use or travel.
Its USB-C 3.2 Gen 2 interface boasts read speeds of up to 1500MB/s and write speeds of 1000MB/s, ensuring fast data transfers for large files. Furthermore, it has a plug-and-play feature which means users can start using the SSD immediately without the need for additional drivers or software. It also includes 256-bit AES hardware encryption, ensuring data security for sensitive information, whether for personal or professional use.
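For a sense of scale at this capacity, here's a rough back-of-envelope calculation (illustrative only, and assuming the drive sustains its rated sequential write speed, which real-world transfers rarely do): filling all 8TB at 1000MB/s would take a little over two hours.

```python
# Best-case time to fill the full 8TB at the rated sequential
# write speed of 1000MB/s (decimal units, as drive makers use).
CAPACITY_MB = 8 * 1_000_000  # 8TB expressed in MB
WRITE_MB_S = 1000            # rated sequential write speed

seconds = CAPACITY_MB / WRITE_MB_S   # 8,000 seconds
print(f"{seconds / 3600:.1f} hours")  # prints "2.2 hours"
```

Sustained real-world speeds over USB-C will typically be lower, so treat this as a floor rather than an estimate.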
While the new 8TB SanDisk E61 offers a compelling set of features, concerns remain about the product’s reliability. In 2023, SanDisk’s portable SSDs, including the SanDisk Extreme and Extreme Pro models, were plagued by a major firmware issue that caused widespread data corruption and drive failures. Users reported losing access to critical data, with the drives suddenly becoming unreadable. A class-action lawsuit was filed, accusing Western Digital (SanDisk's parent company) of failing to address the issue adequately.
In response, Western Digital issued a firmware update to mitigate the problem, but the lawsuit claimed that the core issue remained unresolved. As a result, many users, particularly professionals handling large volumes of important data, continue to worry about the long-term reliability of SanDisk SSDs.
Nevertheless, with this new 8TB version, SanDisk has an opportunity to restore customer confidence by ensuring that the previous data corruption issues have been fully addressed. The high price point of around $714 makes it a premium product, especially for professionals who depend on safe, secure, and reliable data storage.
OpenAI has just announced, via X, that it is starting to roll out a "preview" version of Advanced Voice mode for ChatGPT free users while also rolling out its Deep Research agent to all Plus, Team, Edu, and Enterprise users.
Advanced Voice Mode, which is currently only available to ChatGPT Plus users, launched initially in the mobile app versions of ChatGPT and arrived in the desktop app version of ChatGPT in November last year. It is one of the nicest features of ChatGPT; it’s a way to communicate with the chatbot using your voice in a free-flowing, natural conversation. It’s almost like talking to a real person, and you have the ability to interrupt the chatbot if you find its reply is going on too long. There are a variety of different voices to choose from too, so you can customize the experience.
OpenAI has previously experimented with offering 10 minutes of Advanced Voice Mode a month to ChatGPT free users, but the new rollout is going to “give all ChatGPT free users a chance to preview it daily across platforms." The company is also being a bit secretive about what the daily limit is for Advanced Voice Mode for free users, as it clearly wants to retain the ability to adjust it depending on demand. The only detail on usage it offers is that ChatGPT Plus users will get “5x the free limit."
"Starting today, we're rolling out a version of Advanced Voice powered by GPT-4o mini to give all ChatGPT free users a chance to preview it daily across platforms. The natural conversation pace and tone are similar to the GPT-4o version while being more cost effective to serve." — OpenAI on X, February 25, 2025
GPT-4o mini-powered

The ChatGPT free version of Advanced Voice Mode will be powered by GPT-4o mini, while Plus users will continue to have access to Advanced Voice Mode powered by GPT-4o. In its statement, OpenAI said: "Starting today, we're rolling out a version of Advanced Voice powered by GPT-4o mini to give all ChatGPT free users a chance to preview it daily across platforms. Plus users will continue to have access to Advanced Voice powered by 4o with the existing daily rate limit, which is more than 5x the free limit, as well as access to video and screensharing in Advanced Voice."
Reacting to the news, some X users expressed concern that the GPT-4o mini model might be "dumbed down," and frustration that the daily limit remained in place for ChatGPT Plus subscribers. "We're paying for the best, not a crippled version. Get it together," said X user Emanuele Dagostino.
Gemini Live, Google's voice mode chatbot, is entirely free for Android users.
Advanced Voice Mode in the ChatGPT Mac app. (Image credit: OpenAI)

Deep Research

At the same time, OpenAI is rolling out its Deep Research agent tool to all its paid subscribers, rather than just its Pro subscribers. Built using the o3 model, Deep Research is a tool for carrying out in-depth research using the internet that drastically reduces the time taken by researchers.
The o3 model is optimized for data analysis and can handle text, images, and PDF files that it can access via the web.
Deep Research can work independently. You simply give it a prompt, and it goes off and analyzes and synthesizes hundreds of online sources for you, reducing a job that would take human researchers many hours to a few minutes.