Nvidia’s RTX 5080 and 5090 graphics cards sold out almost instantly on launch day yesterday, just as the rumors predicted – and there was a fair bit of chaos and clamor surrounding the release of these first Blackwell GPUs.
At the time of writing, the day after launch, everything remains sold out at the big US and UK retailers I’ve just taken a scout round, at least in terms of standalone graphics cards.
Even the seriously pricey third-party RTX 5080 models at the premium end of the spectrum sold out in the blink of an eye.
Currently, the only option for getting a Blackwell GPU is to buy a full PC with one of these boards inside, where you’re obviously paying a lot of money for a high-end machine with a big markup.
As for the clamor, there were big lines at some retail stores in the US, with folks queuing for their shot at an RTX 5090 days before launch. As VideoCardz reports, there were somewhat chaotic scenes in Japan where, at a shop called PC Koubou, would-be Nvidia GPU buyers ended up scaling the fence of a kindergarten next door (in an effort to get in and purchase a GPU, presumably).
That was one of the stores in Japan where a lottery system was implemented to try and make buying a Blackwell GPU a fair process, but clearly, it went awry here.
All in all, Nvidia stands accused of making the RTX 5000 series a ‘paper launch’ – meaning that only a very small amount of inventory was available on release day.
As VideoCardz points out, theorizing on Reddit – which we should be particularly careful around – suggests that there were only 250 units of the RTX 5090 at Micro Centers in the US, and just 2,400 or so of the RTX 5080. Certainly, the flagship GPU was predicted to be vanishingly thin on the ground anyway, going by the rumors, but the RTX 5080 was expected to achieve somewhat better stock levels than this rough tally from Reddit suggests.
Analysis: Do not feed the scalpers
As mentioned, your only real shot at getting a Blackwell GPU currently is buying a full PC – inevitably a very expensive premium model that’s going to run you a few grand. You could, in theory, then swap out the RTX 5000 GPU for the one you’re upgrading from, and sell the PC second-hand (as nearly new), but that’s potentially a lot of hassle and headaches, so most people won’t consider that option (I certainly wouldn’t).
The other choice – which, again, isn’t really any kind of choice – is to buy an RTX 5090 or 5080 on an auction site from a scalper who has seriously jacked up the price. Don’t do this – do not feed the price gougers, whatever you do, please. Interestingly, on the likes of eBay there are a good many more reasonably priced Blackwell listings made purely to trap bots, which state clearly in the description that what’s for sale is only a photo of the GPU. (As well as listings from those trying to sell their in-place pre-orders, of course.)
Just the usual chaos around the launch of a thin-on-the-ground new generation of GPUs, then. I’d suggest, for now, that you just try to be patient. (Don’t feed the scalpers, did I say that already? Just imagine the collective sweating going on if those listings don’t shift, and they have to keep dropping and dropping prices).
Keep your eyes peeled on our live blogs where TechRadar is still maintaining a watch on all the major retailers – for RTX 5090 graphics cards, and also RTX 5080 GPUs – and we’ll alert you there if any stock comes back in. But for now, the chances of buying an RTX 5090 or 5080 still seem very remote to say the least.
Of course, attention will soon turn to the launch of RTX 5070 models next month, and how stock will shake out there. And after that, the eyes of gamers will be fixed on what AMD’s doing with RDNA 4 in March. Given that some RX 9070 graphics cards are already sitting at retailers, Team Red will hopefully have a much better next-gen launch, stock-wise, than we witnessed with Nvidia yesterday.
The Samsung Galaxy S25 series was highly anticipated, but with the arguable exception of the Samsung Galaxy S25 Ultra, these phones didn’t get very substantial spec upgrades – and it looks like the same could be true of this year’s Galaxy Z models.
According to leaker @PandaFlashPro (via GSMArena), the Samsung Galaxy Z Flip 7 will have 12GB of RAM, and a choice of 256GB or 512GB of storage, while the Samsung Galaxy Z Fold 7 will also have 12GB of RAM, and come in 256GB, 512GB, and 1TB configurations.
Those are exactly the same specs as the Samsung Galaxy Z Flip 6 and Samsung Galaxy Z Fold 6 have, so these rumored new phones don’t sound like much of an upgrade.
"Confirmed"
Galaxy Z Flip 7: 256GB | 512GB | 12GB RAM
Galaxy Z Fold 7: 256GB | 512GB | 1TB | 12GB RAM
Powered by Snapdragon 8 Elite worldwide.
January 30, 2025
Bigger screens and more power, but little else
The Samsung Galaxy Z Fold 7 might at least have a good chipset, with the source adding that it will use the Snapdragon 8 Elite globally, just like the Galaxy S25 series. But the Galaxy Z Flip 7 might not even get that: this source doesn’t mention its chipset, and a previous leak pointed to the less promising Exynos 2500.
Sadly, a recent GalaxyClub report also claimed that the Samsung Galaxy Z Flip 7 will have the same cameras as its predecessor, namely a 50MP main, 12MP ultra-wide, and 10MP front-facing camera.
So, from what we’ve heard so far, it seems like very little will be upgraded on these phones other than their chipsets. And in the case of the Samsung Galaxy Z Flip 7, it might not even be the chipset upgrade we want, as Samsung might opt for a less capable option than the top-tier Snapdragon 8 Elite used in its current high-end phones.
Still, one early leak did suggest that the next Galaxy Z models might at least have bigger screens than their predecessors, and thankfully they might not get a price rise either, going by an earlier claim from @PandaFlashPro. So, it’s not all bad news.
The Samsung Galaxy Z Fold 7 and Samsung Galaxy Z Flip 7 will probably be announced in July, so we should know exactly what they’re capable of then, but stay tuned for further leaks and rumors in the meantime.
New research has revealed that a shocking 85% of bosses monitor staff online activity through software.
The figures from online privacy company ExpressVPN relate to all types of workers, with the research following previous data from the company revealing 78% of remote workers are monitored, too.
Bosses are now widely tracking emails, websites, and keystrokes, and even monitoring screens in real time to keep an eye on their workers’ productivity. However, nearly half of workers are feeling the pressure, and the surveillance could actually be making them less productive.
Employee monitoring software is all too common
Two in five (42%) workers believe that monitoring should be considered unethical, with more than half (51%) willing to quit if they’re being subjected to online monitoring. However, four in five (83%) employers enforce monitoring without the option for employees to opt out.
A clear disconnect has been identified, with 70% of employers believing that monitoring boosts trust, morale and productivity. On the flip side, 46% of workers feel stressed or anxious about it.
Over a third (35%) feel that they lack their employer’s trust, and one in four (26%) feel pressure to do work quickly rather than thoughtfully. Pressure to be active rather than productive and pressure to work longer hours were also identified, raising questions over how effective monitoring tools are at determining productivity, rather than just working hours.
Scheduling emails to send later, logging into communication apps on mobile devices, and setting up automated status changes are just some of the methods workers now use to fool monitoring systems.
Employer observations aren’t just being used to keep tabs on perceived productivity, though – 38% use the data for performance reviews, with 30% of employees facing warnings and 17% facing pay cuts or suspensions as a consequence of misbehavior.
Looking ahead, workers are calling for stronger government regulation to protect their privacy. As the report puts it: “The call for regulation reflects a deep-seated desire for accountability, fairness, and respect in a workplace where privacy can often feel like a luxury.”
Cybercrime is a costly affair, with nearly £11.5 million stolen last Christmas alone, according to the UK's National Cyber Security Centre. That’s £695 on average per victim. With the festive season in full swing, the rush to snag post-Black Friday bargains and buy Christmas gifts online has led to a sharp rise in threat levels. For cybercriminals, this is the perfect opportunity to deploy their latest techniques, targeting unsuspecting shoppers to steal money and personal data.
UK Fraud Minister Lord Hanson issued a stark warning in November on the dangers of holiday scams. However, the sheer volume of online interactions, the sophistication of cyberattacks, and the increasing reliance on digital shopping during the holiday season make it far more challenging to identify a scam at first glance.
AI-Driven Phishing: More Deceptive Than Ever
Phishing has long been one of the most common forms of cybercrime, but the emergence of AI has revolutionized the way these attacks are carried out. Previously, phishing emails were easy to spot, often riddled with spelling mistakes and strange phrasing. However, with AI, cybercriminals can now analyze the communication styles of businesses, studying their marketing emails and messages to replicate the tone, branding, and even the content of legitimate communications.
Attackers can now seamlessly impersonate colleagues, executives, and even customers, making it harder for targets to identify a scam. It has become easier – and cheaper – than ever to undertake these targeted spear-phishing attacks, which are much more likely to succeed.
AI and Human Behavior: Exploiting Vulnerabilities
AI’s ability to analyze human behavior has also made it easier for cybercriminals to exploit psychological triggers. By studying past interactions and identifying patterns in behavior, attackers can craft messages that play on an individual’s emotions. For example, during the busy holiday season, cybercriminals exploit the stress of missed package deliveries. Imagine receiving a seemingly legitimate text from a courier service, urging payment for redelivery. One victim, distracted and eager to resolve the issue, entered card details on a convincing fake site – only to realize later the text came from an unknown mobile number, not the courier. It’s a reminder that vigilance can’t take a holiday.
AI can also be used to time phishing emails or fake social media ads to coincide with busy shopping periods such as Black Friday and Christmas sales. Cybercriminals can also create fake websites offering massive discounts or time-limited offers, hoping to lure in shoppers eager to make a purchase quickly. Under pressure, people are more likely to fall for scams.
In the same way, AI can be used to create fake bank alerts or financial notifications that play on a customer’s fear of fraud or account security issues. These phishing attacks, which often contain urgent warnings or threats, push the recipient into a state of panic, encouraging them to click on a malicious link or provide sensitive details. It can also be very hard to spot when the destination site or notifications look identical to the official source.
In fact, while it may seem simple to check if a website is secure by looking for the HTTPS prefix or a padlock icon, these are no longer foolproof indicators of a secure site. Cybercriminals have become adept at creating fake sites that look identical to trusted brands, making it easy for consumers to be misled.
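One rough, partial defense against such lookalike sites is to compare a link’s hostname against the domain you actually expect. The sketch below is purely illustrative (the domain names are hypothetical examples, and a small edit distance is only one typosquatting signal among many; real protection should come from browser and security tooling):

```python
# Illustrative sketch: flag lookalike domains that differ from a trusted
# domain by only a few characters - a common typosquatting pattern.
from urllib.parse import urlparse


def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def looks_like_typosquat(url: str, trusted_domain: str, threshold: int = 2) -> bool:
    # Extract the hostname from the link and compare it to the trusted domain.
    host = urlparse(url).hostname or ""
    if host == trusted_domain or host.endswith("." + trusted_domain):
        return False  # exact match or a legitimate subdomain
    return 0 < edit_distance(host, trusted_domain) <= threshold


print(looks_like_typosquat("https://paypa1.com/login", "paypal.com"))  # True
print(looks_like_typosquat("https://www.paypal.com", "paypal.com"))    # False
```

A check like this catches only near-miss spellings; it won’t spot homoglyph tricks or entirely unrelated fraudulent domains, which is why manually typing the URL remains the safer habit.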
Deepfake Technology: Social Engineering with a New Face
Alongside phishing, AI is increasingly being used in social engineering attacks, particularly through deepfake technology. Early last year, ARUP lost $25 million to fraudsters after an employee was tricked into believing he was carrying out the orders of his CFO. And everyday people aren’t immune either: a kitchen fitter from Brighton was scammed out of £76,000 because he believed a deepfake advert purporting to feature Martin Lewis, the money-saving expert.
This method is highly effective because it bypasses the traditional safeguards we rely on, such as email filters, multi-factor authentication, or the ‘sniff test’ – our gut sense that something is awry. Deepfakes create a sense of urgency and authority, making it easier to manipulate people into taking actions they would otherwise refuse. And their realism, especially where duplicate social media profiles are concerned, makes such scams harder to detect, even for those with extensive training.
Protecting Against AI-Enhanced Threats
As the sophistication of AI-driven phishing and social engineering attacks grows, it is essential for both businesses and consumers to adopt proactive security measures. For individuals, vigilance is key. Avoid clicking on links in unsolicited and junk emails, texts that claim to come from businesses or government agencies, or even ads seen on social media platforms. Always manually type in the URL of a website, rather than clicking on embedded links, to ensure that you are visiting a legitimate site.
Multi-factor authentication should also be implemented wherever possible, as it adds an additional layer of security beyond traditional login credentials. Password managers can also help users create and store strong, unique passwords for each account, reducing the risk of credential theft. Passkeys, which rely on biometrics and device management, are the next level of protection that is slowly being adopted.
For businesses, investing in advanced threat detection and response systems is essential. These systems can identify and mitigate phishing and social engineering attacks before they cause significant damage. Machine learning algorithms within these systems can detect patterns of malicious activity that traditional security measures might miss. Regular employee training is also crucial, as the human element remains one of the most vulnerable points of attack.
Moreover, businesses should work to ensure that their employees and customers are aware of the risks posed by deepfakes and other forms of AI-driven social engineering. Implementing robust verification processes, such as requiring multiple confirmations for financial transactions, can also help reduce the risk of falling victim to these kinds of scams. Ultimately, staying ahead of evolving AI threats requires collective vigilance and a stronger commitment to safeguarding personal information.
Check out our list of the best business password managers.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro