If you needed more rumor spillage around AMD’s Ryzen 7 9800X3D, well, dig in, because now we have a full high-res product shot of the box for the chip.
The leaked pic comes from VideoCardz, which regularly picks up GPU and CPU-related rumors, and the photo of the packaging clearly shows it’s a Ryzen 7 CPU.
As a quick refresher, the rumors around AMD’s next-gen 3D V-Cache processors have insisted for a long time now that we’ll only get the Ryzen 7 9800X3D to kick off, even though Team Red has another pair of Ryzen 9 CPUs as part of this first clutch of Zen 5 X3D releases.
They are the Ryzen 9 9900X3D and Ryzen 9 9950X3D, and while typically you might expect the latter flagship to come out first, this time around, AMD is apparently pushing out the mainstream Ryzen 7 workhorse as the initial launch. (It was the other way around with Ryzen 7000X3D: the Ryzen 9 processors arrived first, followed by the Ryzen 7 7800X3D later.)
According to VideoCardz, the only packaging shot AMD is circulating is this pic for the Ryzen 7 9800X3D, so the conclusion (add salt, naturally) is that this really is the only CPU due to be launched early in November by Team Red.
AMD recently confirmed that next-gen Ryzen X3D is arriving on November 7, so we can theoretically expect the Ryzen 9800X3D to go on sale then – in just two weeks’ time. We may see a reveal of the processor as soon as tomorrow, October 25, if other rumors are right.
(Image credit: VideoCardz / AMD)
Analysis: Boxing clever against Intel
It’s still possible that AMD could have a triple X3D launch up its sleeve, perhaps – or maybe something entirely different, even – but the odds seem to have very much settled on the initial offering being the Ryzen 9800X3D. Indeed, this isn’t the first time the Ryzen 7 box has been leaked, but this time we’ve got a very clear shot of it – complete with the assertion that there aren’t any Ryzen 9 box photos floating around.
AMD’s timing is notable if it is indeed set to reveal the Ryzen 9800X3D tomorrow, as is the word from the grapevine, mainly because Intel’s Core Ultra 200S (Arrow Lake desktop) processors go on sale later today.
Team Red looks to be doing a traditional bit of taking the wind out of its rival’s sails, and from that perspective, it also makes sense that the Ryzen 9800X3D should be first off the 3D V-Cache diving board. This is the CPU that most PC gamers are interested in, and the one that’ll make the biggest splash – and indeed the biggest dent in Intel’s hopes. The Ryzen 9800X3D is an excellent candidate to become the new best CPU for gaming – a title currently held by the 7800X3D, we should note – certainly based on some of the spec and performance leaks we’ve seen so far, anyway.
Not all of those leaks are positive, admittedly, but VideoCardz also highlights a post on X from leaker Hoang Anh Phu which claims that AMD is calling 9000X3D the ‘2nd-generation’ of 3D V-Cache. To clarify, it’s actually the third series of X3D chips, but this suggests the architectural changes are enough to qualify as a whole new generation compared to 7000X3D – and that we can anticipate major reworkings and presumably impressive gains as a result.
Nvidia CEO Jensen Huang has confirmed that a design flaw in its top-end Blackwell AI chips, which had affected production, was an entirely internal problem that has now been fixed.
"We had a design flaw in Blackwell," Huang said at an event in Copenhagen, Reuters reported. "It was functional, but the design flaw caused the yield to be low. It was 100% Nvidia's fault."
First identified in August 2024, the delay to Blackwell B100/B200 processors had raised eyebrows around the world, but Huang was keen to reassure that the issue was entirely of Nvidia's own making.
Blackwell delays
Blackwell chips have been in high demand since Nvidia unveiled the platform earlier in 2024, with Huang describing it as "the world's most powerful chip," offering previously unheard-of levels of AI computing power.
Set to begin shipping in the latter part of 2024, Blackwell binds together two GPU dies, connected by a 10TB/second chip-to-chip link, into a single, unified GPU. This uses TSMC's CoWoS-L packaging technology, which relies on an RDL interposer equipped with local silicon interconnect (LSI) bridges that must be precisely placed to allow fast data transfer - a misalignment of these bridges is what caused the issue.
Initial media reports had claimed the issue had caused friction with manufacturing partner TSMC, but Huang dismissed the claims as "fake news".
"In order to make a Blackwell computer work, seven different types of chips were designed from scratch and had to be ramped into production at the same time," he said.
"What TSMC did, was to help us recover from that yield difficulty and resume the manufacturing of Blackwell at an incredible pace."
Blackwell is set to be up to 30x faster than its Grace Hopper predecessor when it comes to AI inference tasks, whilst also reducing cost and energy consumption by up to 25x.
Apple could have a busy start to 2025, as reportedly three major new products will land next spring (meaning sometime between late March and late June).
This is according to the usually reliable Mark Gurman, who claimed in an article on Bloomberg (via 9to5Mac) that the iPhone SE 4, iPad 11, and new iPad Air models will all launch during that period, as will upgraded iPad keyboards.
Most of that doesn’t come as much of a surprise, as Gurman has previously said that we’d see the iPhone SE 4 and new iPad Air models in the first half of 2025, but now he’s narrowed it down to the spring – so, in other words, you shouldn’t expect them before late March.
Arguably the more interesting detail, though, is that the iPad 11 will also launch in that window, since this is a new claim from Gurman. Previously, it was very unclear when we might see this new entry-level iPad, so while we’d take this claim with a pinch of salt, it at least provides a possible window.
The iPad 10.9 (2022) (Image credit: Future)
Bringing AI to the whole iPad family
It would certainly make sense for Apple to launch the iPad 11 before too long, since the current model – the iPad 10.9 (2022) – doesn’t support Apple Intelligence. Assuming the new one does, that will mean the latest models of all Apple’s tablets are ready for AI.
We expect similar support from the iPhone SE 4, but the iPad Air (2024) already supports AI, so that won’t be an addition to the next iPad Air model. Still, it will likely be more powerful and therefore even better at handling AI tasks than the current iPad Air.
AI aside, Gurman mentions multiple models of the upcoming entry-level iPad, apparently codenamed ‘J481’ and ‘J482’. He doesn’t say how these models differ, but it could be that the slate will be offered in two different screen sizes, just like the Air and Pro models are. If these leaks are right, we’ll find out somewhere between five and eight months from now.
Garmin's Forerunner range, which includes some of the best running watches on the market, is getting new software features to improve swim workout tracking and a new Meditation activity.
It's the latest public beta update for several Garmin Forerunner models, specifically the Forerunner 965, Forerunner 265, Forerunner 165, Forerunner 955, and the Forerunner 255.
Garmin says Public Beta Version 21.14 features a slew of fixes for bugs, updated mobile translations, and a new Moon Phase glance. However, the swimming and meditation updates are the headlines.
Garmin says the new update brings "Improvements to support pool swim workout with pace alerts and critical swim speed" and "Improvements to the pool swim rest screen and alert tones." While the Forerunner range is primarily known for its running watches, this update should bring much better functionality for swimming workouts.
Garmin also says it has added a new Meditation activity and Meditation Videos. This should give users more focused tracking during meditation, and even guidance for people who need some pointers.
How to download the Garmin Forerunner beta
(Image credit: Garmin)
Garmin's public beta software program gives users access to software features that haven't been released to the general public yet. It works across all of the best Garmin watches, and enrolling is pretty simple. To join the beta program, simply sign into Garmin Connect from a computer, select the devices icon in the upper right, select the device you'd like to enroll, and click 'Join Beta Software Program'. Agree to the terms and you're all set.
If you're already enrolled, you can download the latest 21.14 release by going to Main Menu > Settings > System > Software Update > Check For Updates, if your device hasn't updated automatically.
Even if you aren't interested in the new swimming or meditation features, there are a host of tweaks and fixes to navigation and heart-rate measurement that you'll probably benefit from. Here are the full release notes:
In recent years, cyberattacks have continued to grow nearly exponentially year over year. This intensity will only increase with sophisticated technologies such as generative AI in the hands of threat actors.
In 2023, security experts reported a staggering 75% increase in cyberattacks - 85% of which were attributed to generative AI. Relentlessly fast and precise, GenAI cyberthreats automatically determine optimal attack strategies, self-modify code to avoid detection, and launch attacks around the clock in a completely automated way.
For businesses to defend against these enhanced attacks, they must find a way to leverage AI themselves. But it’s not as simple as fighting fire with fire - AI cybersecurity tools are also vulnerable to attacks, with even the slightest interference with datasets or inputs risking system failures. Businesses cannot rely on a single solution to meet the rising level of AI cyberthreats, especially when the full extent of their capabilities is yet to be determined. The only way through this growing security emergency is with proactive security planning that provides multiple contingencies for preventing, detecting and eliminating cyberthreats across overlapping security tools and protocols. This comprehensive approach is known as defense in depth.
The list of vulnerabilities that cyberattacks can exploit is a long one. LLMs are particularly good at quickly identifying these weak spots, like zero-day vulnerabilities. These particular vulnerabilities can quickly become single points of failure that can be used to bypass existing security measures, opening the floodgates for threat actors to send cascading failures through cybersecurity infrastructure and gain extensive access to business systems.
Cybersecurity teams should be operating on the assumption that all software and hardware in use contains bugs that can be exploited to access business systems, whether in their own IT infrastructure or in third-party services. For this reason, businesses cannot rely solely on any one security defense, but must instead employ in-depth, layered security defenses.
The defense in depth philosophy
Defense in depth focuses on three key levels of security: prevention, detection and response. It prioritizes the ‘layering’ of multiple defenses across these levels to extensively protect all security controls, including both tools and best-practice procedures across staff teams.
Technical controls such as firewalls and VPNs, administrative and access controls such as data handling procedures, continuous security posture testing and monitoring, security documentation, and even physical controls like biometric access must all be accounted for. If one tool or approach proves inadequate, another will be there to back it up - that is why the philosophy is also known as layered defense. It ensures that there are no single points of failure in a business system, guarding against complete disruption if a component malfunctions.
The key principle is that these three levels work together: if prevention fails, detection can identify the threat. If detection fails, a strong response can limit the damage.
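To make that interplay concrete, here is a deliberately simplified sketch of how the three levels might wrap a single inbound request. The helper functions (is_allowed, looks_anomalous, alert_team, quarantine) are hypothetical stand-ins for real controls such as firewalls, anomaly detection, and incident response runbooks, not part of any specific product.

```python
# Illustrative only: a toy layered check over one request. The helpers named
# below are hypothetical stand-ins for real prevention, detection and response
# controls (firewalls, EDR/SIEM alerting, session isolation, runbooks).

def handle_request(request):
    # Layer 1 - prevention: block anything that fails access rules outright
    if not is_allowed(request):
        return "blocked"

    # Layer 2 - detection: flag suspicious behavior that slipped past prevention
    if looks_anomalous(request):
        alert_team(request)           # detection on its own is not enough...

        # Layer 3 - response: contain the damage while humans investigate
        quarantine(request.session)   # ...so limit what this session can reach
        return "contained"

    return "served"
```

The point of the layering is that a miss at any single level - a firewall rule that is too permissive, or a detector fooled by novel AI-generated traffic - does not translate directly into a breach.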
It is a dynamic solution, not a static one. The goal for cybersecurity teams is to create a live, responsive ecosystem that can be easily assessed and adapted. Reporting measures and regular testing protocols are a must for any cybersecurity strategy, but especially for defense in depth, which entails a wide variety of tools and processes that are easy to lose track of. What works today may not work tomorrow, especially with the rapid developments of AI cyberthreats.
For a defense in depth approach to be successful, cybersecurity teams must choose their tools carefully and strategically.
The need for diverse tools
Diverse tools are key to establishing defense in depth. While AI is now a must-have for every cybersecurity strategy, it would be unwise to stack your defenses with only AI software, as these tools will all be vulnerable to similar types of attacks (such as adversarial attacks, which involve feeding AIs incorrect data to encourage incorrect behavior).
Diverse cybersecurity strategies prevent attackers from exploiting a single system vulnerability, slowing down even AI-enabled attacks so that they can be identified and eliminated before systems are compromised. For example, data protection practices should include not only encryption, but additional fortifications such as data loss prevention tools, as well as processes for data backup and recovery.
Businesses should also utilize as much of their own data as possible when forming their cybersecurity defense in order to create tailored AI tools that can more effectively determine unusual user behavior or network activity than an external AI tool could.
Naturally, tools should be chosen in accordance with a business’s system and operations - for example, businesses with critical online services may employ more defenses against DDoS attacks.
Invest in staff training
Educating system users on the importance of data protection and authentication is equally important. A network monitoring tool can detect a threat, but user education and processes will strengthen diligence around credential protection - for example, by preventing shared passwords and encouraging the use of single sign-on or two-factor authentication - leading to fewer attackers gaining unauthorized access in the first place.
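To illustrate the two-factor authentication mentioned above, here is a minimal sketch of time-based one-time passwords using the pyotp library; the library choice, account name, and issuer are ours for the example, not something prescribed here.

```python
# Minimal TOTP-based two-factor authentication sketch using pyotp
# (library, account name and issuer chosen purely for illustration).
import pyotp

# Generated once per user at enrollment, stored server-side, and shared with
# the user's authenticator app (typically via a QR code of the URI below).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# At login, a stolen or shared password alone is no longer enough:
code = input("Enter the 6-digit code from your authenticator app: ")
print("Access granted" if totp.verify(code) else "Access denied")
```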
Cybersecurity teams need to plan for all possible scenarios, including new or optimized attacks that have been enhanced by AI or other emerging technologies. It is crucial that teams are given the resources to research potential unknown threats and stay up to date with industry developments and emerging risks.
The most important takeaway is that, while no single security measure can be entirely foolproof, defense in depth provides a level of redundancy and resiliency that makes it much harder for an attacker to breach the system, so businesses don’t have to be helpless. The more organizations that adopt the defense in depth philosophy, the more difficult it becomes for threat actors to exploit the data of businesses and their customers.
We've rated the best identity management software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Voice is our primary means of communication, and telephony has enabled us to connect using our voices for over a century. The phone call as we know it has evolved from analogue to digital, from fixed to mobile, and from low speech quality to natural speech quality. One major advancement, however, was still lacking: how to enable a fully authentic, immersive sound to be transmitted, live.
The introduction of the IVAS (Immersive Voice and Audio Services) codec, standardized by 3GPP in Release 18 in June this year, represents a major advancement in audio technology. Unlike traditional monophonic voice calls, IVAS enables the transmission of immersive, three-dimensional audio, offering a richer, more lifelike communication experience. This innovation is made possible by new audio formats optimized for conversational spatial audio experiences. One such example is a new Metadata-Assisted Spatial Audio format, MASA, which uses only two audio channels plus metadata for spatial audio descriptions. Spatial audio calls allow users to experience sound as though it were happening in real life, complete with features like head tracking.
Below we will explore the challenges of bringing 3D live calling to mobile phones, the requirements addressed in spatial communication and the new IVAS codec, and the game-changing impact live 3D audio will have for people, mobile operators, and business smartphones.
Bringing 3D calling to Mobile Phones
The last major innovation in voice calling was the EVS codec, introduced in 2014 and recognized by consumers as HD Voice+. While it significantly enhanced call quality, like all previous codecs, it only offered a monophonic listening experience.
With the introduction of 3D audio calling—the biggest leap in voice-calling audio technology in decades—comes the challenge of creating an authentic, immersive experience in everyday communication. While voice technology has evolved significantly – from analog to digital, fixed to mobile, and from low quality to natural speech quality – transmitting spatial audio, where sounds are perceived as naturally coming from all around, is far more complex to recreate in mobile environments.
Achieving this level of immersive sound experience has been easier in controlled settings like movie theaters and video games, where sound design is a core element, but reproducing it in everyday mobile calls introduces a range of technical hurdles including real-time spatial sound processing, hardware constraints, and ensuring compatibility across devices.
The Immersive Voice and Audio Services (IVAS) voice codec is therefore the most significant step forward in voice-call audio technology for decades.
How to Tackle and Overcome Spatial Communication Challenges
There have been several challenges to overcome for Immersive Voice to become a robust spatial audio solution. A key issue is noise reduction, which is crucial for enhancing speech clarity in settings like concerts or out in nature. Traditional noise reduction methods typically only filter out continuous sounds, such as air conditioning hums or traffic noise, but often leave other background noise untouched. Wind interference also poses a challenge by introducing unwanted noise and causing fluctuations in audio levels.
However, recent advancements in machine learning and intelligent noise reduction have addressed these issues. Immersive audio technology, for example, is designed to intelligently adjust how much background noise is reduced depending on the surrounding environment, while also giving users control to manually adjust the level of noise reduction themselves. This ensures that the essential sounds are transmitted while minimizing unwanted background noise.
Immersive audio setups with multiple microphones and loudspeakers also face a major obstacle – acoustic echo. This happens when microphones pick up sound from nearby speakers, causing unwanted feedback. The problem is even more challenging in setups with spatial audio, where the placement and number of loudspeakers affect sound quality and the device's ability to capture spatial audio. Traditional Acoustic Echo Cancellation (AEC) methods often do not work well in these complex environments. To solve this, a machine-learning-based spatial AEC solution was created, which removes the loudspeaker sound from the microphone input using a reference signal. This improves audio quality, especially for spatial audio in real-time voice applications.
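The machine-learning-based spatial AEC described above is beyond a short snippet, but the core idea - subtracting an estimate of the loudspeaker signal from the microphone input using a reference signal - can be sketched with the classical single-channel NLMS adaptive filter that such systems build on. This is an illustrative approximation under those assumptions, not the ML solution itself.

```python
# Single-channel echo-cancellation sketch using a classical NLMS adaptive
# filter: estimate the echo path from the loudspeaker (reference) signal and
# subtract that estimate from the microphone input. Illustrative only - not
# the machine-learning-based spatial AEC described in the article.
import numpy as np

def nlms_echo_cancel(mic, reference, taps=128, mu=0.5, eps=1e-8):
    w = np.zeros(taps)                # adaptive estimate of the echo path
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        # Most recent reference samples, newest first, zero-padded at the start
        x = reference[max(0, n - taps + 1):n + 1][::-1]
        x = np.pad(x, (0, taps - len(x)))
        e = mic[n] - np.dot(w, x)     # mic sample with the estimated echo removed
        w += (mu / (np.dot(x, x) + eps)) * e * x   # NLMS weight update
        out[n] = e
    return out

# Toy example: local speech buried under a delayed, attenuated copy of the far end
rng = np.random.default_rng(0)
far_end = rng.standard_normal(8000)            # loudspeaker / reference signal
near_end = 0.1 * rng.standard_normal(8000)     # local talker
echo = 0.6 * np.concatenate([np.zeros(32), far_end[:-32]])
cleaned = nlms_echo_cancel(near_end + echo, far_end)
```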
Introducing the IVAS codec
To bring spatial audio to mobile phone calling, in addition to Over-the-Top (OTT) services, the 3rd Generation Partnership Project (3GPP) recently adopted a new voice codec standard. Developed through the collaboration of 13 companies, the IVAS codec standard was included in the 3GPP's Release 18, building on the widely used Enhanced Voice Services (EVS) codec. Importantly, the IVAS codec maintains full backwards compatibility, ensuring seamless interoperability with existing voice services.
One of the key innovations during IVAS standardization was the creation of a new parametric audio format, Metadata-Assisted Spatial Audio (MASA), designed specifically for devices with limited form factors, like smartphones. The IVAS codec integrates a built-in renderer that supports head-tracked binaural audio and multi-loudspeaker playback using the MASA format.
Additionally, an immersive voice client SDK can serve as the IVAS front-end, capturing spatial audio from device microphones and converting it into the standardized MASA format. This technology enables true 3D immersive audio experiences for various types of voice calls.
The Power of 3D Live Audio: What it Means for People, Operators, and Businesses
New immersive 3D audio revolutionizes the audio experience for consumers, enterprises, and industries. For consumers, it deepens engagement in interactions with friends and family by sharing local sounds, whether live-streamed or recorded, and offers full immersion in synchronized metaverse experiences. For enterprises, 3D audio voice calling unlocks new capabilities, from enhanced customer experience through directional audio to transforming team collaboration and decision-making. In industrial settings, audio analytics can drive automated processes like predictive maintenance, streamlining operations, and boosting efficiency.
In order to enable these experiences across diverse network conditions, service providers need scalable solutions that optimize performance regardless of bandwidth constraints. The 3GPP IVAS standard codec accommodates bitrates ranging from 13.2 to 512 kbit/s, ensuring immersive audio quality whether used in congested networks or high-quality streaming environments. This scalability empowers service providers to support more users while delivering rich audio experiences.
Looking to the future, it is expected that voice-based user behavior will continue to evolve. Beyond traditional calls, spatial audio communication will expand to include semi-synchronous messaging through popular apps, people sending voice clips to each other, and more extensive use of group calls. With the rise of extended reality devices and services across industries, the scope of voice communication is set to become even broader, with immersion as a defining feature. A key factor in this evolution will be standardization and the integration of the IVAS codec into the latest 5G advanced standard, which is essential to ensure the interoperability needed to bring 3D calling to every phone at the push of a button.
We've rated the best business phone systems.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Liquid Web has announced the launch of a new GPU hosting service designed to keep pace with growing high-performance computing (HPC) requirements.
The new offering will harness Nvidia GPUs and is geared specifically toward developers focused on AI and machine learning tasks, the company confirmed.
Users capitalizing on the new service can expect a range of benefits, according to Liquid Web, including “accelerated performance with Nvidia GPUs”.
"Untapped potential"Sachin Puri, Chief Growth Officer of Liquid Web, said the new service will support developers at an “affordable price” and offer reserved hourly pricing options for users.
“AI has infinite untapped potential, and we want to empower developers and AI innovators to harness the full power of NVIDIA GPUs,” Puri said.
“With performance benchmarks that speak for themselves, this solution is the clear choice for accelerating AI workloads and maximizing value. And this is just the beginning — stay tuned as we expand our lineup to deliver even more powerful solutions in the near future.”
Liquid Web CTO Ryan MacDonald noted the firm’s on-demand servers can offer “up to 15 percent” better GPU performance than virtualized environments, at an equal or lower cost.
“Our on-demand NVIDIA-powered servers with pre-configured GPU stack let customers quickly deploy AI/ML and deep learning workloads — maximizing performance and value while focusing on results rather than setup,” MacDonald said.
What users can expect from Liquid Web’s GPU hosting service
The service will leverage a range of top-tier GPUs, including the Nvidia L4, L40S, and H100. These, the company said, offer “exceptional processing speeds” that are ideal for AI and machine learning applications, large language model (LLM) development, deep learning, and data analytics.
As part of the service, users will also have access to on-demand bare metal options, which Liquid Web says will enable enterprises to “gain full control” over their infrastructure.
The move from Liquid Web comes amid a period of sharpened enterprise focus on AI, with companies ramping up development globally.
Analysis from Grand View Research shows the global AI market is expected to top $1.81 trillion in value by 2030, marking a significant increase compared to 2023, when the market value stood at $196.63 billion.
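For context, and as our own back-of-the-envelope arithmetic rather than Grand View Research's, those two figures imply a compound annual growth rate of roughly 37% between 2023 and 2030:

```python
# Implied CAGR from the cited figures: $196.63bn in 2023 to $1.81tn in 2030.
start, end, years = 196.63e9, 1.81e12, 2030 - 2023
cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # about 37%
```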
There have been growing concerns over both costs and hardware availability throughout this period, however, with GPU prices skyrocketing and wait times for hardware growing.
That’s why flexibility is a key focus for the service, according to Liquid Web. The company noted that users can customize GPU hosting to meet specific performance needs on an as-needed basis, and said it aims to target companies ranging from startups to larger enterprises.
In fiction, an unseen, non-human hand moving the cursor across your computer screen and typing without touching the keyboard is usually a sign of a malicious AI hijacking something (or a friendly ghost helping you solve mysteries, like in the TV show Ghost Writer). Thanks to Anthropic's new computer use feature for its AI assistant Claude, there's a much more benevolent explanation now.
Fueled by an upgraded version of the Claude 3.5 Sonnet model, this feature – dubbed 'computer use' – lets Claude interact with your computer much like you would. It takes the AI assistant concept a step beyond text and a voice, with virtual hands typing, clicking, and otherwise manipulating your computer.
Anthropic bills computer use as a way for Claude to handle tedious tasks. It can help you fill out a form, search and organize information on your hard drive, and move information around. While OpenAI, Microsoft, and other developers have demonstrated similar ideas, Anthropic is the first to have a public feature, though it's still in beta.
"With computer use, we're trying something fundamentally new," Anthropic explained in a blog post. Instead of making specific tools to help Claude complete individual tasks, we're teaching it general computer skills—allowing it to use a wide range of standard tools and software programs designed for people."
The computer use feature is made possible by Claude 3.5 Sonnet's improved performance, particularly with digital tools and coding software. Though somewhat overshadowed by the spectacle of the computer use feature, Anthropic also debuted a new model called Claude 3.5 Haiku, a more advanced version of the company's lower-cost model that is capable of matching Anthropic's previous highest-performing model, Claude 3 Opus, while still being much cheaper.
Invisible AI assistance
You can't just give an order and walk away, either. Claude's control of your computer has some technical troubles as well as deliberate constraints. On the technical side, Anthropic admitted Claude struggles with scrolling and zooming around a screen. That's because the AI interprets what's on your screen as a collection of screenshots, and then it tries to piece them together like a movie reel. Anything that happens too quickly or that changes perspective on the screen can flummox it. Still, Claude can do quite a lot by manipulating your computer, as seen above.
Unrestrained automation has obvious perils even when working perfectly, as so many sci-fi movies and books have explored. Claude isn't Skynet, but Anthropic has placed restraints on the AI for more prosaic reasons. For instance, there are guardrails stopping Claude from interacting with social media or any government websites. Registering domain names or posting content is not allowed without human control.
"Because computer use may provide a new vector for more familiar threats such as spam, misinformation, or fraud, we're taking a proactive approach to promote its safe deployment. We've developed new classifiers that can identify when computer use is being used and whether harm is occurring," Anthropic wrote. "Learning from the initial deployments of this technology, which is still in its earliest stages, will help us better understand both the potential and the implications of increasingly capable AI systems."