Technology

We Tried MealPro Meal Delivery, and It Was Better Than Takeout

CNET News - Thu, 05/15/2025 - 07:00
This premade meal delivery service provides tasty, healthy meals with the nutrition information to back it up.
Categories: Technology

Android Auto is getting a big Gemini upgrade soon – and also a slightly baffling media controls change

TechRadar News - Thu, 05/15/2025 - 06:46
  • Google just made some major changes to Android Auto
  • The introduction of Gemini allows for easy conversational voice prompts
  • But tweaks to media controls have frustrated some

Google has introduced a number of updates recently, including bringing its AI-powered assistant Gemini to smartwatches, televisions and into the vehicle environment through updates to Android Auto.

Software update 14.4, which is available now in the beta testing program, has made a number of subtle tweaks to its so-called ‘Coolwalk screen’, which essentially offers a number of applications in one, easy-to-navigate interface on Android Auto-compatible vehicle head units and vehicles running the Android Automotive OS.

The main issue lies with the media playback controls, which have been shuffled around just to annoy those who have formed enough muscle memory to play, pause and rewind without even having to look at the infotainment display.

Auto Evolution reported that the play/pause button is now aligned to the left of the screen on left-hand-drive vehicles, placing it closer to the driver but swapping positions with the rewind/previous button.

However, seeing as the updates are currently only available in the beta testing program, the search giant still has plenty of time to listen to user feedback and make further changes if it deems them necessary.

Gemini jumps in on the road trip

Google is pushing its AI assistant to a number of smart devices, including watches, headphones and smart glasses, allowing users to receive recommendations and answers to common questions using conversational voice prompts.

The feature is also upgrading the current Google Assistant voice commands that feature in Android Auto infotainment systems and those cars running a native Android Automotive operating system.

This means that both drivers and passengers can request specific locations along the route, such as service stations that are good for walking dogs, or the fastest charging stations in the vicinity.

When users connect their favoured messaging app, Google says that Gemini can summarize any messages received and even go so far as translating them into another language before sending – should you have lots of bilingual buddies.

Gemini looks set to take away some of the awkward app shuffling that motorists are tasked with, thanks to the ability to now ask the AI assistant to summarize the news headlines (with or without sports) and even answer those difficult questions that kids inevitably pose on a long journey.

Google says Gemini will be available on Android Auto in the "coming months", followed by those cars running the native Android Automotive OS.

Categories: Technology

Your Subscriptions Are Out of Control. CNET Survey Shows Americans Spend Over $1,000 a Year and Are Sick of It

CNET News - Thu, 05/15/2025 - 06:00
The economy has over half of subscribers second-guessing their monthly subscription charges. Here's how to cut back and save money.
Categories: Technology

Stagflation or Recession: With Tariffs on Pause, Is the Economy Really OK?

CNET News - Thu, 05/15/2025 - 06:00
Trump's trade war is changing economic forecasts every day.
Categories: Technology

Google's About to Tell Us More About Its Android XR Plans for Glasses

CNET News - Thu, 05/15/2025 - 05:00
What to expect from Google I/O about the company's AI-infused strategy for AR and VR.
Categories: Technology

The boombox is back in a cool new Bluetooth version that still plays cassettes, now just need to remember how to breakdance

TechRadar News - Thu, 05/15/2025 - 02:00
  • We Are Rewind's GB-001 is a real cassette boombox, but it's got Bluetooth too
  • More than enough power to rock reasonably sized blocks
  • £379 / €449 (about $505 / AU$781)

One of the things I miss about the 1980s, other than my youth, my waistline and my faith in humanity, is the boombox. The boombox is one of the most important bits of audio tech ever made: it took music out of the bedroom or living room and into the streets, soundtracking rap battles and breakdancing and changing pop culture for the better.

And now it's back! Back! BACK!

The new We Are Rewind GB-001 looks like a boombox. It plays tapes like a boombox. And it records onto those tapes like a boombox. But it also comes with some very welcome improvements over the '80s devices it's so clearly inspired by.

We Are Rewind GB-001 boombox: key features and pricing

The big differences this time around are batteries and Bluetooth. The former means your boombox won't have the battery-munching habit I remember from my long-gone Sanyo: the 3,000mAh rechargeable and user-replaceable battery is good for 10 hours on speakers and 15 with headphones.

Secondly, there's Bluetooth 5.4 so you can stream from your smartphone if you'd rather not carry a satchel full of audio tapes around with you. You can also connect an external sound source via the 3.5mm jack or use an external mic for recording.

It's a lot louder than my boombox ever was, too: 104W of power courtesy of Class D amplification.

The spec includes dynamic power control to reduce distortion, Dolby B emulation for those noise-reduced cassettes, and support for both normal (Type I) and chrome (Type II) tapes. And at 6.81kg (just under 15lbs) it's not so heavy you won't want to take it anywhere.

The new boombox is unveiled today, 15 May, at the High End Munich 2025 hi-fi show – we're planning to track it down there ASAP. There's no confirmed release date yet, but the official price is very reasonable at £379 / €449 (about $505 / AU$781).

Categories: Technology

Why Red Teaming belongs on the C-suite agenda

TechRadar News - Thu, 05/15/2025 - 01:41

Cyber threats have evolved far beyond the domain of the IT department. With the introduction of the Cyber Security and Resilience Bill to the UK parliament, cyber security is now a national priority, and the stakes for businesses are higher than ever.

The bill proposes tougher regulations and potential fines of up to £100,000 for failing to address specific threats, making proactive cyber defense a financial imperative for businesses once the legislation is passed. Although many organizations invest in digital safeguards, the method that offers a genuine test of true resilience is Red Teaming.

During Red Teaming simulations, an independent ‘Red Team’ assumes the role of real attackers, probing systems, processes, and personnel to expose vulnerabilities. However, when treated solely as a technical exercise, Red Teaming can fail to result in meaningful action. Without executive engagement, even serious vulnerabilities may go unresolved.

Converting technical insights into business impact

One of the biggest challenges in Red Teaming is making sure that insights connect with senior stakeholders. Often, reports focus on niche technical exploits or zero-day vulnerabilities. While these details matter to security engineers, they don’t paint the broader picture of a successful attack.

Organizations that understand this map technical findings to financial, operational, and reputational risks. Instead of discussing abstract vulnerabilities, their Red Team outputs highlight and articulate real-world consequences, such as: “A compromise of this server could disrupt our online platform for 48 hours, costing an estimated £X in lost sales,” or “An attacker could access 200,000 customer records, risking regulatory penalties of up to 4% of global turnover.” This type of language cuts through technical jargon and positions the issues in terms that grab board-level attention.

This approach can even help shape an organization's risk appetite. By working closely with security teams, C-suite leaders and directors can begin to define thresholds around acceptable risk. For instance, once they see the severity and ease with which specific systems can be breached, many executives quickly realize that “low probability” vulnerabilities may still represent “high impact” scenarios that must be addressed.
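To make the idea of risk-appetite thresholds concrete, here is a minimal sketch in Python. Every finding name, likelihood, impact value, and threshold in it is an illustrative assumption rather than output from a real Red Team engagement; it simply shows how findings expressed as likelihood and financial impact can be turned into the kind of expected-loss figures a board can act on.

```python
# Hypothetical sketch: translating Red Team findings into board-level figures.
# All names, likelihoods and impact values are illustrative assumptions,
# not data from any real engagement.
findings = [
    {"name": "Online platform server compromise", "annual_likelihood": 0.10, "impact_gbp": 2_000_000},
    {"name": "Exposure of 200,000 customer records", "annual_likelihood": 0.05, "impact_gbp": 8_000_000},
    {"name": "Legacy VPN credential reuse", "annual_likelihood": 0.30, "impact_gbp": 250_000},
]

RISK_APPETITE_GBP = 100_000  # assumed board-defined threshold for expected annual loss

for f in findings:
    expected_loss = f["annual_likelihood"] * f["impact_gbp"]  # simple annualized loss expectancy
    action = "escalate to board" if expected_loss > RISK_APPETITE_GBP else "manage within IT"
    print(f"{f['name']}: expected annual loss £{expected_loss:,.0f} -> {action}")
```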

Facilitating concrete security advancements

Ensuring that Red Team results spur real change requires more than technical remediation lists. It calls for clear, focused advice that aligns with the organization's primary goals. This guidance often shapes how future incidents will be handled and informs security spending.

Crucially, an iterative feedback loop is needed. After a Red Team engagement finishes, forward-thinking companies should schedule post-engagement debriefs that gather board members, department heads, and security leaders around the same table.

Together, they can examine what went wrong and what went right. This culture of transparency turns Red Team insights into targeted, high-level decisions. For instance, if a simulated attack revealed weaknesses in cloud services, senior leaders might pivot the budget to upgrade protections and work with external suppliers to strengthen service-level agreements.

In the UK, major financial institutions were among the first to adopt advanced threat-led testing under programs such as CBEST. Lessons from these exercises demonstrate how immediate executive action can be pivotal. Reports are not simply filed away; boards commission follow-up work to verify that vulnerabilities have been adequately fixed and introduce ongoing mini-tests to measure improvement. Ultimately, this keeps cybersecurity elevated as a business priority rather than dropping off the radar until major incidents occur.

Presenting the business value of Red Teaming

Business leaders often grapple with the return on investment when it comes to cyber security. However, linking Red Teaming directly to measurable risk reduction helps ease those concerns. The cost of a Red Team exercise is typically much less than the fallout from a data breach or ransomware attack. By helping organizations tackle weaknesses before attackers do, Red Teaming can prevent costly incidents that cause disruptions and damaged reputations.

In a landscape where customer and investor trust is invaluable, proactive efforts to strengthen defenses can make a competitive difference. Many organizations now see cyber security as an enabler of digital transformation. By identifying weaknesses within new technologies, be they cloud services, Internet of Things devices, or mobile applications, Red Team engagements provide a safety net for innovation. Executive teams can confidently pursue new products or service offerings, knowing potential security pitfalls will be flagged early.

There is growing recognition that Red Teaming provides unique validation for cyber security investments. Boards commonly ask if the millions spent on firewalls and endpoint detection tools are genuinely effective. Red Team exercises offer a reality check. If attackers easily circumvent defenses without detection, it becomes clear where future resources should be focused. Over time, regular Red Team engagements create a measurable decline in critical findings, demonstrating tangible improvement in security posture.

Turning security into a strategic priority

Red Teaming goes far beyond a routine security audit. It exposes an organization's technical and strategic vulnerabilities, offering leadership a holistic view of their risk landscape. When its findings are translated into business impact, Red Teaming helps leaders understand cyber risk in terms of financial loss, operational disruption, and reputational damage. This reframing moves cyber security out of the IT silo and firmly onto the strategic agenda.

Driving meaningful improvements requires cross-functional collaboration and shared accountability. With the UK’s Cyber Security and Resilience Bill raising the bar for organizational preparedness, Red Teaming offers a practical, repeatable way to measure and improve cyber resilience over time. It gives leaders the confidence to act early, adapt quickly, and strengthen their defenses before a real adversary strikes. Those who embrace it will not only reduce risk but also build a more agile, trusted, and future-ready organization.

We list the best forensic and pentesting Linux distro.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

I found the world's largest external SSD, and at 30.72TB, it is even roomier than the biggest hard disk drives out there

TechRadar News - Wed, 05/14/2025 - 22:28
  • Palm-sized SSD with 30TB capacity offers jaw-dropping storage in an ultra-compact aluminum shell
  • Glyph Blackbox Plus U.2 is faster than any portable HDD, but slower than PCIe Gen4 SSDs
  • Formatted for macOS, but reformatting for Windows may confuse less tech-savvy users

Glyph has unveiled its Blackbox Plus U.2 External SSD, a high-capacity, high-speed storage solution aimed at professional content creators, data-intensive workflows, and enterprise users.

Its standout 30.72TB capacity exceeds even the largest external HDD models, which typically top out around 24TB.

Measuring just over five inches in length and under an inch thick, the device is about the size of a rugged smartphone and easily fits in one hand.

Blackbox Plus U.2 still lags behind PCIe Gen4 internal SSDs

The Blackbox Plus U.2 is built on enterprise-grade NVMe technology and offers sustained data transfer speeds of up to 1,050 MB/s, which is fast for an external drive but notably slower than top-tier PCIe Gen4 internal SSDs.

That said, this portable SSD connects via a 10Gbps USB 3.2 Gen 2 Type-C interface, which is backward compatible with USB 3.0 and Thunderbolt 3. This ensures broad compatibility across various hardware setups.
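For context on those headline numbers, a quick back-of-the-envelope calculation (assuming the drive actually sustains its rated 1,050 MB/s, which real-world transfers may not) shows how long filling the 30.72TB model would take and how close that speed sits to the 10Gbps interface ceiling:

```python
# Back-of-the-envelope check on the quoted figures; assumes the rated speed is sustained.
capacity_tb = 30.72          # flagship model capacity
rated_mb_s = 1050            # vendor-quoted sustained transfer rate
interface_gbps = 10          # USB 3.2 Gen 2

capacity_mb = capacity_tb * 1_000_000        # decimal TB -> MB, as storage vendors count
hours_to_fill = capacity_mb / rated_mb_s / 3600
interface_mb_s = interface_gbps * 1000 / 8   # ~1,250 MB/s before protocol overhead

print(f"Filling {capacity_tb}TB at {rated_mb_s} MB/s takes about {hours_to_fill:.1f} hours")
print(f"Interface ceiling is roughly {interface_mb_s:.0f} MB/s, so the drive sits near the USB limit")
```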

The drive ships preformatted for macOS but can be reformatted for Windows. Still, setup may pose a challenge for users unfamiliar with drive formatting or cross-platform configurations.

Cooling is handled through a fanless aluminum enclosure that doubles as a heatsink. While this passive system eliminates mechanical noise, it may not be ideal in high-temperature environments under sustained loads.

The device requires an external power supply, which impacts portability. Compared to bus-powered SSDs, this setup is bulkier and less convenient for mobile or casual users.

In the box, Glyph includes a USB-C to USB-C cable, a USB-C to USB-A cable, and a three-pronged power adapter. Buyers also get a three-year hardware warranty, two years of Level-1 data recovery, and a one-year advance replacement program.

The Blackbox Plus U.2 is available in 7.6TB, 15.36TB, and 30.72TB models, priced at $899.95 (with a $200 discount), $2,399.95, and $4,999.95, respectively. Preorders are now open on Glyph’s official website.

Categories: Technology

Today's NYT Mini Crossword Answers for Thursday, May 15

CNET News - Wed, 05/14/2025 - 21:12
Here are the answers for The New York Times Mini Crossword for May 15.
Categories: Technology

Apple wants to connect thoughts to iPhone control – and there's a very good reason for it

TechRadar News - Wed, 05/14/2025 - 19:00
  • Apple announced plans to support Switch Control for Brain-Computer Interfaces
  • The tool would make devices like iPhones and Vision Pro headsets accessible for people with conditions like ALS
  • Combined with Apple’s AI-powered Personal Voice feature, brain-computer interfaces could allow people to think words and hear them spoken in a synthetic version of their voice

Our smartphones and other devices are key to so many personal and professional tasks throughout the day. Using these devices can be difficult or outright impossible for those with ALS and other conditions. Apple thinks it has a possible solution: thinking. Specifically, a brain-computer interface (BCI) built with Australian neurotech startup Synchron that could provide hands-free, thought-controlled versions of the operating systems for iPhones, iPads, and the Vision Pro headset.

A brain implant for controlling your phone may seem extreme, but it could be the key for those with severe spinal cord injuries or related conditions to engage with the world. Apple will support Switch Control for those with the implant embedded near the brain’s motor cortex. The implant picks up the brain’s electrical signals when a person thinks about moving, then translates that electrical activity and feeds it to Apple's Switch Control software, which turns it into digital actions like selecting icons on a screen or navigating a virtual environment.

Brain implants, AI voices

Of course, it's still early days for the system. It can be slow compared to tapping, and it will take time for developers to build better BCI tools. But speed isn’t the point right now. The point is that people could use the brain implant and an iPhone to interact with a world they were otherwise locked out of.

The possibilities are even greater when looking at how it might mesh with AI-generated personal voice clones. Apple's Personal Voice feature lets users record a sample of their own speech so that, if they lose their ability to speak, they can generate synthetic speech that still sounds like them. It’s not quite indistinguishable from the real thing, but it’s close, and much more human than the robotic imitation familiar from old movies and TV shows.

Right now, those voices are triggered by touch, eye tracking, or other assistive tech. But with BCI integration, those same people could “think” their voice into existence. They could speak just by intending to speak, and the system would do the rest. Imagine someone with ALS not only navigating their iPhone with their thoughts but also speaking again through the same device by "typing" statements for their synthetic voice clone to say.

While it's incredible that a brain implant can let someone control a computer with their mind, AI could take it to another level. It wouldn't just help people use tech, but also to be themselves in a digital world.

Categories: Technology

Marvel Rivals' Galacta's Gift Event Makes Ranking Up Even Easier

CNET News - Wed, 05/14/2025 - 18:35
Marvel Rivals' ranked competitive mode has been mired in controversy since the game's release. Despite this, it's about to be even easier to climb the ladder.
Categories: Technology

Best Internet Providers in Connecticut

CNET News - Wed, 05/14/2025 - 18:33
Check out CNET experts' recommendations for internet service providers in Connecticut.
Categories: Technology

Cheap(er) 15.36TB PCIe Gen 5 SSDs on the way as Adata launches new enterprise brand, but don't expect these to fit your PC case

TechRadar News - Wed, 05/14/2025 - 17:38
  • Adata T7P5 SSD brings 15.36TB and blistering Gen 5 speed to enterprise storage
  • Trusta isn’t just fast, it’s built for AI, virtualization, and high-efficiency data environments
  • With 13,500MB/s read speeds, the T7P5 crushes most consumer and prosumer storage options

With enterprise demand for AI servers and high-performance storage infrastructure booming, Adata is making a bold move into the data center and AI markets with the launch of its new enterprise brand, Trusta.

Revealed ahead of Computex 2025, Trusta promises to deliver advanced PCIe Gen 5 SSDs in massive capacities, blurring the line between performance and practicality.

Trusta’s flagship model, the T7P5 SSD, leads the new T7 Series and is built to handle demanding workloads such as AI training, vector databases, and virtual desktops.

T7P5 SSD delivers extreme speeds

This SSD offers blazing-fast read and write speeds of up to 13,500 MB/s and 10,400 MB/s, respectively, with capacities ranging from 1.92TB up to a staggering 15.36TB, making it one of the fastest enterprise SSDs introduced to date.

Unlike consumer models, the T7P5 is built in enterprise form factors like U.2, E1.S, and E3.S, ensuring compatibility with server and cloud hardware. However, it’s a large SSD, and it won’t fit inside a typical business desktop.

For enterprises with less intensive needs, Adata also offers the T7P4 PCIe Gen 4 SSD, which delivers up to 7,400 MB/s read and 5,050 MB/s write speeds, in capacities up to 7.68TB.

The entry-level T5 Series, which includes the T5P4B, T5S3B, and T5S3, supports both PCIe Gen 4 and SATA III interfaces. These drives are targeted at system boot operations and applications requiring data reliability over raw speed.

Still, performance isn’t just about headline numbers. Trusta integrates Flexible Data Placement (FDP) technology to optimize data flow, particularly under high-load conditions where latency and efficiency matter most.

For enterprise IT buyers and planners, Trusta’s lineup offers a compelling look at next-gen storage. But for average consumers in search of the best external SSD or a high-capacity Gen 5 upgrade, these drives are out of reach, both in terms of form factor and intended use case.

Via TechPowerUp

Categories: Technology

PGA Championship 2025: TV Schedule and How to Watch All the PGA Tour Golf From Anywhere

CNET News - Wed, 05/14/2025 - 16:00
It's the second major of the year as the world's top players head to the Quail Hollow Club.
Categories: Technology

Audible’s AI narration sounds impressive, but I'd rather hear the story told by a human

TechRadar News - Wed, 05/14/2025 - 16:00

Audiobooks have saved my sanity on many long commutes and have been great company while I'm cleaning or doing other chores. When the performance is good, it's easy to fall into the story. Audible wants authors and their readers to embrace AI as an alternative to human narration, but I am skeptical. Audible is offering publishers access to a fully integrated AI production pipeline. That includes auto-generating entire audiobooks with synthetic voices.

Their pitch is appealing on the surface: there are millions of books out there, and only a sliver of them ever make it into audio. Making audiobooks is expensive, time-consuming, and involves real people who need to be paid fairly for their time. An AI narrator is faster, cheaper, and a lot of people might not even notice it's not a human performing.

But "good enough" shouldn't be the standard for art, and audiobooks are very much an art form. Great narration adds depth, color, rhythm, and even new meaning to a text. It transforms reading aloud from words on a page you can hear to a real performance. Even if AI gets close in a technical sense, and I've heard AI audio that matches a human performance for at least a few minutes, we’ll still know the difference.

Human narration has nuance because it has context. The narrator understands not just the definition of the words they're saying, but the emotion and history behind them. They know the difference between a sigh of relief and a sigh of resignation. AI can approximate those sounds, sometimes amazingly so, but it's like a pet trick. A dog can cover its eyes, but that's not actually the dog feeling embarrassed.

The more AI voices fill our earbuds, the more we risk turning one of the most intimate forms of storytelling into something that feels robotic, flat, and eerily lifeless. It’s like auto-tuning a lullaby. It might hit the right notes, but it doesn’t sing.

AI narration needs

All of that said, I'm not against using AI for audiobooks in the right setting. Like any technology, it's about how AI narration is deployed, not whether it exists. There are so many books and new ones emerging all the time. If you’re an independent author with no budget to hire a narrator, or a publisher with a shelf of titles no one has touched in a decade, AI narration could breathe life into your books.

Synthetic voices don’t replace anything in those contexts; they just provide access. And an AI voice could supplement human readers with a multi-voice performance if you use the self-service version of Audible's AI narration platform. Using AI to supplement rather than replace all human voices feels like a better option to me.

One area I'm all in on for AI voices is translating texts. Audible has a beta test for AI-powered translation tools that could bring books to people unable to understand them in their original language. If there’s anything worse than a great book not having an audiobook, it’s a great book not being accessible in your language. Audible is starting the program by offering to translate English books into Spanish, French, German, and Italian.

The translation service can simply translate text and then give the new work an AI narrator, but what is more interesting to me is the speech-to-speech mode. That means an audiobook performed by a human in English could be replicated in a different language while sounding like the original performer.

The narrator of a bestselling English audiobook could now “speak” fluent Spanish in their own voice, introducing that story to new listeners around the world. That’s my favorite way to think about how to use AI. It can expand the reach of art without diluting its heart.

It's not quite the same as original, human narration, but it's a solution to a problem. That's how Audible should pitch AI audiobooks. We should absolutely use AI narration to make books accessible. But if it's possible to give it a human touch, that should be the first thought.

It's important not to lose sight of how this AI audiobook shift affects the performers who often build careers lending their voices to other people’s stories. If AI starts gobbling up midlist titles, budget-conscious publishers might see no reason to hire real readers anymore. AI doesn’t have to be the enemy. But it shouldn’t be the default.

Categories: Technology

See if You're Able to Survive Five Nights at Freddy's on PlayStation Plus Soon

CNET News - Wed, 05/14/2025 - 15:39
PlayStation Plus subscribers will also be able to explore the Chornobyl Exclusion Zone in Stalker and play some other great games, too.
Categories: Technology

I sat down with two cooling experts to find out what AI's biggest problem is in the data center

TechRadar News - Wed, 05/14/2025 - 15:29
  • AI data centers overwhelm air cooling with rising power and heat
  • Liquid cooling is becoming essential as server density surges with AI growth
  • New hybrid cooling cuts power and water but faces adoption hesitance

As AI transforms everything from search engines to logistics, its hidden costs are becoming harder and harder to ignore, especially in the data center. The power needed to run generative AI is pushing infrastructure beyond what traditional air cooling can handle.

To explore the scale of the challenge, I spoke with Daren Shumate, founder of Shumate Engineering, and Stephen Spinazzola, the firm’s Director of Mission Critical Services.

With decades of experience building major data centers, they’re now focused on solving AI’s energy and cooling demands. From failing air systems to the promise of new hybrid cooling, they explained why AI is forcing data centers into a new era.

What are the biggest challenges in cooling a data center?

Stephen Spinazzola: The biggest challenges in cooling data centers are power, water and space. With high-density computing, like the data centers that run artificial intelligence, comes immense heat that cannot be cooled with a conventional air-cooling system.

The typical cabinet loads have doubled and tripled with the deployment of AI. An air-cooling system simply cannot capture the heat generated by the high kW-per-cabinet loads of AI cabinet clusters.

We have performed computational fluid dynamics (CFD) modeling on numerous data center halls, and with air cooling the results show temperatures above acceptable levels. The airflows we map with CFD show temperatures above 115 degrees F, which can result in servers shutting down.

Water cooling can be done in a smaller space with less power, but it requires an enormous amount of water. A recent study determined that a single hyperscale facility would need 1.5 million liters of water per day to provide cooling and humidification.

These limitations pose great challenges to engineers while planning the new generation of data centers that can support the unprecedented demand we’re seeing for AI.

How is AI changing the norm when it comes to data center heat dissipation?

Stephen Spinazzola: With CFD modeling showing servers potentially shutting down under conventional air cooling within AI cabinet clusters, direct liquid cooling (DLC) is required. AI is typically deployed in 20-30 cabinet clusters at or above 40 kW per cabinet. This represents a fourfold increase in kW per cabinet with the deployment of AI. The difference is staggering.
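Taken at face value, those figures put each AI cluster at around a megawatt of heat to reject. A quick sketch using only the numbers quoted above:

```python
# Rough heat-load arithmetic from the figures quoted above (illustrative only).
kw_per_cabinet = 40       # "at or above 40 kW per cabinet"
cluster_sizes = (20, 30)  # "20-30 cabinet clusters"

for cabinets in cluster_sizes:
    ai_load_kw = cabinets * kw_per_cabinet
    pre_ai_load_kw = ai_load_kw / 4  # "a fourfold increase in kW per cabinet"
    print(f"{cabinets} cabinets: ~{ai_load_kw} kW of heat to reject "
          f"(vs. ~{pre_ai_load_kw:.0f} kW for the same cluster at pre-AI densities)")
```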

A typical ChatGPT query uses about 10 times more energy than a Google search – and that’s just for a basic generative AI function. More advanced queries require substantially more power and have to go through an AI cluster farm to process large-scale computing across multiple machines.

It changes the way we think about power. Consequently, the energy demands are shifting the industry to utilize more liquid-cooling techniques than traditional air cooling.

We talk a lot about cooling, what about delivering actual power?

Daren Shumate: There are two overarching new challenges in delivering power to AI computing: how to move power from UPS output boards to high-density racks, and how to creatively deliver high densities of UPS power from the utility.

Power to racks is still accomplished with either branch circuits from distribution PDUs to rack PDUs (plug strips) or with plug-in busway over the racks with the in-rack PDUs plugging into the busway at each rack. The nuance now is what ampacity of busway makes sense with the striping and what is commercially available.

Even with plug-in busway available at an ampacity of 1,200 A, power density is forcing the deployment of a larger number of separate busway circuits to meet density and striping requirements. Further complicating power distribution are the specific and varying requirements of individual data center end users, from branch circuit monitoring to distribution preferences.
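As a rough illustration of why even 1,200 A plug-in busway forces multiple runs per row, the sketch below estimates usable power per busway run. The 415V three-phase supply voltage, unity power factor, 0.8 continuous-load derating, and 40 kW rack figure are assumptions added for illustration; they are not values given in the interview.

```python
import math

# Illustrative busway sizing; voltage, power factor, derating and rack load are assumed values.
busway_amps = 1200       # plug-in busway ampacity mentioned above
volts_line_line = 415    # assumed three-phase distribution voltage
power_factor = 1.0       # assumed
derate = 0.8             # assumed continuous-load derating
rack_kw = 40             # assumed AI rack density (matches the cooling discussion)

usable_kw = math.sqrt(3) * volts_line_line * busway_amps * power_factor * derate / 1000
print(f"Usable power per busway run: about {usable_kw:.0f} kW, "
      f"or roughly {int(usable_kw // rack_kw)} racks at {rack_kw} kW each")
```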

Depending upon site constraints, data center power designs can feature medium-voltage (MV) UPS. Driven by voltage-drop concerns, the MV UPS eliminates the need for very large feeder duct banks, but it also introduces new medium-voltage/utilization-voltage substations into the program. When considering MV UPS, another consideration is the applicability of MV rotary UPS systems vs. static MV solutions.

What are the advantages/disadvantages of the various cooling techniques?

Stephen Spinazzola: There are two types of DLC on the market today: immersion cooling and cold plate. Immersion cooling uses large tanks of a non-conducting fluid with the servers positioned vertically and fully immersed in the liquid.

The heat generated by the servers is transferred to the fluid and then to the building's chilled water system via a closed-loop heat exchanger. Immersion tanks take up less space but require servers configured for this type of cooling.

Cold-plate cooling uses a heat sink attached to the bottom of the chip stack to transfer energy from the chips to a fluid that is piped throughout the cabinet. The fluid is then piped to an end-of-row cooling distribution unit (CDU) that transfers the energy to the building's chilled water system.

The CDU contains a heat exchanger to transfer energy, plus 2N pumps on the secondary side of the heat exchanger to ensure continuous fluid flow to the servers. Cold-plate cooling is effective at server cooling, but it requires a huge number of fluid pipe connectors that must have disconnect leak-stop technology.

Air cooling is a proven technique that has been used in data centers for decades; however, it is inefficient for the high-density racks needed in AI data centers. As the loads increase, it becomes harder to failure-proof using CFD modeling.

You're presenting a different cooler, how does it work and what are the current challenges to adoption?

Stephen Spinazzola: Our patent-pending Hybrid-Dry/Adiabatic Cooling (HDAC) design solution uniquely provides two temperatures of cooling fluid from a single closed loop, allowing a higher-temperature fluid to cool DLC servers and a lower-temperature fluid for conventional air cooling.

Because HDAC simultaneously uses 90 percent less water than a chiller-cooling tower system and 50 percent less energy than an air-cooled chiller system, we’ve managed to get the all-important Power Usage Effectiveness (PUE) figure all the way down to about 1.1 annualized for the type of hyperscale data center that is needed to process AI. Typical AI data centers produce a PUE ranging from 1.2 to 1.4.

With the lower PUE, HDAC provides approximately 12% more usable IT power from the same-sized utility feed. The economic and environmental benefits are both significant, and the system requires only “a sip of water”.
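The usable-power claim follows from the definition of PUE: IT power equals the utility feed divided by PUE, so the gain depends on the baseline you compare against. A quick check (the 10 MW feed size is an assumed example) shows the quoted ~12% corresponds to a baseline in the low-to-mid 1.2s within the 1.2-1.4 range mentioned above:

```python
# PUE arithmetic behind the usable-IT-power claim; the 10 MW feed is an assumed example.
utility_feed_mw = 10.0
hdac_pue = 1.1                   # annualized PUE quoted for HDAC
baseline_pues = (1.2, 1.3, 1.4)  # range quoted for typical AI data centers

it_power_hdac = utility_feed_mw / hdac_pue
for pue in baseline_pues:
    it_power_baseline = utility_feed_mw / pue
    gain_pct = (it_power_hdac / it_power_baseline - 1) * 100
    print(f"Baseline PUE {pue}: {it_power_baseline:.2f} MW of IT power vs "
          f"{it_power_hdac:.2f} MW with HDAC (+{gain_pct:.0f}%)")
```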

The challenge to adoption is simple: nobody wants to go first.

Categories: Technology

Today's NYT Connections: Sports Edition Hints and Answers for May 15, #234

CNET News - Wed, 05/14/2025 - 15:16
Here are some hints and the answers for the NYT Connections: Sports Edition puzzle, No. 234, for May 15.
Categories: Technology

Chinese CPU vendor swaps AMD Zen architecture for homegrown one to deliver 128-core monster to give EPYC and Xeon a run for their money

TechRadar News - Wed, 05/14/2025 - 15:11
  • Hygon’s C86-5G breaks free from AMD Zen, unleashing 128 cores of homegrown muscle
  • SMT4 powers each core to run four threads, stacking up to 512 threads total
  • AVX-512 instructions make it a strong fit for AI, analytics, and scientific computing

Hygon, a key player in China’s semiconductor industry, is advancing its server processor lineup with the upcoming C86-5G, a flagship, high-performance CPU featuring 128 cores and 512 threads, positioning it to compete directly with AMD’s EPYC and Intel’s Xeon platforms.

According to TechPowerUp, this marks Hygon’s complete break from AMD’s Zen architecture and the introduction of its first fully homegrown design, the result of five years of domestic R&D in CPU development.

The new lineup is made possible through four-way simultaneous multithreading (SMT4), allowing each core to handle four threads.

Built for parallel workloads and high throughput

While SMT4 is not a new concept - it has appeared in processors like Intel’s Xeon Phi and IBM’s Power8 - its use in a modern, domestically developed Chinese processor is a notable milestone.

The 128-core configuration in the C86-5G represents a major leap from its predecessor, the C86-4G, which had 64 cores and 128 threads using traditional SMT2.

Designed for enterprise and server workloads, the C86-5G features 16 channels of DDR5-5600 memory, potentially supporting up to 1TB using 64GB DDR5 modules. This is a step up from the previous model’s 12 channels of DDR5-4800.
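Both the 1TB capacity figure and the bandwidth step-up fall out of simple per-channel arithmetic. The sketch below assumes one 64GB module per channel and the standard 8-byte (64-bit) DDR5 channel width; actual platform support may differ.

```python
# Per-channel arithmetic for the quoted memory configurations (assumptions noted above).
channels_new, mt_s_new = 16, 5600   # C86-5G: 16 channels of DDR5-5600
channels_old, mt_s_old = 12, 4800   # C86-4G: 12 channels of DDR5-4800
dimm_gb = 64                        # one 64GB module per channel (assumed)
bytes_per_transfer = 8              # 64-bit DDR5 channel (assumed)

capacity_gb = channels_new * dimm_gb                           # 16 x 64GB = 1,024GB, about 1TB
bw_new = channels_new * mt_s_new * bytes_per_transfer / 1000   # GB/s peak
bw_old = channels_old * mt_s_old * bytes_per_transfer / 1000

print(f"Capacity with one {dimm_gb}GB DIMM per channel: {capacity_gb} GB (~1TB)")
print(f"Peak bandwidth: {bw_new:.0f} GB/s vs {bw_old:.0f} GB/s on the previous model "
      f"(+{(bw_new / bw_old - 1) * 100:.0f}%)")
```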

On the connectivity front, while Hygon has not yet disclosed the exact PCIe 5.0 lane count, it has confirmed support for Compute Express Link 2.0 (CXL 2.0), aligning the chip with industry standards used by AMD’s EPYC 9005 (Turin) and Intel’s 5th Gen Xeon (Emerald Rapids). The earlier C86-4G already offered 128 lanes of PCIe 5.0, so similar or better support is expected.

Although the specific microarchitecture has not been detailed, Hygon states it is based on an "enhanced self-developed microarchitecture" that follows the Zen-based Dhyana design of the first generation.

According to the company, the architecture delivers a 17% improvement in instructions per cycle (IPC), though this remains unverified in the absence of benchmark testing.

The chip also supports AVX-512 instructions for high-performance computing tasks and is built to handle physical stress in demanding environments. It is expected to support standard server memory modules like RDIMMs and is intended for large-scale data center deployments.

While Hygon still trails AMD and Intel in overall performance, the C86-5G's technical specifications, including I/O capabilities, memory bandwidth, threading, and core count, place it in a competitive position.

Although there is no official launch date yet, development is likely well underway, given that the C86-4G has been on the market since 2024.

Categories: Technology

Today's NYT Strands Hints, Answers and Help for May 15, #438

CNET News - Wed, 05/14/2025 - 15:00
Here are hints and answers for the NYT Strands puzzle No. 438 for May 15.
Categories: Technology
