Investment in AI infrastructure is booming globally as organizations look to build the strong foundations necessary for an AI-powered economy. Data centers can handle vast volumes of data traffic with minimal latency, making them indispensable in realizing the true potential of AI. Multiple stakeholders, including tech giants, investment firms, and government bodies, are betting big on AI and the underlying infrastructure. For example, SK Telecom has vowed to create an “AI infrastructure superhighway,” including constructing gigawatt-scale data centers across Asia and the Pacific.
The scale of this growth is not to be underestimated. According to analysis from McKinsey, global demand for data center capacity could rise at an annual rate of between 19 and 22 percent from 2023 to 2030, meaning that to meet demand, at least twice the data center capacity built since 2000 would have to be built in less than a quarter of the time.
Key security risks
Much like any other investment, this infrastructure will require protection from cybersecurity threats. As data centers become increasingly indispensable, they become more of a target, allowing cybercriminals to disrupt operations, access data and steal valuable outputs.
Many have already started to recognize the significant threat that cybercriminals pose to data centers. For example, the UK government announced in September that data centers powering the economy will be designated as Critical National Infrastructure (CNI) alongside energy and water systems. This designation will allow the government to support the sector in the event of critical incidents, minimizing impacts on the economy.
Threats to AI infrastructure will be amplified by the emergence of quantum computing. Quantum computers will not be superior to classical computers in every application, but they will vastly expand our capacity to compute and to break existing public-key cryptography protocols, broadening the attack surface facing data centers.
Overall, there are three main areas of risk:
- Compromising data: Much of the data used to generate results from an AI model comes from Operational Technology (OT) systems. These systems, traditionally isolated from external networks and almost like a "forgotten" sub-sector, now face heightened vulnerability to cyber threats in the form of both on-site tampering and attacks enabled through their connectivity to IT networks. Any compromise of the source data in these systems risks reducing the integrity of AI-generated outputs, as manipulated data can produce misleading results.
- Intercepting data: In transit, source data and AI outputs are vulnerable to interception and theft, enabling adversaries to access knowledge without detection. “Harvest Now, Decrypt Later,” in which data is stolen today for decryption once capabilities improve, poses a substantial threat. Adversaries may gain the upper hand by intercepting confidential strategies, ideas and technical knowledge created by AI.
- The threat of unauthorized access: Advancements in AI allow for the creation of sophisticated and convincing deepfakes, which can facilitate real-time impersonation. As computational power continues to increase, traditional identity and access management systems risk becoming obsolete. This presents a critical challenge for securing the valuable outputs generated by AI, as unauthorized access to these outputs can occur even within an encrypted system if attackers acquire the necessary credentials.
Establishing protective measures
To minimize these risks, data center stakeholders must take concrete steps to increase the physical and cyber security of their investments. These measures should be integrated at the earliest possible stage to ensure that AI infrastructure is secure by design.
- Zero trust architecture: Data centers should be equipped with a Zero-Trust Architecture, which removes inherent trust in the network, assumes the network is hostile, and verifies each request against an access policy (see the first sketch after this list).
- Physical security: It's important to restrict access to crucial infrastructure with systems like surveillance cameras, on-site biometric authentication, and perimeter fencing to avoid physical tampering. Being positioned in a remote and isolated location can also eliminate many physical threats.
- Crypto agility: Secure encryption protocols are vital to protecting data during transmission, and the rise of quantum computing means that current cryptographic methods will become inadequate. Next-generation cryptography, employing a combination of existing algorithms and post-quantum algorithms, is essential to enhance data security while ensuring system interoperability. By designing systems that are inherently quantum-safe, flexible, and backward-compatible, organizations can protect confidential information (see the second sketch after this list).
- Multi-factor biometrics: Effective access management protocols are crucial to safeguarding AI-generated outputs. Implementing rigorous Multi-Factor Authentication (MFA) and, ideally, Multi-Factor Biometrics (MFB), restricts access to authorized users only. Regular employee training on secure protocols and phishing prevention can also prevent breaches - employees should be equipped to identify schemes and recognize potential signs of deepfake threats, which can now be synthesized in near real time.
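To make the zero-trust bullet concrete, here is a minimal sketch of per-request policy checking in Python. Everything in it (Request, POLICY, is_authorized) is an illustrative name, not taken from any particular product; the point is simply that no request is trusted by default and every one is checked.

```python
# Minimal zero-trust gate: the network is assumed hostile, so every request
# is evaluated against an explicit access policy before reaching a resource.
# All names here are illustrative, not from any particular product.
from dataclasses import dataclass

@dataclass
class Request:
    user: str            # identity asserted by the caller
    device_trusted: bool # e.g. attested, patched, managed endpoint
    mfa_passed: bool     # strong authentication completed for this session
    resource: str        # what the caller wants to touch

# Policy: which roles may reach which resources.
POLICY = {
    "model-weights": {"ml-engineer"},
    "training-data": {"ml-engineer", "data-steward"},
}

def is_authorized(req: Request, roles: dict[str, set[str]]) -> bool:
    """Deny by default; grant only when every check passes."""
    if not (req.device_trusted and req.mfa_passed):
        return False  # identity and device posture re-verified per request
    allowed_roles = POLICY.get(req.resource, set())
    return bool(roles.get(req.user, set()) & allowed_roles)

roles = {"alice": {"ml-engineer"}}
print(is_authorized(Request("alice", True, True, "model-weights"), roles))   # True
print(is_authorized(Request("alice", True, False, "model-weights"), roles))  # False: no MFA
```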
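And for the crypto-agility bullet, here is a rough sketch of hybrid key derivation: a classical X25519 exchange (using the real Python cryptography package) combined with a post-quantum shared secret. The post-quantum half is deliberately stubbed out, since ML-KEM support depends on the library you choose (for example, a liboqs binding); the design property is that the session key stays safe unless both schemes are broken.

```python
# Hybrid key derivation in the spirit of crypto agility: combine a classical
# shared secret with a post-quantum one before deriving the session key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def pq_kem_shared_secret() -> bytes:
    # Placeholder: in practice this would be an ML-KEM encapsulation via a
    # post-quantum library; random bytes stand in for the demo.
    return os.urandom(32)

# Classical half: an ordinary X25519 Diffie-Hellman exchange.
a, b = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = a.exchange(b.public_key())

# Concatenate both secrets and derive one key; breaking only one scheme
# (classical or post-quantum) is not enough to recover the session key.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-demo",
).derive(classical_secret + pq_kem_shared_secret())
print(len(session_key))  # 32-byte session key
```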
Future-proofing the AI economy
As the AI economy continues to grow, it is essential for stakeholders to prioritize comprehensive security measures to safeguard critical infrastructure such as data centers. Securing AI systems protects strategic AI-generated assets, ensuring that investments yield the intended competitive advantage rather than allowing outputs to fall into the hands of adversaries, who could otherwise front-run the rightful data owner without making the same capital investment. To truly capitalize on AI’s potential, organizations must ensure that this infrastructure is secure by design and can withstand future threats, including those enabled by the development of quantum computers.
Businesses are playing catch-up with AI. The rapid advance of AI in general, and generative AI (GenAI) in particular, has put an extraordinarily powerful new tool in our hands while we’re still learning how and where to use it. We’ve been impressed by the technology’s capabilities, but now we must get serious about AI's real-life business impact. The question is: Do we know how?
Skills, governance, and culture are essential to making GenAI a success. A recent IBM study found that while CEOs globally recognize the need for GenAI governance, only 39% said they have good GenAI governance in place. Yet they’re driven to take risks and make significant GenAI investments, even without understanding their full value, because they judge the danger of falling behind competitors and missing out on potential gains to be worse.
All the while, hackers continue to target employees, using AI to generate smarter phishing emails, texts, and even deepfakes. In one of the largest corporate frauds known to date, a finance worker transferred more than $25 million to scammers who used deepfake technology to pose as a company’s Chief Financial Officer (CFO) on a video conference call.
Creating guardrails and governance for using GenAI safely, securely, and effectively, while cultivating a culture around AI, is essential to GenAI’s success in business. Here are three steps you can take to develop governance and culture in your organization that pave the way for success with AI.
1. Establish an AI governance structure
This first critical step is how we began our AI journey. It’s tempting to jump right in with new technology like GenAI, but wise to take a step back and structure how to leverage it across your organization.
Effective AI integration begins with a solid governance structure. To establish AI governance, you need to define a set of core principles reflecting your organization's values and covering aspects like data privacy, security, and ethical AI use.
A governance or executive steering committee defines guidelines on how to leverage GenAI safely and securely. This group should be cross-functional and include top-level executives from business units across your organization, including representatives from legal and security. Together, they determine the best tools, platforms, and standards for your organization, establishing security, privacy, and legal foundations to guide your GenAI journey.
Without a governing body in place, your business cannot address potential privacy, confidentiality, or data leakage issues. For example, imagine employees using GenAI ‘x’, inputting valuable or confidential company information as part of their work. Unless this risk is identified, reviewed, and mitigated, that company information could be used to further train the GenAI ‘x’ algorithm and potentially leak private company information publicly.
Legal risks are another growing business concern. Lawsuits are mounting against AI companies whose models may include confidential, copyrighted, or proprietary information – and companies that leverage infringing services may also be liable or sued. A governance model works to ensure legal protections are in place. For example, ensuring your agreement with an AI service provider includes a strong indemnity in your favor will help mitigate your risk exposure if you are sued based on your use of the provider’s AI solution.
Before embarking on an AI journey for your organization, it’s critical to create a governance team or steering committee to oversee the process. The risks – and rewards – are too great to leave GenAI use to chance. Remember, governance is not about crushing innovation. It’s about creating an environment where employees can leverage GenAI for greater efficiency and innovation – safely. The focus should be on aligning with your organization's values and creating a safe environment for your AI journey.
2. Operationalize AI excellence
Once the ground rules are established, the next step is to put the foundational governance policies, generative AI tools, platforms and standards into practice. A working committee, or center of excellence, can bring GenAI to life and continually improve its use throughout your organization.
A working committee develops a common architecture, framework, and use cases that the whole company can leverage within the guidelines established by the governing committee. Ideally, your AI platform should align with the other tech platforms standardized across your organization. By taking this approach, you can develop proprietary GenAI tools so that everyone across the organization can use AI securely, keeping all information within your own walls; your organization's information isn’t used by external entities to train AI models.
This approach democratizes the use of GenAI. The nature of the technology is such that whether you are a business user or an IT team member, you can become highly proficient with AI through the tools made available across the organization.
Turning AI principles into actionable policies that organizations can follow effectively and responsibly is a significant challenge. Governance develops the guiding roadmap and the working committee delivers AI platforms, tools and use cases. But when it comes to helping ensure employees embrace the GenAI tools, that’s where culture comes in.
3. Build AI culture from the bottom up and the top down
It’s the human touch that brings AI culture to life. Companies should look to have a full-circle AI culture, where people from every level of the organization share their passion and knowledge about AI.
Senior leaders should participate and lead by example, learning about the GenAI tools, leveraging them, and encouraging their teams to do the same. Additionally, every employee should receive GenAI training. This will help employees upskill, build a baseline of understanding, reduce fear of AI, and adopt internal AI tools more broadly.
It’s important to foster a culture that supports learning and innovation when it comes to GenAI. Empower those using GenAI to share their experiences of working and learning with the new technology, and provide learning and training opportunities to your AI evangelists. This is how AI culture gets built, as more and more employees leverage GenAI and learn how to get the most out of it.
GenAI for the generations
GenAI in business isn’t just for data scientists. Its capabilities can be leveraged across your organization by every employee. As AI technologies become more prevalent, it’s crucial that employees are equipped to handle the opportunities and threats that come with them. A governing structure and working committee can provide the GenAI roadmap that brings people together as they delve into this tech journey successfully.
A US government contractor was forced to shut down parts of its infrastructure in order to contain a ransomware attack.
ENGlobal Corporation, a US-based provider of engineering and automation services, recently filed an 8-K report with the US Securities and Exchange Commission (SEC), in which it said that the attack is still being remediated and that the timeline for full recovery remains unknown.
“On November 25, 2024, ENGlobal Corporation became aware of a cybersecurity incident. The preliminary investigation has revealed that a threat actor illegally accessed the Company’s information technology (“IT”) system and encrypted some of its data files,” the company said in the filing.
Unknown attackers
“Upon detecting the unauthorized access, the Company immediately took steps to contain, assess and remediate the cybersecurity incident, including beginning an internal investigation, engaging external cybersecurity specialists, and restricting access to its IT system.”
To tackle the problem, ENGlobal shut parts of its network down, meaning that its systems are “limited to essential business operations”.
“The timing of restoration of full access to the Company’s IT system remains unclear as of the date of this filing,” the document reads, concluding that it still doesn’t know if the attack will impact the company financially.
The company did not say who the attackers were, or whether they exfiltrated any sensitive files from its systems, as is standard practice in ransomware attacks. No threat actor has claimed responsibility yet, either.
ENGlobal Corporation specializes in projects for the energy, government, and industrial sectors. The company focuses on delivering solutions in areas such as modular process systems, automation integration, and advanced technologies for energy and sustainability. According to The Register, it reported $39 million in revenue last year.
It counts roughly 130 employees and primarily operates within the US. Its headquarters are in Houston, Texas, and it has offices in Denver, Tulsa, and Henderson.
Via The Register
AI video generation has come a long way since we saw the first demos of OpenAI’s Sora way back in February. Despite it recently being leaked for a few hours, we’re still waiting for Sora to be released to the public. Still, a lot of other apps have emerged to fill the void since then, and they are actually delivering on the promise of a fully AI-generated movie.
The most recent video to knock our socks off is a 10-minute Batman fan film created entirely with AI. While it has obviously taken its visual cues from the 2022 movie The Batman, it’s incredibly realistic. The fact that it’s now possible for a fan to make their own great-looking Batman movie using AI tools that are available to everyone will leave many people questioning how long it will be before actors, movie sets, and film production crews become a thing of the past.
But let’s not get ahead of ourselves; we’re a long, long way from that yet, and a lot of the creatives currently involved in producing AI movies see AI as just another tool that is going to be utilized in current movie production, rather than something that replaces it.
But the point is, we’re now at the stage where pretty much anybody can produce watchable 10-minute fan films, and I feel like we’ve reached a kind of milestone in the process.
Meet your bad guy. (Image credit: kavan-the-kid/DC)
Breaking down the tools
The film was posted on Reddit by user kavan-the-kid, who lists the tools he used to produce it as KLING AI, Hailuo AI, Runway Act-One, Midjourney, Topaz Labs AI, Luma Dream Machine, Eleven Labs, Magnific, Blender, Character Creator 4, Adobe Premiere Pro, Adobe After Effects, Adobe Photoshop and DaVinci Resolve.
Kavan-the-kid said it took just three weeks to produce the movie, and when asked about his production process, he replied, “Started in Midjourney, then AI video tools like KLING AI, MiniMax and Runway Act-One. Cloned the voices in Eleven Labs. Topaz for upscaling.” Reddit users responded with phrases like “mind blowing,” “amazing,” and “impressive.”
If you find the Batman fan movie inspiring and want to get into producing your own AI fan films, you might like to start with our roundup of the best AI image generators and take it from there.
Those with Windows 10 PCs who are blocked from upgrading to Windows 11 due to a lack of TPM 2.0 may have been hoping Microsoft might relent on that requirement – but the software giant has clarified that this won’t happen.
Neowin noticed that in a post addressing IT admins, Microsoft explained that TPM 2.0 is a ‘non-negotiable’ element for the future of Windows computers running Windows 11.
To rewind briefly, TPM stands for Trusted Platform Module, and this hardware security measure can be present as a standalone module in your PC, or more commonly, it’s simply enabled in your motherboard firmware (meaning diving into the BIOS, and we explain what to do here).
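If you want to check where a machine stands before worrying about firmware settings, one quick way is to query Windows' built-in tpmtool utility. The sketch below shells out to it from Python and does a naive string check; the exact text tpmtool prints varies by Windows build, so treat the parsing as a rough heuristic rather than a robust method.

```python
# Quick-and-dirty TPM check on Windows via the built-in tpmtool utility.
# The string matching below is a heuristic, not a robust parser: the exact
# output format of tpmtool varies by Windows build.
import subprocess

def tpm_2_present() -> bool:
    try:
        out = subprocess.run(
            ["tpmtool", "getdeviceinformation"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False  # no tpmtool (non-Windows?), or it reported an error
    return "2.0" in out  # heuristic: look for the reported TPM version

print("TPM 2.0 detected" if tpm_2_present() else "No TPM 2.0 found")
```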
The problem arises when you have an older chipset and there’s no TPM 2.0 capability (and indeed Windows 11 requirements rule out older CPUs too). That leaves you with a potentially tricky and expensive upgrade to perform in order to get Windows 11.
Microsoft is firm on the need for TPM 2.0 (and complementary security features like secure boot) because it implements a tighter level of security for Windows 11 PCs, something the company feels is necessary.
As the blog post tells us: “From supporting more intricate encryption algorithms to adding cryptographic functionality, TPM 2.0 is essential to counteracting present-day cyber risks.”
Microsoft adds: “By instituting TPM 2.0 as a non-negotiable standard for the future of Windows, we elevate the security benchmark.”
Analysis: No hope of Microsoft budging an inch
Of course, this is all about businesses, which are obviously in much greater need of defenses against the ‘present-day cyber risks’ flying around than your average consumer sat at home with their PC.
However, the use of language like ‘non-negotiable standard’ makes it seem extremely unlikely that any exceptions are going to be made for consumers (especially given other moves Microsoft is busy making, which we’ll come back to shortly), even though those consumers are likely much less bothered about super-tight security than business organizations are.
You might be wondering: why would Microsoft make any exceptions, anyway, all this time after the launch of Windows 11? Well, with Windows 10 End of Life arriving next year, in October 2025, quite a lot of negative noise has (understandably) been made about hundreds of millions of PCs that don’t meet Windows 11’s requirements potentially ending up in landfill.
So, given that, perhaps there were still embers of hope that Microsoft could change its mind about Windows 11’s spec requirements in some way – but it’s looking very much like the company won’t budge. Also backing that up is the fact that Microsoft has, for the first time ever, given consumers an option to pay for extended support (security updates) for Windows 10 for an extra year, through to October 2026.
That appears to be Microsoft’s sole compromise in terms of consumers with Windows 10 PCs, while it busily badgers them about buying a new Windows 11 computer (preferably a Copilot+ PC, of course).
We should note that there are unofficial workarounds to get Windows 11 on a PC that doesn’t have TPM 2.0 (or falls short of other requirements), but they aren’t recommended – and Microsoft very much cautions against this route.
In fact, Microsoft just added an ugly watermark on the desktop (yes, the same as when running an unactivated copy of Windows) and an annoying pop-up for those using Windows 11 on an unsupported PC, having leveraged one of those aforementioned fudges to get the OS installed. If you needed more evidence of Microsoft’s apparently concrete stance on Windows 11’s upgrade requirements, well, there it is.
China has hit back against the latest round of US sanctions by declaring US semiconductors "no longer safe" for use by Chinese organizations.
Earlier this week, the US issued its third round of sanctions that look to limit China’s ability to produce semiconductors domestically by restricting companies from selling chip assembly technology to China.
China did not offer any evidence or reasoning as to why US semiconductors are unsafe, but the declaration is likely an attempt to steer Chinese markets away from imported chips from companies such as Nvidia, AMD, and Intel and toward domestic production.
Chip war continues
China’s warning isn’t a ban on US chips per se, but the Internet Society of China told companies to carefully consider their choice of semiconductor manufacturers, and urged organizations to use China’s growing domestic market of chips “proactively.”
Last year, China added several specific semiconductors produced by Intel, AMD, and other chip manufacturers to a blacklist, forcing companies to shift away from Western technology. Moreover, a group representing China’s cybersecurity professionals claimed that Intel was installing backdoors into CPUs to be used by the NSA for espionage.
Additionally, China has banned the export of gallium, antimony, and germanium to the US due to their use in both military and civilian applications. China accounts for 98% of the world's raw gallium production, 48% of antimony, and 83% of germanium, causing prices to skyrocket and likely forcing the US to seek alternative sources.
As part of the warning against the use of US semiconductors, the China Association of Communication Enterprises has urged Beijing to investigate potential supply chain vulnerabilities in critical infrastructure that use Western-produced semiconductors.
Via Reuters