US authorities are urging Americans to use encrypted messaging apps to secure their sensitive data against foreign attackers.
The security call comes in the wake of an "unprecedented cyberattack" on the country's telecom companies, NBC News reported. The attack is considered among the largest intelligence compromises in US history and has not yet been fully remediated.
The China-linked Salt Typhoon group was first spotted targeting US telecoms with a new backdoor malware a few months ago. It has reportedly hacked the likes of AT&T, Verizon, and Lumen Technologies to spy on their customers' activities.
The need for strong encryption
"Encryption is your friend, whether it's on text messaging or if you have the capacity to use encrypted voice communication. Even if the adversary is able to intercept the data, if it is encrypted, it will make it impossible," said Jeff Greene, executive assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency (CISA), on Tuesday, according to NBC News.
Encryption scrambles data into an unreadable form to prevent third-party access. From messaging apps like WhatsApp, Signal, and Session to secure email services like ProtonMail and Tuta, this technology is expected to keep online communications private from sender to receiver (end to end).
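To illustrate the scrambling idea in miniature, here is a toy one-time-pad cipher in Python. It is a sketch of the concept only, not the vetted protocols real messaging apps use:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key; applying it twice restores the original."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet at noon"
key = secrets.token_bytes(len(plaintext))  # a random key as long as the message

ciphertext = xor_cipher(plaintext, key)  # unreadable without the key
recovered = xor_cipher(ciphertext, key)  # only the key holder reverses it
assert recovered == plaintext
```

Anyone who intercepts `ciphertext` but not `key` sees only random-looking bytes, which is the property Greene is describing.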
Besides encrypting chats and calls leaving your device, FBI officials also suggest keeping your smartphone up-to-date and enabling two-factor authentication whenever possible to protect your accounts against phishing attacks.
Do you know?
The US Cybersecurity and Infrastructure Security Agency (CISA) has also published new guidance to help enterprises defend against Salt Typhoon, including a series of best practices and other security tips to stay protected.
Tech and privacy experts have welcomed the US authorities' endorsement of encrypted communication software. They have long advocated for these tools on both privacy and security grounds, strongly rejecting lawmakers' attempts to weaken them in the name of combating crime.
Commenting on this point, Greg Nojeim of the Center for Democracy & Technology (CDT) – a member of the Steering Committee of the Encryption Coalition – said: "If anti-encryption advocates had their way, the United States would now be defenseless to this type of mass snooping from a foreign power."
That said, Salt Typhoon hackers aren't just targeting the content of Americans' communications, but also their call record metadata – as Reuters reported.
Metadata privacy is a growing concern. Attackers can now harness AI tools to find patterns and trace individuals' activity even without access to the encrypted content.
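As a small illustration of why metadata alone is revealing, consider a hypothetical set of call records containing no content at all (every name and number here is invented); simple counting already exposes patterns:

```python
from collections import Counter

# Hypothetical call-record metadata: (caller, callee, duration in seconds).
# No message or call content is present, yet patterns still emerge.
call_records = [
    ("alice", "clinic", 300),
    ("alice", "clinic", 420),
    ("alice", "bob", 60),
    ("alice", "clinic", 180),
]

# Whom does the subscriber contact most often, and for how long in total?
contacts = Counter(callee for _, callee, _ in call_records)
clinic_seconds = sum(d for _, callee, d in call_records if callee == "clinic")

print(contacts.most_common(1))  # [('clinic', 3)]
print(clinic_seconds)           # 900
```

Who calls whom, how often, and for how long can reveal health, legal, or personal circumstances without a single word of content being read.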
Artificial intelligence is introducing a new wave of technological capabilities, and businesses are racing to integrate it into their products and day-to-day operations. As they do, they increasingly recognize that cloud infrastructure is essential. It may come as a surprise, then, that although 67% of companies report having advanced cloud infrastructure, only 8% have fully integrated AI into their business processes (Infosys & MIT, 2024). This figure highlights a clear disconnect: despite cloud maturity, businesses are lagging behind on AI implementation.
This article will explore the reasons behind this lag and outline the key strategies for businesses to align their cloud infrastructure with the specific demands of AI to unlock its full potential.
Why is there a disconnect between cloud and AI readiness?
There are many factors to consider when implementing AI, the most important of which is cost. Currently, the biggest challenge in adopting AI is the significant upfront investment required to create an AI-ready environment. Hardware is expensive yet has a short useful life: the technology evolves rapidly, and organizations need to upgrade their systems continually to keep pace. As a result, it can be difficult to justify the long-term ROI of AI.
Many organizations are rushing to integrate AI tools into their operations without fully considering the infrastructure implications. Despite widespread recognition of AI's potential, as evidenced by 98% of executives expecting increased AI spending on the cloud, businesses often neglect the specific technical requirements of AI.
To effectively support AI workloads, organizations must prioritize compatibility, scalability, security, and cost-effectiveness. However, performance remains a critical factor, and striking the right balance between Graphics Processing Units (GPU) requirements and costs is essential. AI's demanding nature necessitates cloud environments capable of handling intensive data processing, low-latency response times, and specialized hardware like GPUs or custom accelerators.
Another factor influencing the disconnect is the IT industry’s ongoing skills gap, with 84% of UK businesses currently struggling to source the talent they need to address their IT challenges. Since there are a limited number of skilled professionals who can manage AI workloads, even businesses that have prepared their cloud infrastructure may lack the expertise needed to fully embrace AI’s capabilities.
Key considerations for AI
1. AI workloads
The specific AI requirements of different companies can vary significantly. For example, a company developing an advanced image recognition system may have different infrastructure needs than one building a sophisticated chatbot. To address these unique demands, businesses should consider bespoke cloud optimization strategies.
Each AI project has unique resource and high-performance computing requirements. For example, one of our customers is developing an alternative to neuro-symbolic architecture, combining neural and symbolic learning to act similarly to the human brain. The company needed a hosting provider for training one of its products, the Expert Verbal Agent (EVA) model, an LLM designed for thoughtful queries and problem-solving. Unlike many AI models, which run only on GPUs, EVA can use CPU, GPU, or both. Consequently, it required a CPU-powered server for software development and testing.
2. Scalability
Scalability is vital for AI, but it must be balanced with cost-effectiveness. An AI environment should be able to adapt to changing demands, providing additional processing power when needed, but this can be expensive.
AI workloads can be unpredictable and fluctuate in size. It's common for AI workloads to be needed only for short, intensive periods of time, for example to regenerate a model. This often means that the hardware involved sits idle for long periods of time and therefore does not generate a return on investment. This is an important consideration for companies looking to build AI-enabled platforms, who should consider leasing time on pre-built environments as an alternative to ensure the best and most resource-efficient outcome. While public cloud models offer flexibility, they tend to be more expensive for such projects, especially during peak usage periods. Organizations need to carefully consider their scalability demands and choose the infrastructure that is right for them.
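A back-of-the-envelope break-even calculation makes the rent-versus-own trade-off concrete. The prices below are purely hypothetical placeholders, not quotes from any provider:

```python
def break_even_hours(hardware_cost: float, hourly_rental: float) -> float:
    """Hours of use at which buying hardware outright becomes cheaper than renting."""
    return hardware_cost / hourly_rental

# Hypothetical figures: a $30,000 GPU server vs. renting similar capacity at $3/hour.
print(break_even_hours(30_000, 3.0))  # 10000.0 hours, over a year of continuous use

# A bursty workload that only trains for 800 hours a year costs $2,400 rented,
# while owned hardware would sit idle most of the time.
print(800 * 3.0)  # 2400.0
```

Under these assumed numbers, ownership only pays off for workloads that keep the hardware busy most of the year, which is exactly why short, intensive training bursts favor leased capacity.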
3. Security
Security is critical in AI projects, especially when outsourcing GPU or processing components. Sensitive data must be protected to safeguard customer privacy. While public cloud models can be convenient, they may not offer the same level of security as private or hybrid cloud solutions, where servers are dedicated solely to a business. Businesses should evaluate the sensitivity of their data and select a cloud environment that aligns with the security and control requirements of their AI workloads.
AI security in the cloud is a critical concern as organizations increasingly process and analyze vast amounts of data in cloud-based environments. The first key aspect is protecting the AI models and data: encryption and access controls are vital to safeguard sensitive models and training data from unauthorized access or breaches. Regular audits and monitoring are also essential to detect unusual activity or vulnerabilities that could compromise AI systems in the cloud.
4. Performance
Certain AI tasks require specific hardware to run most effectively. In some scenarios GPUs are essential, while other projects require specialized AI chips or TPUs (Tensor Processing Units), which are designed specifically to deliver the best performance on machine learning workloads. It is extremely important for companies to understand the technical needs of each project when choosing the architecture for running an AI model, as there are many hardware variations that can be used for these platforms.
Understanding the memory requirements of the AI model being trained is also extremely important. Some models will not fit on a basic graphics card, while others will require huge amounts of onboard RAM to be processed at all. NVIDIA's latest cards, such as the H100 NVL, have a whopping 188GB of HBM3 memory, allowing very large models to be trained. Cloud providers often have access to advanced hardware and infrastructure that can significantly improve the performance of AI algorithms and reduce training time.
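A rough rule of thumb for sizing the weights alone (activations and optimizer state add more on top) can be sketched as:

```python
def model_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Approximate memory needed just to hold the model weights."""
    return num_params * bytes_per_param / 1e9

# A 70-billion-parameter model in 16-bit precision (2 bytes per parameter):
print(model_memory_gb(70e9, 2))  # 140.0 GB, beyond most single GPUs

# Quantizing the same model to 8-bit halves the footprint:
print(model_memory_gb(70e9, 1))  # 70.0 GB
```

Even this crude estimate shows why cards with very large onboard memory, or multi-GPU setups, are needed for the biggest models.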
Steps to bridge the disconnect
To bridge the gap between cloud readiness and AI integration, businesses can start by understanding their key requirements and clarifying their goals for AI. This allows the creation of a comprehensive brief, an essential first step.
Next, evaluate existing cloud capabilities against those goals and requirements to uncover any gaps in performance, scalability, or data handling, all necessary for the effective use of AI applications. Establishing data management, security, and compliance policies also ensures that quality data is readily available for AI initiatives.
Companies should also consider which cloud infrastructure best suits the unique needs of each AI project. For example, if security and regulatory compliance are priorities, hybrid or private cloud models, with infrastructure dedicated to a business rather than shared across businesses, may be a better fit than public cloud options.
Finally, incorporating regular performance evaluations and iterative infrastructure adjustments will help maintain alignment with evolving AI capabilities, ensuring a strong foundation that adapts as AI technology advances.
Working with a Managed Service Provider
These steps can seem overwhelming to tackle alone, which is why some businesses work with a Managed Service Provider (MSP) on their AI integration. Currently, 65% of UK businesses work with MSPs, which offer a holistic approach to AI adoption by supporting infrastructure design, compliance, and ongoing optimization. MSPs also strengthen companies' security posture through continuous monitoring that protects cloud environments from threats and vulnerabilities.
Additionally, MSPs can help bridge the skills gap, which remains a common barrier to successful AI adoption. In fact, 46% of businesses use MSPs to address the ongoing skills shortage. MSPs can help businesses achieve their AI goals cost-effectively by providing the most efficient infrastructure and hardware backed by their expertise and service. Collaborating with cloud infrastructure management experts also reduces the risk of misconfigurations as well as unnecessary costs, ensuring that businesses have an optimized and secure foundation for AI.
Cloud readiness and AI go hand-in-hand
As AI continues to transform our lives and modern businesses, AI integration will be essential for companies aiming to stay competitive. By tailoring cloud infrastructure to AI-specific requirements and leveraging the expert knowledge of MSPs, organizations can overcome the most pressing hurdles (financial, technical, and talent-related) and make the most of AI's potential. With a strategic approach and the right support, businesses can lay a solid foundation that not only meets current demands but also adapts as AI technology evolves.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Companies today operate in a world defined by information overload. The figures are mind-boggling. In 2010, the global datasphere totaled 2 trillion gigabytes (2 zettabytes). By 2020, it had expanded to 64 zettabytes, and by 2026, the International Data Corporation (IDC) expects that it will reach 221 zettabytes.
With approximately 252,000 new websites being created every day, it is becoming increasingly difficult for knowledge workers to locate the right resources and information needed to make informed decisions.
Solving the information overload problem
To solve this problem, many knowledge workers have begun leveraging generative AI platforms as a new information-finding resource. It's easy to see why: when given a simple query, AI can quickly provide answers, removing the need to endlessly scour search engine results. And so it's no surprise that the launch of OpenAI's new search platform, ChatGPT Search, was highly anticipated, touted by many as a competitor that could truly take on Google as the primary resource for knowledge workers.
However, while the potential is there, significant concerns remain about generative AI as an information gathering tool. Yes, these new tools are fast, scalable, and cheap. Yet, in a very human way, they can also lie. In their eagerness to respond, many AI platforms hallucinate information, which to date has caused several notable, high-profile blunders.
In multiple incidents, lawyers using ChatGPT have been found to cite non-existent legal cases – something that can result in significant impacts, from case dismissals and fines to a broader erosion of trust in the legal system. Elsewhere, meanwhile, an NYC chatbot was found to be providing incorrect and illegal information and advice to business owners.
As a result, skepticism over the use of AI rightly remains. In fact, in a survey of 1,000 business decision makers, we found that over three quarters (78%) of knowledge workers say popular generative AI models like ChatGPT are eroding people’s trust in AI.
While these are powerful tools for consumers, they’re simply not built to drive effective decision making in the business world, as these high-profile blunders have shown.
Instead, many business leaders continue to put their faith in the trusty search engine, the second most trusted information gathering method behind official and third-party reports. In fact, almost three quarters of decision makers (72%) never or rarely go past the first page of a search engine when seeking information, showing just how big an influence search engines have on decision making.
Four steps to improving trust in AI as an information gathering tool
Such is the degree of trust in Google that the Department of Justice is considering breaking up the tech giant's monopoly as an antitrust remedy, driving debate over where people can and should source information. However, if AI is to rival Google, we need to build trust in it, finding ways to eliminate key issues such as hallucinations.
I personally see a future in which generative AI will augment our knowledge, advise on potential choices, interrogate our thoughts to expose weaknesses in our thinking, and even make decisions autonomously. But for it to do any of these things, we first need to trust it wholly – from the content that trains it, to the references it uses and analyses it applies.
Can we bridge the gap that currently exists, and turn AI into a viable tool for supporting effective, trusted decision making? To even begin to do so, it is critical that several steps are taken:
1 – Craft effective inputs to guide AI responses
The first step is to ensure we are guiding AI in the right way. By providing clear context and specific instructions, using examples to demonstrate desired output formats, and implementing constraints to limit unwanted responses, we can reduce the scope for ambiguity and misinterpretation and boost output relevance and accuracy.
2 – Retrieve relevant information from external knowledge bases
Second, it's also important to leverage relevant information from external sources to guide more effective outputs. By integrating up-to-date, curated information sources into the input process, ideally through efficient retrieval mechanisms, we can both increase factual accuracy and benefit from verifiable sources for generated content.
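The retrieval step can be sketched in a few lines. This toy version scores documents by keyword overlap as a stand-in for the vector search a production system would use; the documents and query are invented for illustration:

```python
def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many query words they share; return the best matches."""
    query_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

knowledge_base = [
    "The 2024 revenue report shows 12% growth in cloud services.",
    "Employee handbook: remote work policy updated in March.",
    "Security advisory: rotate API keys every 90 days.",
]

# Retrieved text is prepended to the prompt so the answer rests on a verifiable source.
source = retrieve("what was cloud revenue growth in 2024", knowledge_base)[0]
prompt = f"Answer using only this source:\n{source}\n\nQuestion: ..."
```

Grounding the model in retrieved, citable text is what makes its output checkable rather than merely plausible.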
3 – Guide AI to break down complex problems with reasoning processes
Third, it's possible to assist AI in solving complex problems with the right processes. By prompting the AI to show its work or explain its reasoning, encouraging intermediate steps in problem solving, and implementing self-correction mechanisms, logical consistency will be improved.
4 – Implement self-awareness and self-evaluation capabilities
We can also develop mechanisms for the AI to assess its own confidence levels and recognize where knowledge gaps exist. Doing so encourages the AI to provide caveats or qualifications with its outputs, enhancing transparency into its certainty and limitations.
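A minimal sketch of such a caveat mechanism, assuming the system exposes some confidence score for each answer (a real deployment would derive this from calibrated model probabilities):

```python
def answer_with_caveat(answer: str, confidence: float, threshold: float = 0.7) -> str:
    """Attach a qualification when self-reported confidence falls below the threshold."""
    if confidence >= threshold:
        return answer
    return f"{answer} (low confidence: {confidence:.0%}, please verify)"

print(answer_with_caveat("The meeting is at 3pm.", 0.95))
# The meeting is at 3pm.
print(answer_with_caveat("Revenue grew 12% in 2023.", 0.40))
# Revenue grew 12% in 2023. (low confidence: 40%, please verify)
```

Surfacing uncertainty this way lets a decision maker know when to double-check a claim instead of taking it at face value.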
If trust can be achieved, then the opportunity is massive
For AI to become an effective information gathering tool, it is vital that guardrails such as these are put in place to ensure that it can be trusted. To reiterate, decision makers are rightly wary of AI right now. Indeed, our survey shows that 80% have knowingly made a business decision based on information they were not sure about, and 88% have discovered inaccuracies in information used for business decisions after the fact.
However, if current issues can be addressed, and the trust gap that currently exists can be bridged, then the opportunity for AI to excel in supporting knowledge workers is significant.
We’re talking about a powerful tool that can quickly answer queries. If the right mechanisms can be put in place to ensure those answers are credible, logical and accurate, then users will be able to source exactly the information they need at speed. Critically, 95% of decision makers believe that better access to information will improve decision making. By taking the right steps to ensure that AI becomes a trustworthy information gathering asset, the decision making process can be vastly accelerated for knowledge workers.