AI isn’t just something to adopt; it’s already embedded in the systems we rely on. From threat detection and response to predictive analytics and automation, AI is actively reshaping how we defend against evolving cyber threats in real time. It’s not just a sales tactic (for some); it’s an operational necessity.
Yet, as with many game-changing technologies, the reality on the ground is more complex. The cybersecurity industry is once again grappling with a familiar disconnect: bold promises about efficiency and transformation that don’t always reflect the day-to-day experiences of those on the front lines. According to recent research, 71% of executives report that AI has significantly improved productivity, but only 22% of frontline analysts, the very people who use these tools, say the same.
When solutions are introduced without a clear understanding of the challenges practitioners face, the result isn’t transformation, it’s friction. Bridging that gap between strategic vision and operational reality is essential if AI is to deliver on its promise and drive meaningful, lasting impact in cybersecurity.
Executives love AI
According to Deloitte, 25% of companies are expected to have launched AI agents by the end of 2025, with that number projected to rise to 50% shortly thereafter. The growing interest in AI tools is driven not only by their potential but also by the tangible results they are already beginning to deliver.
For executives, the stakes are rising. As more companies begin releasing AI-enabled products and services, the pressure to keep pace is intensifying. Organizations that can’t demonstrate AI capabilities, whether in their customer experience, cybersecurity response, or product features, risk being perceived as laggards, out-innovated by faster, more adaptive competitors. Across industries, we're seeing clear signals: AI is becoming table stakes, and customers and partners increasingly expect smarter, faster, and more adaptive solutions.
This competitive urgency is reshaping boardroom conversations. Executives are no longer asking whether they should integrate AI, but how quickly and effectively they can do so, without compromising trust, governance, or business continuity. The pressure isn’t just to adopt AI internally to drive efficiency, but to productize it in ways that enhance market differentiation and long-term customer value.
But the scramble to implement AI is doing more than reshaping strategy, it’s unlocking entirely new forms of innovation. Business leaders are recognizing that AI agents can do more than just streamline functions; they can help companies bring entirely new capabilities to market. From automating complex customer interactions to powering intelligent digital products and services, AI is quickly moving from a behind-the-scenes tool to a front-line differentiator. And for executives willing to lead with bold, well-governed AI strategies, the payoff isn’t just efficiency, it’s market relevance.
Analysts distrust AI
If anyone wants to make their job easier, it’s a SOC analyst, so their skepticism of AI comes from experience, not cynicism. The stakes in cybersecurity are high, and trust is earned, especially when systems that are designed to protect critical assets are involved. Research shows that only 10% of analysts currently trust AI to operate fully autonomously. This skepticism isn’t about rejecting innovation, it’s about ensuring that AI can meet the high standards required for real-time threat detection and response.
That said, while full autonomy is not yet on the table, analysts are beginning to see tangible results that are gradually building trust. For example, 56% of security teams report that AI has already boosted productivity by streamlining tasks, automating routine processes, and speeding up response times. These tools are increasingly trusted for well-defined tasks, giving analysts more time to focus on higher-priority, complex threats.
This incremental trust is key. While 56% of security professionals express confidence in AI for threat detection, they still hesitate to let it manage security autonomously. As AI tools continue to prove their ability to process vast amounts of data and provide actionable insights, initial skepticism is giving way to more measured, conditional trust.
Looking ahead
Closing the perception gap between executive enthusiasm and analyst skepticism is critical for business growth. Executives must create an environment where analysts feel empowered to use AI to enhance their expertise without compromising security standards. Without this, the organization risks falling into the hype cycle, where AI is overpromised but underdelivered.
In cybersecurity, where the margin for error is razor-thin, collaboration between AI systems and human analysts is critical. As these tools mature and demonstrate real-world impact, trust will grow, especially when their use is grounded in transparency, explainability, and accountability.
When AI is thoughtfully integrated and aligned with practitioner needs, it becomes a reliable asset that not only strengthens defenses but also drives long-term resilience and value across the organization.
It’s a scenario that plays out far too often: A mid-sized company runs a routine threat validation exercise and stumbles on something unexpected, like an old infostealer variant that has been quietly active in their network for weeks.
This scenario doesn’t require a zero-day exploit or sophisticated malware. All it takes is one missed setting, inadequate endpoint oversight, or a user clicking what they shouldn’t. Such attacks don’t succeed because they’re advanced. They succeed because routine safeguards aren’t in place.
Take Lumma Stealer, for example. It spreads quickly through a simple phishing lure that tricks users into running a fake CAPTCHA script, but it can be stopped cold by something as routine as restricting PowerShell access and providing basic user training. However, in many environments, even those basic defenses aren’t deployed.
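To make the point concrete, here is a minimal audit sketch in Python. It assumes a Windows host where the execution policy is pushed through the standard Group Policy registry path (an assumption worth verifying in your environment), and it is only a baseline check: execution policy alone is not a security boundary, and genuinely restricting PowerShell usually also involves application control such as AppLocker or WDAC.

```python
# Minimal baseline check (sketch): is a PowerShell execution policy even enforced?
# Assumes Windows and the standard Group Policy registry location; adjust as needed.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\PowerShell"  # assumed GPO path

def read_powershell_policy():
    """Return the machine-wide execution policy set by Group Policy, or None if unset."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            policy, _ = winreg.QueryValueEx(key, "ExecutionPolicy")
            return policy
    except FileNotFoundError:
        return None  # no centrally enforced policy found

if __name__ == "__main__":
    policy = read_powershell_policy()
    if policy in (None, "Unrestricted", "Bypass"):
        print(f"WARNING: PowerShell execution policy is weak or unset ({policy})")
    else:
        print(f"PowerShell execution policy enforced: {policy}")
```

Even a quick check like this, run across a fleet, surfaces the kind of quiet gap that lets a commodity infostealer in.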
This is the story behind many breaches today. Not headline-grabbing hacks or futuristic AI assaults—just overlooked updates, fatigued teams and basic cyber hygiene falling through the cracks.
Security Gaps That Shouldn’t Exist in 2025
Security leaders know the drill: patch the systems, limit access and train employees. Yet these essentials often get neglected. While the industry chases the latest exploits and talks up advanced tools, attackers keep targeting the same weak points. They don’t have to reinvent the wheel. They just need to find one that’s loose.
Just as the same old techniques are still at work, old malware is making a comeback. Variants like Mirai, Matsu and Klopp are resurfacing with minor updates and major impact. These aren’t sophisticated campaigns, but recycled attacks retooled just enough to slip past tired defenses.
The reason they work isn’t technical, it’s operational. Security teams are burned out. They’re managing too many alerts, juggling too many tools and doing it all with shrinking budgets and rising expectations. In this kind of environment, the basics don’t just get deprioritized, they get lost.
Burnout Is a Risk Factor
The cybersecurity industry often defines risk in terms of vulnerabilities, threat actors and tool coverage, but burnout may be the most overlooked risk of all. When analysts are overwhelmed, they miss routine maintenance. When processes are brittle, teams can’t keep up with the volume. When bandwidth runs out, even critical tasks can get sidelined.
This isn’t about laziness. It’s about capacity. Most breaches don’t reveal a lack of intelligence. They just demonstrate a lack of time.
Meanwhile, phishing campaigns are growing more sophisticated. Generative AI is making it easier for attackers to craft personalized lures. Infostealers continue to evolve, disguising themselves as login portals or trusted interfaces that lure users into running malicious code. Users often infect themselves, unknowingly handing over credentials or executing code.
These attacks still rely on the same assumptions: someone will click. The system will let it run. And no one will notice until it’s too late.
Why Real-World Readiness Matters More Than Tools
It’s easy to think readiness means buying new software or hiring a red team, but true preparedness is quieter and more disciplined. It’s about confirming that defenses such as access restrictions, endpoint rules and user permissions are working against the actual threats.
Achieving this level of preparedness takes more than monitoring generic threat feeds. Knowing that ransomware is trending globally isn’t the same as knowing which threat groups are actively scanning your infrastructure. That’s the difference between a broader weather forecast and radar focused on your ZIP code.
Organizations that regularly validate controls against real-world, environment-specific threats gain three key advantages.
First, they catch problems early. Second, they build confidence across their team. When everyone knows what to expect and how to respond, fatigue gives way to clarity. Third, by knowing which threats matter and which actors are focused on them, they can prioritize the fundamental activities that otherwise get ignored.
You may not need to patch every CVE right now, just the ones being used by the threat actors targeting you. Which areas of your network are they actively performing reconnaissance on? Those subnets probably need more focus on patching and remediation.
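As a rough illustration of that prioritization logic, here is a minimal sketch in Python. Every data source in it is a hypothetical placeholder: in practice the exposure map would come from your vulnerability scanner, the actor-linked CVEs from your threat-intelligence feed, and the reconnaissance telemetry from IDS or honeypot logs.

```python
# Sketch: rank patching work by (a) CVEs used by actors targeting you and
# (b) subnets where those actors are actively doing reconnaissance.
# All values below are invented placeholders for illustration.

exposure = {  # subnet -> CVEs present on hosts in that subnet (from your scanner)
    "10.0.1.0/24": {"CVE-2024-0001", "CVE-2023-1234", "CVE-2022-9999"},
    "10.0.2.0/24": {"CVE-2023-1234"},
}
actor_cves = {"CVE-2023-1234", "CVE-2021-4444"}   # CVEs exploited by actors targeting you
recon_subnets = {"10.0.1.0/24"}                   # subnets seen in their scanning activity

def prioritize(exposure, actor_cves, recon_subnets):
    """Score each (subnet, CVE) pair; higher scores get patched first."""
    work = []
    for subnet, cves in exposure.items():
        for cve in cves:
            score = 2 * (cve in actor_cves) + (subnet in recon_subnets)
            work.append((score, subnet, cve))
    return sorted(work, reverse=True)

for score, subnet, cve in prioritize(exposure, actor_cves, recon_subnets):
    print(f"priority={score}  {subnet}  {cve}")
```

The scoring here is deliberately crude; the point is simply that intelligence about who is targeting you, not global trend data, is what turns an unmanageable patch backlog into a short, ordered list.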
Security Doesn’t Need to Be Sexy, It Needs to Work
There’s a cultural bias in cybersecurity toward innovation and incident response. The new tool, the emergency patch and the major breach all get more attention than the daily habits that quietly prevent problems.
Real resilience depends on consistency. It means users can’t run untrusted PowerShell scripts. It means patches are applied on a prioritized schedule, not “when we get around to it.” It means phishing training isn’t just a checkbox, but a habit reinforced over time.
These basics aren’t glamorous, but they work. In an environment where attackers are looking for the easiest way in, doing the simplest things correctly is one of the most effective strategies a team can take.
Discipline Is the New Innovation
The cybersecurity landscape will continue to change. AI will keep evolving, adversaries will go on adapting, and the next headline breach is likely already in motion. The best defense isn’t more noise or more tech, but better discipline.
Security teams don’t need to do everything. They need to do the right things consistently. That starts with reestablishing routine discipline: patch, configure, test, rinse and repeat. When those fundamentals are strong, the rest can hold.
For CISOs, now is the time to ask a simple but powerful question: Are we doing the basics well, and can we prove it? Start by assessing your organization’s hygiene baseline. What patches are overdue? What controls haven’t been tested in months? Where are your people stretched too thin to execute the essentials? The answers won’t just highlight the risks, they’ll point toward the pathway to resilience.
File-sharing platform WeTransfer spent a frantic day reassuring users that it has no intention of using any uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used for making or improving machine learning tools.
The offending language buried in the ToS said that using WeTransfer gave the company the right to use the data "for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy."
That part about machine learning and the general broad nature of the text seemed to suggest that WeTransfer could do whatever it wanted with your data, without any specific safeguards or clarifying qualifiers to alleviate suspicions.
Perhaps understandably, a lot of WeTransfer users, who include many creative professionals, were upset at what this seemed to imply. Many started posting their plans to switch away from WeTransfer to other services in the same vein. Others began warning that people should encrypt files or switch to old-school physical delivery methods.
One user wrote on X on July 15, 2025: "Time to stop using @WeTransfer who from 8th August have decided they’ll own anything you transfer to power AI."
WeTransfer noted the growing furor around the language and rushed to try and put out the fire. The company rewrote the section of the ToS and shared a blog explaining the confusion, promising repeatedly that no one's data would be used without their permission, especially for AI models.
"From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We’ve since updated the terms further to make them easier to understand," WeTransfer wrote in the blog. "We’ve also removed the mention of machine learning, as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension."
While still granting WeTransfer a standard license to operate and improve the service, the new text omits references to machine learning, focusing instead on the familiar scope needed to run the platform.
Clarified privacy
If this feels a little like deja vu, that's because something very similar happened about a year and a half ago with another file transfer platform, Dropbox. A change to the company's fine print implied that Dropbox was taking content uploaded by users in order to train AI models. Public outcry led to Dropbox apologizing for the confusion and fixing the offending boilerplate.
The fact that it happened again in such a similar fashion is interesting not because of the awkward legal language used by software companies, but because it implies a knee-jerk distrust in these companies to protect your information. Assuming the worst is the default approach when there's uncertainty, and the companies have to make an extra effort to ease those tensions.
Creative professionals are especially sensitive to even the appearance of data misuse. In an era where tools like DALL·E, Midjourney, and ChatGPT train on the work of artists, writers, and musicians, the stakes are very real. Given the lawsuits and boycotts by artists over how their creations are used, not to mention broader suspicions of corporate data use, the kind of reassurance WeTransfer offered is probably something tech companies will want to have in place early on, lest they face the misplaced wrath of their customers.
A post on LinkedIn seeking graphic designers for Xbox is going viral for the irony of terrible AI-generated graphics. Principal Development Lead for Xbox Graphics, Mike Matsel, shared a post announcing the roles, accompanied by what at first glance appears to be an innocuous cartoon of a woman at a workstation typing code. Except the code is on the back of her monitor, and that's just the beginning of the issues with the image.
The fact that Microsoft concluded the latest of several rounds of layoffs, affecting a total of more than 9,000 people, including many in the Xbox division, just a few weeks ago, makes it even more awkward.
(Image credit: LinkedIn/Mike Matsel)
The more you examine the image, the more obvious it becomes that it was (poorly) produced with AI. The computer is unconnected to anything, the desk sort of fades away into nothingness, and the shadows don't make sense. Plus, would Microsoft want a graphic of someone clearly using Apple headphones? Not to mention the fact that, in 2025, you're very unlikely to see someone with the corded iPhone headphones of nearly 20 years ago.
The image does at least sell the idea that Microsoft desperately needs graphic designers, or at least people who know when graphics are very wrong. The dozens of comments on the post emphasize just how annoying many people find it, and a lot of them come from developers and graphic designers who might otherwise be interested in the positions.
Awkward AI
The fact that this wasn’t just a bad image, but one that undermines the entire point of the job being advertised, is truly mind-boggling. It’s like handing out flyers for a bakery that uses clip art of a melting candle with "bread" written on the attached label.
It's so bizarrely bad that more than a few commenters wondered if it was on purpose. It might be a way to draw attention to the open positions, or, unlikely as this may be, a form of malicious compliance from someone instructed to use AI to announce the open jobs after their colleagues in those positions were recently let go. Or maybe it was the sharpest satire ever seen on LinkedIn.
Those are wildly unlikely theories, but it's telling that they aren't totally impossible. An ad symbolizing everything people are worried about, especially regarding the very artistic jobs being advertised, would be far too blatant to use in a joke. Still, apparently, that's just reality now.
The fact that Microsoft is currently investing billions of dollars in AI only adds to the dissonant reaction. Even if it wasn't formally approved by Microsoft, it still has their Xbox logo on it. Then again, even senior executives can faceplant when discussing and using AI.
Just last week, Xbox Game Studios Publishing Executive Producer Matt Turnbull suggested that people recently let go could turn to AI chatbots to help process their emotional distress and find new jobs. He eventually took down the essay encouraging former employees to use AI tools both to find jobs and for “emotional clarity,” but this graphic disaster remains visible to the public, unlike the code hiding behind the back of the monitor.
Amazon Web Services (AWS) has unveiled Kiro, an IDE which uses AI agents to streamline the development process.
Available now in preview, Kiro looks to cut down on potential issues with "vibe coding", the practice of asking AI agents to create and build software with minimal human interaction.
As well as helping with coding, Kiro can also automatically create and update project plans and technical blueprints, solving one of the most troublesome issues for developers who are still getting to grips with the potential AI brings.
AWS Kiro
Announcing the launch, AWS said Kiro aims to help developers move from “vibe coding to viable code.”
It works by breaking down prompts into structured components, which can then be used to guide implementation and testing, as well as tracking any changes as the code evolves, ensuring no inconsistencies break through.
There's also Model Context Protocol (MCP) support for connecting specialized tools, steering rules to guide AI behavior across your project, and agentic chat for ad-hoc coding tasks.
Finally, it can also automatically check code to make sure nothing is amiss, so developers can submit or ship it with greater confidence.
Kiro looks “to solve the fundamental challenges that make building software products so difficult — from ensuring design alignment across teams and resolving conflicting requirements, to eliminating tech debt, bringing rigor to code reviews, and preserving institutional knowledge when senior engineers leave,” Nikhil Swaminathan, Kiro’s product lead, and Deepak Singh, Amazon’s vice president of developer experience and agents, said.
"Kiro is great at ‘vibe coding’ but goes way beyond that—Kiro’s strength is getting those prototypes into production systems with features such as specs and hooks."
For now, Kiro is free to use during the preview period, but it seems AWS is looking at introducing three pricing tiers: a free version with 50 agent interactions per month; a Pro tier at $19 per user per month with 1,000 interactions; and a Pro+ tier at $39 per user per month with 3,000 interactions.
"Kiro is really good at "vibe coding" but goes well beyond that," Amazon CEO Andy Jassy wrote in a post on X.
"While other AI coding assistants might help you prototype quickly, Kiro helps you take those prototypes all the way to production by following a mature, structured development process out of the box. This means developers can spend less time on boilerplate code and more time where it matters most – innovating and building solutions that customers will love.
A critical flaw in the wireless systems used across US rail networks has remained unresolved for more than a decade, exposing trains to remote interference.
The vulnerability affects End-of-Train (EoT) devices, which relay data from the last carriage to the front of the train, forming a link with the Head-of-Train (HoT) module.
Although the issue was flagged in 2012, it was largely dismissed until federal intervention forced a response.
Ignored warnings and delayed responses
Hardware security researcher Neils first identified the flaw in 2012, when software-defined radios (SDRs) began to proliferate.
The discovery revealed that these radios could easily mimic signals sent between the HoT and EoT units.
Since the system relies on a basic BCH checksum and lacks encryption, any device transmitting on the same frequency could inject false packets.
In a concerning twist, the HoT is capable of sending brake commands to the EoT, which means an attacker could stop a train remotely.
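The weakness is easy to illustrate: a checksum only detects accidental corruption, so anyone who knows the algorithm can append a valid checksum to a forged command, whereas a keyed message authentication code requires a secret the attacker does not hold. The sketch below shows the contrast in Python, using CRC-32 as a stand-in for the actual BCH code and an invented packet format; both are assumptions for illustration, not the real EoT/HoT protocol.

```python
# Illustration only: why an unauthenticated checksum does not prevent forgery.
# The packet format and CRC-32 (standing in for the BCH code) are invented for this sketch.
import hmac, hashlib, zlib

def forged_packet(payload: bytes) -> bytes:
    """Anyone who knows the algorithm can compute a valid checksum for a fake command."""
    checksum = zlib.crc32(payload).to_bytes(4, "big")
    return payload + checksum

def authenticated_packet(payload: bytes, key: bytes) -> bytes:
    """With a keyed MAC, a forger without the shared key cannot produce a valid tag."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

cmd = b"EOT:BRAKE:EMERGENCY"            # hypothetical brake command
print(forged_packet(cmd).hex())          # passes a checksum-only receiver, no secret needed
print(authenticated_packet(cmd, b"shared-secret").hex())  # would require the key to forge
```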
“This vulnerability is still not patched,” Neils stated on social media, revealing it took over a decade and a public advisory from the Cybersecurity and Infrastructure Security Agency (CISA) before meaningful action was taken.
The issue, now catalogued as CVE-2025-1727, allows for the disruption of U.S. trains with hardware costing under $500.
Neils's findings were met with skepticism by the Association of American Railroads (AAR), which dismissed the vulnerability as merely “theoretical” back in 2012.
Attempts to demonstrate the flaw were thwarted because the Federal Railroad Administration lacked a dedicated test track and the AAR denied access to operational sites.
Even after the Boston Review published the findings, the AAR publicly disputed them in a piece in Fortune.
By 2024, the AAR’s Director of Information Security continued to downplay the threat, arguing that the devices in question were approaching end-of-life and didn’t warrant urgent replacement.
It wasn’t until CISA issued a formal advisory that the AAR began outlining a fix. In April 2025, an update was announced, but full deployment is not expected until 2027.
The vulnerability stems from technology developed in the 1980s, when frequency restrictions reduced the risk of interference, but today’s widespread access to SDRs has altered the risk landscape dramatically.
“Turns out you can just hack any train in the USA and take control over the brakes,” Neils said, encapsulating the broader concern.
The ongoing delay and denial mean US trains have, in effect, been sitting on a powder keg that could be set off at any time.
Via Tom's Hardware
Although more and more applications are getting AI overhauls, new F5 research claims only 2% of enterprises are highly ready for AI.
More than one in five (21%) fall into the low-readiness category, and while three-quarters (77%) are considered moderately ready, they continue to face security and governance hurdles.
This comes as one in four applications use AI, with many organizations splitting their AI usage across multiple models including paid models like GPT-4 and open-source models like Llama, Mistral and Gemma.
Enterprises aren't benefitting from the AI they have access to
Although 71% of the State of AI Application Strategy Report respondents said they use AI to enhance security, F5 highlighted ongoing challenges with security and governance. Fewer than one in three (31%) have deployed AI firewalls, and only 24% perform continuous data labelling, potentially increasing risks.
Looking ahead, one in two (47%) say they plan on deploying AI firewalls in the next year. F5 also recommends that enterprises diversify AI models across paid and open-source options, scale AI usage to operations, analytics and security, and deploy AI-specific protections like firewalls and data governance strategies.
At the moment, it's estimated that two-thirds (65%) use two or more paid models and at least one open-source model, demonstrating considerable room for improvement.
"As AI becomes core to business strategy, readiness requires more than experimentation—it demands security, scalability, and alignment," F5 CPO and CMO John Maddison explained.
The report highlights how a lack of maturity can stifle growth, introduce operational bottlenecks and create compliance challenges for enterprises.
"AI is already transforming security operations, but without mature governance and purpose-built protections, enterprises risk amplifying threats," Maddison added.