A single missile can cost millions of dollars and take out only one critical target. A low-equity, AI-powered cyberattack costs next to nothing and can disrupt entire economies, degrading national power and eroding strategic advantage.
The rules have changed: the future of warfare is a series of asynchronous, covert cyber operations carried out below the threshold of kinetic conflict. Battles will still be fought over land, sea, and sky, but what happens in the cyber domain could have a greater bearing on their outcome than how troops maneuver on the battlefield.
We were always heading in this direction, but AI has proven a dangerous accelerant. The entire military-industrial base must be fortified against these risks, and that starts with continuous, autonomous validation of its cybersecurity defenses.
The Case for Autonomous Resilience

Today’s adversaries, whether state-sponsored actors or independent cybercrime syndicates, are deploying AI-driven agents to hobble critical systems across the entire military supply chain. These attackers aren’t focused on headline-making digital bombs but on slow attrition, applying continuous pressure to degrade functionality over time. They’re also working anonymously: AI-enabled cyberattacks are executed by autonomous agents or proxies, making attribution slow or impossible.
Consider a hypothetical attack on the U.S. Navy. The Navy depends on a vast, decentralized web of small and mid-sized suppliers for everything from propulsion components to shipboard software systems. While these systems and suppliers may coalesce into the most technologically advanced Navy in the world, their interdependence resembles human biology: a hit to one system can thoroughly destabilize another.
An adversary doesn’t need to breach the Navy directly. Instead, they can launch persistent cyberattacks on the long tail of maritime subcontractors, degrading national capability over time instead of in one massive, headline-making blow.
Third-party vendors, which often lack the financial resources to properly patch vulnerabilities, may be riddled with unpatched flaws that attackers can use as entry points. But major security vulnerabilities aren’t the only way in. AI-driven agents can autonomously compromise outdated email systems, misconfigured cloud services, or exposed remote desktops across hundreds of these suppliers.
The impacts of these attacks can look like “normal” disruptions, the result of human error or some missing piece of code: delayed component deliveries, corrupted design files, and general operational uncertainty. However, the ill effects accumulate over time, delaying shipbuilding schedules and weakening overall fleet readiness.
Emerging threatsThat’s not even accounting for sanctions. If equipment is damaged, and replacement parts or skilled maintenance teams are restricted, one attack has just crippled a nation’s chip manufacturing capacity—potentially for months or years.
These attacks also get smarter over time. AI agents are designed for continuous improvement, and as they sink deeper into a system, they become more adept at uncovering and exploiting weaknesses. The cascading damage limits recovery efforts, further delaying defense production timelines and dragging entire economies backwards.
Despite these emerging threats, most defense and industrial organizations still rely on traditional concepts of deterrence, built around visible threats and proportional response: think static defenses, annual audits, and reactive incident response. Meanwhile, adversaries are running autonomous campaigns that learn, adapt, and evolve faster than human defenders can respond. You cannot deter what you cannot detect, and you cannot retaliate against what you cannot attribute.
Facing such dire stakes, defense contractors must exploit their own environments before attackers do. That means deploying AI-powered agents across critical infrastructure—breaking in, chaining weaknesses, and fixing them—to achieve true resilience. The window for exploitation narrows, and the cost of action rises. “Low equity” means little against a high chance of failure.
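As a rough illustration of what that continuous validation could look like at its simplest, here is a minimal sketch in Python using only the standard library. The supplier hostnames, port list, and check interval are hypothetical placeholders; a real program would layer genuine attack emulation, exploit chaining, and remediation tracking on top of basic reachability checks like these.

```python
# Minimal sketch: a recurring check that flags commonly abused services
# left reachable on supplier-facing hosts. Hosts, ports, and the interval
# are hypothetical placeholders for illustration only.
import socket
import time

SUPPLIER_HOSTS = ["vendor-a.example", "vendor-b.example"]  # hypothetical
RISKY_PORTS = {3389: "RDP exposed", 445: "SMB exposed", 25: "legacy SMTP"}
CHECK_INTERVAL_SECONDS = 3600  # validate every hour

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def validate_once() -> list[str]:
    """Run one validation pass and return a list of findings."""
    findings = []
    for host in SUPPLIER_HOSTS:
        for port, label in RISKY_PORTS.items():
            if port_open(host, port):
                findings.append(f"{host}:{port} - {label}")
    return findings

if __name__ == "__main__":
    while True:
        for finding in validate_once():
            print("EXPOSURE:", finding)  # feed into ticketing or remediation
        time.sleep(CHECK_INTERVAL_SECONDS)
```

In practice the loop would be driven by an orchestrating agent that chains findings together and verifies fixes, but the principle stands: exploitation windows get measured continuously rather than audited annually.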
Leveraging AI in Proactive Defense

Fighting fire with fire sounds simple enough, but there are serious risks involved. The same AI tools that bolster organizations’ defenses against smarter, more covert attacks can also create new vulnerabilities. Large language models (LLMs) may harbor critical weaknesses in their model architecture, and the third-party components that contribute to the models’ effectiveness can introduce vulnerabilities of their own.
Any AI-powered security tools should undergo a comprehensive vetting process to identify potential risks and weaknesses. Model architecture and history, data pipeline hygiene, and infrastructural requirements, such as digital sovereignty compliance, are all factors to consider when augmenting security with AI-enabled tools.
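One way to make that vetting repeatable is to encode the criteria as an explicit checklist that every candidate tool must clear before it touches production. The sketch below is a hypothetical illustration of that idea; the field names and specific criteria are assumptions drawn from the factors above, not a formal standard.

```python
# Illustrative vetting checklist for an AI-enabled security tool.
# The criteria mirror the factors above; the fields are assumptions.
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    name: str
    model_provenance_documented: bool   # architecture and training history known
    data_pipeline_audited: bool         # inputs/outputs reviewed for leakage
    third_party_components_reviewed: bool
    meets_data_sovereignty_rules: bool  # e.g. data stays in the required jurisdiction

    def approved(self) -> bool:
        """A tool is cleared only if every criterion is satisfied."""
        return all([
            self.model_provenance_documented,
            self.data_pipeline_audited,
            self.third_party_components_reviewed,
            self.meets_data_sovereignty_rules,
        ])

# Hypothetical candidate tool: one unmet criterion blocks approval.
candidate = AIToolAssessment(
    name="example-llm-scanner",
    model_provenance_documented=True,
    data_pipeline_audited=True,
    third_party_components_reviewed=False,
    meets_data_sovereignty_rules=True,
)
print(candidate.name, "approved:", candidate.approved())  # -> False
```

Treating the assessment as data also makes it auditable: the record of why a tool was approved, or rejected, survives alongside the deployment.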
Even the cleanest, most secure AI program is not a failsafe. Defenders that rely too heavily on AI will find themselves facing many of the same problems that plague their counterparts who use outdated scanners.
A mix of false confidence and alert fatigue from automated risk notifications can lead to missed critical vulnerabilities. In a national security scenario, that can lose a battle. That can lose a war. Real, attack-driven testing compensates for AI’s blind spots, and when used in tandem with it, creates an ironclad shield against AI-enabled adversaries.
Artificial intelligence is a boon for society and industry—but it is also a weapon, and a dangerous one at that. Fortunately, it’s one that we can wield for ourselves.
One of the best website builders, Framer, just introduced a new feature that lets users update websites directly on the live page. Called On-Page Editing, the new tool is designed to let anyone, not just designers, make changes to websites quickly and safely.
In a press release shared with TechRadar Pro, Framer said that with On-Page Editing, users can fix typos, update text, swap images, and even create new pages without opening the design canvas or navigating the Content Management System (CMS). Perhaps more importantly, they can do it without relying on someone else to implement changes.
“This isn’t just about making edits easier,” said Koen Bok, CEO and co-founder of Framer. “It’s about unlocking a whole new way to collaborate. With On-Page Editing, we’re laying the foundation for websites where designers build the system, but anyone can contribute with confidence.”
Removing bottlenecks

On-Page Editing integrates directly with Framer’s platform, offering single-click editing that syncs changes to the project instantly. Rich text formatting, links, lists, and CMS page creation can all be done visually, without switching interfaces, the company explained. Framer also said that since teams can submit edits for review, bottlenecks between marketing, content, and design departments could be removed altogether.
Framer says its end-to-end control of the stack, from canvas across the CMS to hosting, enables a workflow that keeps design integrity intact while still allowing non-technical staff to contribute in real time. It expects template creators to benefit from the new offering as well, since they’ll now be able to create more static designs while the flexibility of the system lets anyone edit and publish without coding knowledge.
On-Page Editing is available immediately for all paid Framer plans. Prices range from $75 to $200 per month per site, with custom pricing available for enterprise clients.
With IT infrastructure growing more complex and teams under pressure to do more with less, it’s time for organizations to rethink their observability strategy before costs, burnout, and blind spots spiral out of control.
Across industries, legacy observability tools are buckling under the weight of today’s dynamic infrastructure. These traditional monitoring systems were designed for a world where environments barely moved, data trickled in manageable amounts, and collecting more metrics felt like progress.
But that era is long gone, and teams stuck in ‘collect everything’ mode are paying the price with runaway costs, spiraling complexity, and blind spots that turn small hiccups into full-blown outages.
In today’s fast-moving, containerized world, this strategy is backfiring. What once felt like a safety net has morphed into a data landfill, drowning teams in noise, burning them out, and surprising them with cost overruns that deliver the only visibility nobody wants: a meeting with the CFO to justify the bill.
The observability promise that fell apart

For years, engineering teams were sold a simple idea: more data meant more control. That advice made sense when infrastructure was static and applications evolved slowly – capturing everything often delivered the insights teams needed. But the rise of cloud technology changed everything, turning environments ephemeral and accelerating the pace of change and telemetry growth. Yet many teams still cling to the old ‘collect everything’ strategy, even as it drags them down.
Modern systems don’t wait. They scale instantly, shift constantly, and produce overwhelming volumes of telemetry. The tools that once brought stability are falling behind; they weren’t built for today’s level of scale or complexity. They’ve become rigid, noisy, and expensive, and the cracks are starting to show.
In sectors like aviation, even brief outages can result in millions of dollars in losses within minutes. Elsewhere, the fallout is just as real: frustrated customers, eroded trust, and reputational damage.
What once felt like a smart investment has quietly become a liability. Many organizations are waking up to the uncomfortable truth: their observability stack is no longer fit for purpose. Instead of becoming the true utility teams can rely on, it adds to the technical debt they’re actively trying to mitigate.
When teams can’t separate signal from noise, dashboards become cluttered with irrelevant metrics, alerts never cease, and real issues slip through the cracks. This constant stream of distractions imposes a steep distraction tax: every context switch, every false alarm, every hunt for meaning chips away at an engineer’s productive time and mental energy.
Over time, this chaos breeds reliance on tribal knowledge from a few seasoned ‘heroes’ who know where the bodies are buried. These heroes become the crutch that props up the system, celebrated for their late-night saves. However, a hero culture comes at a high price, with burnout, a lack of knowledge sharing, and stalled innovation, as teams spend more time firefighting than building differentiating features.
Observability should enable innovation, not kill it. When engineers are drowning in data without clarity, the best they can do is react. And in a world moving this fast, organizations that can’t move past constant triage will find themselves leapfrogged by the competition.
What does good observability look like?

Solving this problem isn’t just about new tools – it demands a strategic approach rooted in your business pain points. A strong observability strategy helps you deliver a better customer experience, enhance employee productivity, and increase conversion rates and revenue.
It delivers clear insights into the performance of your digital investments by revealing feature adoption trends, capacity and scaling gaps, and release quality and velocity issues. Done right, observability fuels a culture shift where teams embrace it as an enabler, not another distraction tax.
A clear telemetry collection methodology is essential to make observability a strategic asset rather than an operational burden. This methodology should be guided by well-defined Service Level Objectives (SLOs) and error budgets, which set the standard for what matters most to your business and customers.
By aligning telemetry collection with these objectives, you ensure your observability strategy surfaces only the data that helps measure and improve outcomes. This disciplined approach connects engineering efforts directly to business value, enabling teams to confidently invest in features, optimize performance, and scale systems without getting lost in the data deluge.
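To make the arithmetic concrete, here is a small illustrative sketch of how an SLO target translates into an error budget and how quickly observed failures consume it. The figures are invented for the example.

```python
# Minimal sketch of SLO / error-budget arithmetic; all figures are illustrative.

SLO_TARGET = 0.999            # 99.9% of requests should succeed
WINDOW_REQUESTS = 10_000_000  # requests in the 30-day SLO window
OBSERVED_ERRORS = 4_200       # failed requests so far in the window

error_budget = (1 - SLO_TARGET) * WINDOW_REQUESTS   # allowed failures: 10,000
budget_consumed = OBSERVED_ERRORS / error_budget    # fraction already spent

print(f"Error budget: {error_budget:,.0f} failed requests")
print(f"Budget consumed: {budget_consumed:.0%}")    # -> 42%
```

Telemetry that helps you track and explain that ratio earns its keep; telemetry that never moves it is a candidate for sampling or dropping.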
Even the best telemetry strategy will fail if observability is treated as an afterthought or siloed concern. Successful organizations make observability a shared responsibility by embedding it into team norms, workflows, and incentives. That starts with clear executive sponsorship to set expectations, coupled with training that gives every engineer confidence in reading, interpreting, and acting on telemetry data.
Organizational Change Management (OCM) practices help teams adopt observability incrementally, shifting the culture from reactive heroes to proactive, data-driven improvement. When observability becomes part of how everyone builds and operates software, it transforms from a distraction into a force multiplier for innovation and resilience.
Observability done right isn’t optional – it’s a competitive advantage. Teams that treat it as a strategic utility, guided by clear objectives and supported by a culture of shared responsibility, will outpace those stuck in reactive firefighting.
Now is the time to rethink your observability strategy: invest in disciplined telemetry, align it with what matters most to your business, and empower your teams to build with confidence. Strong observability turns bold strategies into market leadership and keeps your teams focused on the future.