The Ministry of Defence (MOD) recently published a document on 'Secure by Design' challenges that represents something we rarely see in government cybersecurity: a transparent acknowledgment of the complexities involved in implementing security from first principles.
Secure by design is a fundamental approach that embeds security into systems from the very beginning of the design process as opposed to treating it as a bolt-on feature later in development.
Having spent years advocating for the human element in security, I find it refreshing to see official recognition that technical controls are only as effective as the people implementing them.
Addressing the Security Skills Challenge

The MOD's first identified problem is "How do we up-skill UK defence in 'Secure by Design'?"
Their acknowledgment that effective implementation requires a "one team" approach across UK defense reflects the reality that security cannot be siloed within technical teams.
This aligns perfectly with what I've observed in organizations with mature security cultures—security becomes everyone's responsibility, not just the security department's concern.
The Knowledge Distribution Problem

Perhaps most intriguing is problem two: "How does 'Secure by Design' account for unevenly distributed information and knowledge?"
The MOD correctly identifies that information asymmetry exists for various legitimate reasons. What makes this assessment valuable is the recognition that not all information-sharing barriers stem from poor security culture; some exist by design and necessity.
Imagine a family planning a surprise birthday party for their grandmother. Different family members have different pieces of information that they intentionally don't share with everyone:
The daughter knows the guest list and has sent invitations directly to each person, asking them not to discuss it openly on family group chats,
The son has arranged the venue and catering, with specific dietary requirements for certain guests,
The grandchildren are handling decorations and have a theme they're working on,
And most importantly—nobody tells grandmother anything about any of this.
This isn't because the family has poor communication skills or doesn't trust each other. These information barriers exist by design and necessity to achieve the goal of surprising grandmother. If everyone shared everything with everyone else, the surprise would be ruined.
The MOD's approach

In the MOD's security context, this is similar to how:
Certain threat intelligence can't be shared with all suppliers because doing so might reveal intelligence-gathering capabilities,
Suppliers can't share all their proprietary technology details even with clients like the MOD, as they need to protect their competitive advantage,
Specific security controls might be kept confidential from general staff to prevent those controls from being circumvented.
These aren't failures of security culture; they're intentional compartmentalization that can make security work possible in the first place. The challenge isn't eliminating these barriers but designing systems that function effectively despite them.
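To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical party and compartment names; nothing here reflects actual MOD practice) of how need-to-know compartmentalization can be expressed as an explicit access rule rather than a communication failure:

```python
# Minimal sketch of need-to-know compartmentalization.
# All party and compartment names are hypothetical illustrations,
# not a reflection of any real MOD access model.

COMPARTMENTS = {
    "threat_intel": {"mod_analysts"},       # sharing could reveal collection capabilities
    "supplier_ip": {"supplier_engineers"},  # protects a supplier's competitive advantage
    "control_config": {"security_team"},    # hidden from general staff to prevent circumvention
}

def can_access(party: str, compartment: str) -> bool:
    """A party may read a compartment only if explicitly granted."""
    return party in COMPARTMENTS.get(compartment, set())

# The barriers are intentional: the system must still function despite them.
assert can_access("mod_analysts", "threat_intel")
assert not can_access("supplier_engineers", "threat_intel")
assert not can_access("general_staff", "control_config")
```

The point of the sketch is that the absence of access here is a design decision, not poor communication; a 'Secure by Design' process has to work within such rules rather than try to flatten them.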
This reflects the nuanced reality of human behavior in security contexts. People don't withhold security information solely out of territoriality or negligence; often, legitimate constraints prevent the ideal level of transparency.
The Early Design Challenge

The third problem addresses a familiar paradox: how to implement security at the earliest stages of capability acquisition, when the capability itself is barely defined.
In other words, it's like trying to build a high-tech security system for a house when you only have a rough sketch of what the house might eventually look like: you know you need protection, but it's difficult to plan specific security measures while you're still deciding how many doors and windows there will be, what valuables will be stored inside, or even where the house will be located. As the MOD puts it, at this stage a capability might be "little more than a single statement of user need."
This connects directly to how humans approach risk management. When primary objectives (delivering military capability) compete with secondary concerns (security), practical compromises inevitably emerge. The MOD's candid acknowledgment that "cyber security will always be a secondary goal" reflects a pragmatic understanding of how priorities function in complex organizations.
Through-Life Security

Problem four addresses perhaps the most demanding human aspect of security: maintaining security rationale and practice across decades of a capability's lifespan. With defense platforms potentially remaining operational for 30+ years, today's security decisions must make sense to tomorrow's engineers.
The question of continuous risk management becomes particularly relevant as organizations encounter new threats over their extended lifespans. How human operators interpret and respond to evolving risk landscapes determines the long-term security posture of these systems.
Building a Collaborative Security Culture

The MOD recognizes that 'Secure by Design' implementation isn't merely a technical challenge but fundamentally about collaboration among people across organizational, disciplinary, and national boundaries.
The MOD's approach suggests a shift toward a more mature security culture — one that acknowledges limitations, seeks external expertise, and recognizes the complex interplay between human factors and technical controls. Their openness about needing help from academia and industry demonstrates a collaborative mindset essential for addressing complex security challenges.
This collaborative approach to security culture stands in stark contrast to the traditional government tendency toward self-sufficiency. By explicitly inviting external perspectives, the MOD demonstrates an understanding that diverse viewpoints strengthen security posture rather than compromising it.
Security isn't about having all the answers—it's about creating the conditions where people can collaboratively develop appropriate responses to ever-changing threats.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Agentic AI is one of the latest concepts in artificial intelligence, now gaining real traction beyond its early buzz. Ongoing advancements in Agentic AI are accelerating the development of autonomous business systems, building on the achievements of machine learning.
Operating as an independent ‘agent’, this technology is equipped to make informed decisions based on multimodal data and algorithmic logic, and can then ‘learn’ and evolve through experience.
Even more exciting is its capacity to act independently. It’s this unique ability to adapt, plan, and carry out complex tasks without human oversight that distinguishes Agentic AI from earlier generations of AI tools.
In supply chains, for instance, AI agents can track market activity and historical demand trends to forecast inventory needs and implement measures to avoid shortages, such as by automating parts of the restocking processes. These agents shift their behavior in response to changing market conditions, boosting efficiency and performance. It's therefore no surprise that 26% of business leaders report their organizations are beginning to shape strategic approaches around Agentic AI.
However, as great as it sounds to outsource such tasks to Agentic AI, we also need to err on the side of caution. For all its autonomous power, how can the actions and outputs of AI agents be fully trusted? If we rely on Agentic AI to complete sophisticated tasks on its own, how do we ensure its decisions are truly grounded in what’s happening in the real world, or in the enterprise’s view of the world?
In the same way our brains use observation and additional inputs to draw conclusions, AI agents need to rely on many external sources and signals to enhance their reasoning capabilities.
This need can be met by solutions and platforms that collect and present data in a way that’s accessible and retrievable. Here’s how:
The trust challenge in autonomous AI systems

As discussed, what sets Agentic AI apart from other AI systems is its ability to act autonomously, not just engage in a linear conversation. The complexity of the tasks agents complete typically requires them to refer to multiple, dynamic external sources. As a result, the risk of something going wrong automatically increases. For example, you might trust a chatbot to provide you with an update on the status of a claim or refund, but would you feel as trusting when giving an AI agent your credit card details to book a flight for you?
Away from conversational AI, task-based agents plan and adapt their actions depending on the context they’re given. They delegate subtasks to the various tools available through a process often referred to as “chaining” (the output of one action becomes the input for the next). This means that queries (or tasks) can be broken down into smaller tasks, each requiring access to data in real time, processed iteratively to mimic human problem-solving.
Each decision in the chain is informed by the environment being monitored, i.e., the sources of data. As a result, explainable and accurate data retrieval is required at each step of the chain for two reasons. Firstly, users need to know why the AI agent has landed on a particular decision and have visibility of the data source it’s based on.
They need to be able to trust that the action is, in fact, the most effective and efficient. Secondly, they need to be able to optimize the process to get the best possible result each time, analysing each stage of the output and learning from any dissatisfactory results.
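A rough sketch of what such a chain might look like, assuming invented step functions and stub data rather than any specific agent framework, with a provenance trace recorded at each step so the final decision stays explainable:

```python
# Illustrative "chaining" sketch: each step's output becomes the next step's
# input, and each retrieval records its data source so users can see why the
# agent landed on a decision. Step names and data are invented for illustration.

def run_chain(initial, steps):
    value, trace = initial, []
    for step in steps:
        output, source = step(value)
        trace.append({"step": step.__name__, "input": value, "source": source})
        value = output
    return value, trace

def forecast_demand(region):
    # Stub for a retrieval step over historical demand data.
    history = {"uk": [120, 135, 150]}
    return sum(history[region]) / len(history[region]), "sales_db.history"

def plan_restock(forecast):
    # Simple safety-margin policy applied to the forecast.
    return round(forecast * 1.1), "inventory_policy.v1"

order, trace = run_chain("uk", [forecast_demand, plan_restock])
# 'trace' now shows which source informed each step behind the final order.
```

Inspecting the trace at each link is what lets a user both justify the outcome and spot which stage to tune when a result disappoints.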
When an agent is trusted to complete sophisticated tasks based on multiple retrieval steps, the value of the data needed to support the decision-making process multiplies significantly.
The need to make reliable enterprise data available to agents is key. This is why businesses are increasingly recognising the power of graph database technology for the broad range of retrieval strategies it offers, which in turn multiply the value of the data.
How graph technology strengthens AI reasoning

As Agentic AI drives decisions from data, the insights underpinning these decisions must be accurate, transparent, and explainable – benefits that graph databases are uniquely optimized to deliver. Gartner already identifies knowledge graphs as an essential capability for GenAI applications, as GraphRAG (Retrieval-Augmented Generation in which the retrieval path includes a knowledge graph) can vastly improve the accuracy of outputs.
The unique structure of knowledge graphs, composed of ‘nodes’ and ‘edges’, is what enables higher-quality responses. Nodes represent entities in a graph (like a person or place), and edges represent the relationships between those entities, i.e., how they connect to one another. In this type of structure, the bigger and more complex the data, the more previously hidden insights can be revealed. These characteristics are invaluable in presenting data in a way that makes it easier for AI agents to complete tasks reliably and usefully.
Users have been finding that GraphRAG answers are not only more accurate but also richer, speedier, more complete, and consequently more useful. For example, an AI agent addressing customer service queries could offer a particular discounted broadband package based on a complete understanding of the customer, as a result of using GraphRAG to connect disparate information about said customer. How long has the customer been with the company? What services are they currently using? Have they filed complaints before?
To answer these questions, nodes can be created to represent each aspect of the customer experience with the company (including previous interactions, service usage, and location), and edges to show the cheapest or best service for them. A fragmented and dispersed view of the data could lead to the agent offering up a discounted package when it was not due, leading to cost implications for the business.
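Sketched as data, the broadband example might look like this (customer IDs, packages, and attributes are invented, and a production system would use a graph database rather than Python dicts):

```python
# Toy knowledge graph for the broadband example. Nodes are entities with
# properties; edges are typed relationships. All IDs and values are invented.

nodes = {
    "cust_42":  {"type": "customer", "tenure_years": 6, "location": "Leeds"},
    "bb_100":   {"type": "package", "speed_mbps": 100, "discounted": True},
    "ticket_7": {"type": "complaint", "status": "resolved"},
}

edges = [
    ("cust_42", "SUBSCRIBES_TO", "bb_100"),
    ("cust_42", "FILED", "ticket_7"),
]

def neighbours(node, rel=None):
    """Traverse outgoing edges, optionally filtered by relationship type."""
    return [dst for src, r, dst in edges
            if src == node and (rel is None or r == rel)]

# A GraphRAG-style retrieval answers "what does this customer use?" by
# traversing relationships instead of keyword-matching fragmented records.
assert neighbours("cust_42", "SUBSCRIBES_TO") == ["bb_100"]
```

Because tenure, usage, and complaint history are all reachable from the one customer node, the agent's eligibility decision is grounded in connected facts rather than scattered fragments.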
As the CEO of Klarna put it, “Feeding an LLM the fractioned, fragmented, and dispersed world of corporate data will result in a very confused LLM”. But the outcome is very different when data is connected in a graph: positive results have been reported by the likes of LinkedIn’s customer service team, which has reduced median per-issue resolution time by 28.6% since implementing GraphRAG.
Why connected data is key to Agentic AI readiness

With every iteration, the LLMs behind AI agents are advancing quickly, and agentic frameworks are making it easier to build complex, multi-step applications. The next vital move is to make enterprise data as rich, connected, and contextually aware as possible, so it's fully accessible to these powerful agents.
Taking this step allows businesses to unlock the full value of their data, enabling agents that are not only more accurate and efficient but also easier to understand and explain. This is where the integration of Agentic AI and knowledge graphs proves transformational. Connected data gives agents the context they need to think more clearly, generate smarter outputs, and have a greater impact.
Cybercriminals are abusing a legitimate Google service to bypass email protection mechanisms and deliver phishing emails straight to people’s inboxes.
Cybersecurity researchers at KnowBe4, who first spotted the attacks, warn that the crooks are using Google AppSheet, a no-code application development platform for mobile and web apps, and abusing its workflow automation to send emails from the "noreply@appsheet.com" address.
The phishing emails are mimicking Facebook, and are designed to trick people into giving away their login credentials, and 2FA codes, for the social media platform.
2FA codes and session tokens

The emails, which were sent in bulk and on a fairly large scale, came from a legitimate source, successfully bypassing Microsoft and Secure Email Gateways (SEGs) that rely on domain reputation and authentication checks (SPF, DKIM, DMARC).
Furthermore, since AppSheet can generate unique IDs, each email was slightly different, which also helped bypass traditional detection systems.
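One way to see the defensive gap, as a rough illustration rather than KnowBe4's or any vendor's actual detection logic, is that domain authentication checks pass even when the brand named in the body does not match the authenticated sender:

```python
# Hedged illustration of a brand/sender mismatch heuristic. This is NOT a
# real product's detection logic, just a sketch of the gap it highlights:
# SPF/DKIM/DMARC validate the sending domain, not the impersonated brand.

BRAND_DOMAINS = {"facebook": "facebook.com"}  # illustrative mapping

def brand_sender_mismatch(sender: str, body: str) -> bool:
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    for brand, domain in BRAND_DOMAINS.items():
        if brand in body.lower() and not (
            sender_domain == domain or sender_domain.endswith("." + domain)
        ):
            return True  # brand mentioned, but sent from an unrelated domain
    return False

# Authenticates cleanly, yet the content impersonates another brand.
assert brand_sender_mismatch("noreply@appsheet.com",
                             "Your Facebook account will be deleted in 24 hours")
assert not brand_sender_mismatch("security@facebook.com", "Facebook login alert")
```

Content-aware checks like this complement, rather than replace, reputation and authentication controls, which is exactly why abuse of a legitimate sending service is so effective.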
The emails themselves spoofed Facebook. The crooks tried to trick victims into thinking they infringed on someone’s intellectual property, and that their accounts were due to be deleted within 24 hours.
Unless, of course, they submit an appeal through a conveniently placed “Submit an Appeal” button in the email.
Clicking on the button leads the victim to a landing page impersonating Facebook, where they can provide their login credentials and 2FA codes, which are then relayed to the attackers.
The page is hosted on Vercel which, KnowBe4 says, is a “reputable platform known for hosting modern web applications”. This further strengthens the entire campaign’s credibility.
The attack has a few additional contingencies. The first attempt at logging in returns a "wrong password" result, not because the victim typed in the wrong credentials, but in order to make them confirm the submission.
Also, the 2FA codes that are provided are immediately submitted to Facebook and in return - the crooks grab a session token which grants them persistence even after a password change.