
Feed aggregator


In Alabama, a dredging project in Mobile Bay brings together unlikely allies

NPR News Headlines - Tue, 08/12/2025 - 04:00

Dredging waterways for navigation is a centuries-old practice, but this project is controversial because the mud being dug out of the channel is put into other parts of Mobile Bay.

(Image credit: Blake Jones for NPR)

Categories: News

Welcome to the era of empathic Artificial Intelligence

TechRadar News - Tue, 08/12/2025 - 03:47

Imagine a health plan member interacting with their insurer’s virtual assistant, typing, “I just lost my mom and feel overwhelmed.” A conventional chatbot might respond with a perfunctory “I’m sorry to hear that” and send a list of FAQs. This might be why 59% of chatbot users before 2020 felt that “the technologies have misunderstood the nuances of human dialogue.”

In contrast, an AI agent can pause, offer empathetic condolences, gently guide the member to relevant resources, and even help schedule an appointment with their doctor. This empathy, paired with personalization, drives better outcomes.

When people feel understood, they’re more likely to engage, follow through, and trust the system guiding them. In regulated industries that handle sensitive topics, simple task automation often fails because users abandon engagements that feel rigid, incompetent, or indifferent to their individual circumstances.

AI agents can listen, understand, and respond with compassion. This combination of contextual awareness and sentiment‑driven response is more than just a nice‑to‑have add-on—it’s foundational for building trust, maintaining engagement, and ensuring members navigating difficult moments get the personalized support they need.

Beyond Automation: Why Empathy Matters in Complex Conversations

Traditional automation excels at straightforward, rule‑based tasks but struggles when conversations turn sensitive. AI agents, by contrast, can detect emotional cues—analyzing tone, punctuation, word choice, conversation history, and more—and deliver supportive, context‑appropriate guidance.

This shift from transactional to relational interactions matters in regulated industries, where people may need help navigating housing assistance, substance-use treatment, or reproductive health concerns.

AI agents that are context-aware and emotionally intelligent can support these conversations by remaining neutral, non‑judgmental, and attuned to the user’s needs.

They also offer a level of accuracy and consistency that’s hard to match—helping ensure members receive timely, personalized guidance and reliable access to resources, which could lead to better, more trusted outcomes.

The Technology Under the Hood

Recent advances in large language models (LLMs) and transformer architectures (GPT‑style models) have been pivotal to enabling more natural, emotionally aware conversations between AI agents and users. Unlike early sentiment analysis tools that only classified text as positive or negative, modern LLMs predict word sequences across entire dialogues, effectively learning the subtleties of human expression.

Consider a scenario where a user types, “I just got laid off and need to talk to someone about my coverage.” An early-generation chatbot might respond with “I can help you with your benefits,” ignoring the user’s distress.

Today’s emotionally intelligent AI agent first acknowledges the emotional weight: “I’m sorry to hear that—losing a job can be really tough.” It then transitions into assistance: “Let’s review your coverage options together, and I can help you schedule a call if you'd like to speak with someone directly."

These advances bring two key strengths. First, contextual awareness means AI agents can track conversation history—remembering what a user mentioned in an earlier exchange and following up appropriately.
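To make that contextual awareness concrete, here is a minimal, hypothetical Python sketch of conversation memory, in which the agent extracts a fact from an early turn and references it later. The trigger phrase, storage format, and follow-up wording are all illustrative assumptions, not any vendor's implementation.

```python
# Hypothetical sketch of conversation memory: the agent extracts a fact in
# an early turn and follows up on it later. The trigger phrase, storage
# format, and follow-up wording are illustrative assumptions.
class ConversationMemory:
    def __init__(self):
        self.turns = []   # full history, oldest first
        self.facts = {}   # key details extracted from earlier turns

    def record(self, speaker, text):
        self.turns.append((speaker, text))
        if "laid off" in text.lower():
            self.facts["employment_change"] = True

    def follow_up(self):
        # Reference an earlier disclosure when composing the next reply.
        if self.facts.get("employment_change"):
            return ("Earlier you mentioned losing your job. Would you like "
                    "to review options for keeping your coverage?")
        return None
```

A production system would extract facts with a learned model rather than a hard-coded phrase, but the shape is the same: history plus extracted state, carried across turns.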

Second, built‑in sentiment sensitivity allows these models to move beyond simple positive versus negative tagging. By learning emotional patterns from real‑world conversations, these AI agents can recognize shifts in tone and tailor responses to match the user’s emotional state.
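As a deliberately simplified illustration of tone-matched responses, the toy sketch below routes a reply based on keyword cues. Real systems use learned sentiment models rather than keyword lists; every cue and canned phrase here is an assumption made for the example.

```python
# Toy sketch: route a reply based on detected emotional tone.
# The keyword lists and response prefixes are illustrative, not a model.
DISTRESS_CUES = {"lost", "overwhelmed", "laid off", "scared", "grieving"}
FRUSTRATION_CUES = {"useless", "ridiculous", "still broken"}

def detect_tone(message):
    text = message.lower()
    if any(cue in text for cue in DISTRESS_CUES):
        return "distress"
    if any(cue in text for cue in FRUSTRATION_CUES):
        return "frustration"
    return "neutral"

def compose_reply(message, task_reply):
    # Acknowledge the emotion first, then continue with the task.
    tone = detect_tone(message)
    if tone == "distress":
        return "I'm sorry to hear that. " + task_reply
    if tone == "frustration":
        return "I understand this has been frustrating. " + task_reply
    return task_reply
```

The design point is the ordering: the emotional acknowledgment precedes the transactional content, mirroring the "laid off" example above.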

Ethically responsible online platforms embed a robust framework of guardrails to ensure safe, compliant, and trustworthy AI interactions. In regulated environments, this includes proactive content filtering, privacy protections, and strict boundaries that prevent AI from offering unauthorized advice.

Sensitive topics are handled with predefined responses and escalated to human professionals when needed. These safeguards mitigate risk, reinforce user trust, and ensure automation remains accountable, ethical, and aligned with regulatory standards.

Navigating Challenges in Regulated Environments

For people to trust AI in regulated sectors, AI must do more than sound empathetic. It must be transparent, respect user boundaries, and know when to escalate to live experts. Robust safety layers mitigate risk and reinforce trust.

Empathy Subjectivity

Tone, cultural norms, and even punctuation can shift perception. Robust testing across demographics, languages, and use cases is critical. When agents detect confusion or frustration, escalation paths to live agents must be seamless, ensuring swift resolution and access to the appropriate level of human support when automated responses may fall short.
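One way such an escalation path could be wired up is sketched below: hand off to a live agent after a run of frustrated or confused turns. The signal names and the threshold are assumptions for the sketch, not a product rule.

```python
# Illustrative escalation policy: escalate to a human after a streak of
# frustrated or confused turns. Signals and threshold are assumptions.
FRUSTRATION_SIGNALS = {"frustration", "confusion"}

def should_escalate(turn_tones, threshold=2):
    """Return True after `threshold` consecutive frustrated/confused turns."""
    streak = 0
    for tone in turn_tones:
        if tone in FRUSTRATION_SIGNALS:
            streak += 1
            if streak >= threshold:
                return True
        else:
            streak = 0  # a calm turn resets the counter
    return False
```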

Regulatory Compliance and Transparency

Industries under strict oversight cannot allow hallucinations or unauthorized advice. Platforms must enforce transparent disclosures—ensuring virtual agents identify themselves as non-human—and embed compliance‑driven guardrails that block unapproved recommendations. Redirects to human experts should be fully logged, auditable, and aligned with applicable frameworks.

Guardrail Management

Guardrails must filter hate speech or explicit content while distinguishing between abusive language and expressions of frustration. When users use mild profanity to convey emotional distress, AI agents should recognize the intent without mirroring the language—responding appropriately and remaining within company guidelines and industry regulations.

Also, crisis‑intervention messaging—responding to instances of self‑harm, domestic violence, or substance abuse—must be flexible enough for organizations to tailor responses to their communities, connect people with local resources, and deliver support that is both empathetic and compliant with regulatory standards.
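The intent-versus-language distinction described above can be sketched as a small routing policy: mild profanity voiced in distress maps to empathy, while targeted abuse maps to a boundary-setting deflection. All word lists and policy labels here are invented for illustration and stand in for a real moderation model.

```python
# Sketch of intent-aware guardrails. Word lists and labels are assumptions,
# not a production moderation system.
MILD_PROFANITY = {"damn", "hell", "crap"}
TARGETED_ABUSE = {"idiot", "stupid bot", "hate you"}
DISTRESS_PHRASES = {"give up", "desperate", "can't cope"}

def moderate(message):
    text = message.lower()
    if any(term in text for term in TARGETED_ABUSE):
        return "deflect"      # restate boundaries, stay professional
    if any(term in text for term in MILD_PROFANITY | DISTRESS_PHRASES):
        return "empathize"    # acknowledge the emotion, don't mirror language
    return "proceed"          # continue the normal task flow
```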

Empathy as a Competitive Advantage

As regulated industries embrace AI agents, the conversation is shifting from evaluating their potential to implementing them at scale. Tomorrow’s leaders won’t just pilot emotion‑aware agents but embed empathy into every customer journey, from onboarding to crisis support.

By committing to this ongoing evolution, businesses can turn compliance requirements into opportunities for deeper connection and redefine what it means to serve customers in complex, regulated environments.

Regulated AI must engineer empathy in every interaction. When systems understand the emotional context (not just data points), they become partners rather than tools. But without vertical specialization and real-time guardrails, even the most well-intentioned AI agents can misstep.

The future belongs to agentic, emotionally intelligent platforms that can adapt on the fly, safeguard compliance, and lead with compassion when it matters most. Empathy, when operationalized safely, becomes more than a UX goal—it becomes a business advantage.

We list the best enterprise messaging platforms.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

People are fighting about whether Dyson vacuums are trash, but the arguments don't stack up – here's my take as a vacuum expert

TechRadar News - Tue, 08/12/2025 - 03:44

Vacuum cleaners divide opinion more than you might expect, and the brand that people seem to feel most strongly about is Dyson. Behind every diehard Dyson fan there are 10 more people ready to eagerly proclaim that they're the worst vacuums in the world.

At the weekend, designer Mike Smith proclaimed on X that Dyson vacuums were "not for serious vacuumers" and the ensuing thread went viral, with over 1,000 people piling in to air their vacuum views.

"My hot take is that Dyson vacuums are not for serious vacuumers. Battery is garbage, filter is garbage. Canister too small. Absolute joke of a cleaning tool." (August 10, 2025)

I manage the vacuum cleaner content for TechRadar, which includes reviewing vacs from many different brands and putting together our official best vacuum cleaner ranking. All of that means I spend far more time than the average person thinking about vacuum cleaners.

I'm neither wildly pro- nor anti-Dyson, and this discussion didn't sway me any further in either direction. What it did do is make me even more confident in my long-held belief that what most people actually have a problem with is not Dyson vacuums, but cordless stick vacuums in general.

Cordless stick vacuums are not the same as traditional upright vacuums or canister vacs. In some ways, they're worse. Providing strong suction requires a lot of power, and the bigger the battery the heavier the vacuum – so brands are constantly trying to balance whether to provide customers with longer runtimes or a lighter build.

A bigger dust cup means a vacuum that's bulkier and heavier, so there's another trade-off there in terms of how often you have to empty it. They also seem to be an inherently less robust type of cleaner – cordless stick vacs are expected to have a far shorter overall lifespan than other styles of vacuum.

(Image credit: Future)

In short, if you choose a cordless stick vacuum, you should expect limited runtimes on higher suction modes, canisters that need emptying regularly, and for it not to last forever. For those compromises, you get something you don't need to plug into the wall, and which you can easily use to vacuum up the stairs – or even on the ceiling – if you want to.

Of course, some cordless vacs perform much better than others, but broadly speaking you should expect those pros and cons to be true whatever model or brand you go for. Dyson stick vacs might not be for "serious" vacuuming, but boy are they good for convenient, comfortable vacuuming.

(Of course, the other element when it comes to Dyson is the price. I get into this more in my article exploring if Dyson vacuums are worth it, and I've also written about my experience of Shark vs Dyson vacuums, if you're interested in that comparison specifically.)

In the thread, the name that crops up again and again from the opposing chorus is Miele. This brand is synonymous with canister vacuums, so it's not a direct comparison. One of the very best vacuums I've used in terms of outright suction power remains the 25+ year-old upright that used to belong to my Nana and now lives in my parents' house. But it weighs a ton and takes up a load of space, so when it comes to cleaning my own flat, I'd reach for a Dyson (or similar) every time.

You might also like...
Categories: Technology

Israeli airstrike kills a prominent Al Jazeera journalist and colleagues in Gaza

NPR News Headlines - Tue, 08/12/2025 - 03:35

Al Jazeera's Anas al-Sharif and five of his colleagues at the network were killed in an Israeli airstrike targeting Gaza's most recognized television journalist.

(Image credit: Anas Baba)

Categories: News

What's at stake as Trump prepares to meet Putin in Alaska?

NPR News Headlines - Tue, 08/12/2025 - 03:33

Trump said Ukrainian President Volodymyr Zelenskyy was unlikely to be included in talks he described as a "feel out meeting" to better understand Russia's demands for ending its war in Ukraine.

(Image credit: Aurelien Morissard, left and center, Pavel Bednyakov, right)

Categories: News

I am an AI expert and here's why synthetic threats demand synthetic resilience

TechRadar News - Tue, 08/12/2025 - 02:51

Artificial Intelligence (AI) is rapidly reshaping the landscape of fraud prevention, creating new opportunities for defense as well as new avenues for deception.

Across industries, AI has become a double-edged sword. On one hand, it enables more sophisticated fraud detection, but on the other, it is being weaponized by threat actors to exploit controls, create synthetic identities and launch hyper-realistic attacks.

Fraud prevention is vital in sectors handling high volumes of sensitive transactions and digital identities. In financial services, for example, it's not just about protecting capital - regulatory compliance and customer trust are at stake.

Similar cybersecurity pressures are growing in telecoms and tech industries like SaaS, ecommerce and cloud infrastructure, where threats like SIM swapping, API abuse and synthetic users can cause serious disruption.

Fraud has already shifted from a risk to a core business challenge - with 58 per cent of key decision-makers in large UK businesses now viewing it as a ‘serious threat’, according to a survey conducted in 2024.

The rise of synthetic threats

Synthetic fraud refers to attacks that leverage fabricated data, AI-generated content or manipulated digital identities. These aren’t new concepts, but the capability and accessibility of generative AI tools have dramatically lowered the barrier to entry.

A major threat is the creation of synthetic identities: combinations of real and fictitious information used to open accounts, bypass Know-Your-Customer (KYC) checks or access services.

Deepfakes are also being used to impersonate executives during video calls or in phishing attempts. One recent example involved attackers using AI to mimic a CEO’s voice and authorize a fraudulent transfer. These tactics are difficult to detect in fast-moving digital environments without advanced, real-time verification methods.

Data silos only exacerbate the problem. In many tech organizations, different departments rely on disconnected tools or platforms. One team may use AI for authentication while another still relies on legacy systems, and it is these blind spots which are easily exploited by AI-driven fraud.

AI as a defense

While AI enables fraud, it also offers powerful tools for defense if implemented strategically. At its best, AI can process vast volumes of data in real time, detect suspicious patterns and adapt as threats evolve. But this depends on effective integration, governance and oversight.

One common weakness lies in fragmented systems. Fraud prevention efforts often operate in silos across compliance, cybersecurity and customer teams. To build true resilience, organizations must align AI strategies across departments. Shared data lakes, or secure APIs, can enable integrated models with a holistic view of user behavior.

Synthetic data, often associated with fraud, can also play a role in defense. Organizations can use anonymized, realistic data to simulate rare fraud scenarios and train models without compromising customer privacy. This approach helps test defenses against edge cases not found in historical data.
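As a hedged sketch of that idea, the snippet below generates synthetic records simulating one rare fraud pattern: a burst of small "card-testing" charges. The field names, value ranges, and the pattern itself are illustrative assumptions; real synthetic-data pipelines model distributions learned from production data.

```python
# Generate synthetic records simulating a card-testing burst for model
# training. Field names, ranges, and the pattern are assumptions.
import random

def synthetic_card_testing_burst(n=20, seed=None):
    rng = random.Random(seed)
    t = 0.0
    records = []
    for _ in range(n):
        t += rng.uniform(0.5, 3.0)  # seconds between probe attempts
        records.append({
            "account_id": "SYN-0001",                     # synthetic identity
            "amount": round(rng.uniform(0.50, 2.00), 2),  # small probe amounts
            "timestamp": round(t, 2),
            "label": "fraud",                             # ground truth for training
        })
    return records
```

Because every record is labeled and fabricated, the edge case can be rehearsed at any volume without touching customer data.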

Fraud systems must also be adaptive. Static rules and rarely updated models can’t keep pace with AI-powered fraud - real-time, continuously learning systems are now essential. Many companies are adopting behavioral biometrics, where AI monitors how users interact with devices, such as typing rhythm or mouse movement, to detect anomalies, even when credentials appear valid.
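A toy version of the typing-rhythm idea: compare a session's mean inter-key interval (in milliseconds) against the user's baseline and flag large deviations. The z-score threshold and the single-feature design are simplifying assumptions; real behavioral-biometric systems combine many signals.

```python
# Toy behavioral-biometrics check on typing rhythm. The z-score threshold
# and single-feature design are simplifying assumptions.
from statistics import mean, stdev

def is_anomalous(baseline_intervals, session_intervals, z_limit=3.0):
    mu = mean(baseline_intervals)
    sigma = stdev(baseline_intervals)  # sample standard deviation
    if sigma == 0:
        return mean(session_intervals) != mu
    z = abs(mean(session_intervals) - mu) / sigma
    return z > z_limit
```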

Explainability is another cornerstone of responsible AI use: it is essential to understand why a system has flagged or blocked activity. Explainable AI (XAI) frameworks make decisions transparent, supporting trust and regulatory compliance and ensuring AI is not just effective but also accountable.
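In the simplest case, explainability can mean reporting each feature's contribution to a score alongside the decision, as in the minimal sketch below. The features, weights, and threshold are invented for illustration; production XAI typically attributes contributions in far richer models.

```python
# Minimal explainability sketch: a linear fraud score that reports which
# flags drove the decision. Features, weights, threshold are invented.
WEIGHTS = {"new_device": 1.2, "night_login": 0.4, "high_amount": 0.9, "vpn": 0.7}
THRESHOLD = 1.5

def score_with_explanation(active_features):
    contributions = {f: WEIGHTS[f] for f in active_features if f in WEIGHTS}
    total = sum(contributions.values())
    flagged = total > THRESHOLD
    # Rank features by contribution so a reviewer sees *why* it fired.
    ranked = sorted(contributions, key=contributions.get, reverse=True)
    return flagged, total, ranked
```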

Industry collaboration

AI-enhanced fraud doesn’t respect organizational boundaries, and as a result, cross-industry collaboration is becoming increasingly important. While sectors like financial services have long benefited from information-sharing frameworks like ISACs, similar initiatives are emerging in the broader tech ecosystem.

Cloud providers are beginning to share indicators of compromised credentials or coordinated malicious activity with clients. SaaS and cybersecurity vendors are also forming consortiums and joint research initiatives to accelerate detection and improve response times across the board.

Despite its power, AI is not a silver bullet and organizations which rely solely on automation risk missing subtle or novel fraud techniques. Effective fraud strategies should include regular model audits, scenario testing and red-teaming exercises (where ethical hackers conduct simulated cyberattacks on an organization to test cybersecurity effectiveness).

Human analysts bring domain knowledge and judgement that can refine model performance. Training teams to work alongside AI is key to building synthetic resilience, combining human insight with machine speed and scale.

Resilience is a system, not a feature

As AI transforms both the tools of fraud and the methods of prevention, organizations must redefine resilience. It’s no longer about isolated tools, but about creating a connected, adaptive, and explainable defense ecosystem.

For many organizations, that means integrating AI across business units, embracing synthetic data, prioritizing explainability, and embedding continuous improvement into fraud models. While financial services may have pioneered many of these practices, the broader tech industry now faces the same level of sophistication in fraud, and must respond accordingly.

In this new era, synthetic resilience is not a static end goal but a capability to be constantly cultivated. Those who succeed will not only defend their businesses more effectively but help define the future of secure, AI-enabled digital trust.

We list the best identity management solutions.


Categories: Technology
