
TechRadar News

All the latest content from the TechRadar team

Welcome to the era of empathic Artificial Intelligence

Tue, 08/12/2025 - 03:47

Imagine a health plan member interacting with their insurer’s virtual assistant, typing, “I just lost my mom and feel overwhelmed.” A conventional chatbot might respond with a perfunctory “I’m sorry to hear that” and send a list of FAQs. This might be why 59% of chatbot users before 2020 felt that “the technologies have misunderstood the nuances of human dialogue.”

In contrast, an AI agent can pause, offer empathetic condolences, gently guide the member to relevant resources, and even help schedule an appointment with their doctor. This empathy, paired with personalization, drives better outcomes.

When people feel understood, they’re more likely to engage, follow through, and trust the system guiding them. In regulated industries that handle sensitive topics, simple task automation often fails: users abandon engagements that feel rigid, incompetent, or indifferent to their individual circumstances.

AI agents can listen, understand, and respond with compassion. This combination of contextual awareness and sentiment‑driven response is more than just a nice‑to‑have add-on—it’s foundational for building trust, maintaining engagement, and ensuring members navigating difficult moments get the personalized support they need.

Beyond Automation: Why Empathy Matters in Complex Conversations

Traditional automation excels at straightforward, rule‑based tasks but struggles when conversations turn sensitive. AI agents, by contrast, can detect emotional cues—analyzing tone, punctuation, word choice, conversation history, and more—and deliver supportive, context‑appropriate guidance.

This shift from transactional to relational interactions matters in regulated industries, where people may need help navigating housing assistance, substance-use treatment, or reproductive health concerns.

AI agents that are context-aware and emotionally intelligent can support these conversations by remaining neutral, non‑judgmental, and attuned to the user’s needs.

They also offer a level of accuracy and consistency that’s hard to match—helping ensure members receive timely, personalized guidance and reliable access to resources, which could lead to better, more trusted outcomes.

The Technology Under the Hood

Recent advances in large language models (LLMs) and transformer architectures (GPT‑style models) have been pivotal to enabling more natural, emotionally aware conversations between AI agents and users. Unlike early sentiment analysis tools that only classified text as positive or negative, modern LLMs predict word sequences across entire dialogues, effectively learning the subtleties of human expression.

Consider a scenario where a user types, “I just got laid off and need to talk to someone about my coverage.” An early-generation chatbot might respond with “I can help you with your benefits,” ignoring the user’s distress.

Today’s emotionally intelligent AI agent first acknowledges the emotional weight: “I’m sorry to hear that—losing a job can be really tough.” It then transitions into assistance: “Let’s review your coverage options together, and I can help you schedule a call if you'd like to speak with someone directly."

These advances bring two key strengths. First, contextual awareness means AI agents can track conversation history—remembering what a user mentioned in an earlier exchange and following up appropriately.

Second, built‑in sentiment sensitivity allows these models to move beyond simple positive versus negative tagging. By learning emotional patterns from real‑world conversations, these AI agents can recognize shifts in tone and tailor responses to match the user’s emotional state.
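As a rough illustration of the idea only (the emotion cues, keyword lists, and response templates below are hypothetical placeholders, not any vendor's actual model, which would use a learned classifier), a sentiment-sensitive agent can be sketched as classifying the user's emotional state before composing its task reply:

```python
# Minimal sketch of sentiment-aware response selection.
# Cue lists and templates are illustrative placeholders only.

EMOTION_CUES = {
    "grief": ["lost my", "passed away", "funeral"],
    "distress": ["laid off", "overwhelmed", "can't cope"],
}

ACKNOWLEDGEMENTS = {
    "grief": "I'm so sorry for your loss.",
    "distress": "I'm sorry to hear that - that sounds really difficult.",
    "neutral": "",
}

def detect_emotion(message: str) -> str:
    """Return the first emotion whose cue phrases appear in the message."""
    text = message.lower()
    for emotion, cues in EMOTION_CUES.items():
        if any(cue in text for cue in cues):
            return emotion
    return "neutral"

def respond(message: str, task_reply: str) -> str:
    """Prefix the task-oriented reply with an empathetic acknowledgement."""
    ack = ACKNOWLEDGEMENTS[detect_emotion(message)]
    return f"{ack} {task_reply}".strip()

print(respond("I just got laid off and need to talk about my coverage",
              "Let's review your coverage options together."))
```

The point of the sketch is the ordering: acknowledge the emotional state first, then move to the task, mirroring the layoff example above.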

Ethically responsible online platforms embed a robust framework of guardrails to ensure safe, compliant, and trustworthy AI interactions. In regulated environments, this includes proactive content filtering, privacy protections, and strict boundaries that prevent AI from offering unauthorized advice.

Sensitive topics are handled with predefined responses and escalated to human professionals when needed. These safeguards mitigate risk, reinforce user trust, and ensure automation remains accountable, ethical, and aligned with regulatory standards.

Navigating Challenges in Regulated Environments

For people to trust AI in regulated sectors, AI must do more than sound empathetic. It must be transparent, respect user boundaries, and know when to escalate to live experts. Robust safety layers mitigate risk and reinforce trust.

Empathy Subjectivity

Tone, cultural norms, and even punctuation can shift perception. Robust testing across demographics, languages, and use cases is critical. When agents detect confusion or frustration, escalation paths to live agents must be seamless, ensuring swift resolution and access to the appropriate level of human support when automated responses may fall short.

Regulatory Compliance and Transparency

Industries under strict oversight cannot allow hallucinations or unauthorized advice. Platforms must enforce transparent disclosures—ensuring virtual agents identify themselves as non-human—and embed compliance‑driven guardrails that block unapproved recommendations. Redirects to human experts should be fully logged, auditable, and aligned with applicable frameworks.

Guardrail Management

Guardrails must filter hate speech or explicit content while distinguishing between abusive language and expressions of frustration. When users use mild profanity to convey emotional distress, AI agents should recognize the intent without mirroring the language—responding appropriately and remaining within company guidelines and industry regulations.

Also, crisis‑intervention messaging—responding to instances of self‑harm, domestic violence, or substance abuse—must be flexible enough for organizations to tailor responses to their communities, connect people with local resources, and deliver support that is both empathetic and compliant with regulatory standards.

Empathy as a Competitive Advantage

As regulated industries embrace AI agents, the conversation is shifting from evaluating their potential to implementing them at scale. Tomorrow’s leaders won’t just pilot emotion‑aware agents but embed empathy into every customer journey, from onboarding to crisis support.

By committing to this ongoing evolution, businesses can turn compliance requirements into opportunities for deeper connection and redefine what it means to serve customers in complex, regulated environments.

Regulated AI must engineer empathy in every interaction. When systems understand the emotional context (not just data points), they become partners rather than tools. But without vertical specialization and real-time guardrails, even the most well-intentioned AI agents can misstep.

The future belongs to agentic, emotionally intelligent platforms that can adapt on the fly, safeguard compliance, and lead with compassion when it matters most. Empathy, when operationalized safely, becomes more than a UX goal—it becomes a business advantage.

We list the best enterprise messaging platform.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

People are fighting about whether Dyson vacuums are trash, but the arguments don't stack up – here's my take as a vacuum expert

Tue, 08/12/2025 - 03:44

Vacuum cleaners divide opinion more than you might expect, and the brand that people seem to feel most strongly about is Dyson. Behind every diehard Dyson fan there are 10 more people ready to eagerly proclaim that they're the worst vacuums in the world.

At the weekend, designer Mike Smith proclaimed on X that Dyson vacuums were "not for serious vacuumers" and the ensuing thread went viral, with over 1,000 people piling in to air their vacuum views.

"My hot take is that Dyson vacuums are not for serious vacuumers. Battery is garbage, filter is garbage. Canister too small. Absolute joke of a cleaning tool." (August 10, 2025)

I manage the vacuum cleaner content for TechRadar, which includes reviewing vacs from many different brands and putting together our official best vacuum cleaner ranking. All of that means I spend far more time than the average person thinking about vacuum cleaners.

I'm neither wildly pro- nor anti-Dyson, and this discussion didn't sway me any further in either direction. What it did do is make me even more confident in my long-held belief that what most people actually have a problem with is not Dyson vacuums, but cordless stick vacuums in general.

Cordless stick vacuums are not the same as traditional upright vacuums or canister vacs. In some ways, they're worse. Providing strong suction requires a lot of power, and the bigger the battery the heavier the vacuum – so brands are constantly trying to balance whether to provide customers with longer runtimes or a lighter build.

A bigger dust cup means a vacuum that's bulkier and heavier, so there's another trade-off there in terms of how often you have to empty it. They also seem to be an inherently less robust type of cleaner – cordless stick vacs are expected to have a far shorter overall lifespan than other styles of vacuum.

(Image credit: Future)

In short, if you choose a cordless stick vacuum, you should expect limited runtimes on higher suction modes, canisters that need emptying regularly, and for it not to last forever. For those compromises, you get something you don't need to plug into the wall, and which you can easily use to vacuum up the stairs – or even on the ceiling – if you want to.

Of course, some cordless vacs perform much better than others, but broadly speaking you should expect those pros and cons to be true whatever model or brand you go for. Dyson stick vacs might not be for "serious" vacuuming, but boy are they good for convenient, comfortable vacuuming.

(Of course, the other element when it comes to Dyson is the price. I get into this more in my article exploring if Dyson vacuums are worth it, and I've also written about my experience of Shark vs Dyson vacuums, if you're interested in that comparison specifically.)

In the thread, the name that crops up again and again from the opposing chorus is Miele. This brand is synonymous with canister vacuums, so it's not a direct comparison. One of the very best vacuums I've used in terms of outright suction power remains the 25+ year-old upright that used to belong to my Nana and now lives in my parents' house. But it weighs a ton and takes up a load of space, so when it comes to cleaning my own flat, I'd reach for a Dyson (or similar) every time.


I am an AI expert and here's why synthetic threats demand synthetic resilience

Tue, 08/12/2025 - 02:51

Artificial Intelligence (AI) is rapidly reshaping the landscape of fraud prevention, creating new opportunities for defense as well as new avenues for deception.

Across industries, AI has become a double-edged sword. On one hand, it enables more sophisticated fraud detection, but on the other, it is being weaponized by threat actors to exploit controls, create synthetic identities and launch hyper-realistic attacks.

Fraud prevention is vital in sectors handling high volumes of sensitive transactions and digital identities. In financial services, for example, it's not just about protecting capital - regulatory compliance and customer trust are at stake.

Similar cybersecurity pressures are growing in telecoms and tech industries like SaaS, ecommerce and cloud infrastructure, where threats like SIM swapping, API abuse and synthetic users can cause serious disruption.

Fraud has already shifted from a risk to a core business challenge - with 58 per cent of key decision-makers in large UK businesses now viewing it as a ‘serious threat’, according to a survey conducted in 2024.

The rise of synthetic threats

Synthetic fraud refers to attacks that leverage fabricated data, AI-generated content or manipulated digital identities. These aren’t new concepts, but the capability and accessibility of generative AI tools have dramatically lowered the barrier to entry.

A major threat is the creation of synthetic identities: combinations of real and fictitious information used to open accounts, bypass Know-Your-Customer (KYC) checks or access services.

Deepfakes are also being used to impersonate executives during video calls or in phishing attempts. One recent example involved attackers using AI to mimic a CEO’s voice and authorize a fraudulent transfer. These tactics are difficult to detect in fast-moving digital environments without advanced, real-time verification methods.

Data silos only exacerbate the problem. In many tech organizations, different departments rely on disconnected tools or platforms. One team may use AI for authentication while another still relies on legacy systems, and it is these blind spots which are easily exploited by AI-driven fraud.

AI as a defense

While AI enables fraud, it also offers powerful tools for defense if implemented strategically. At its best, AI can process vast volumes of data in real time, detect suspicious patterns and adapt as threats evolve. But this depends on effective integration, governance and oversight.

One common weakness lies in fragmented systems. Fraud prevention efforts often operate in silos across compliance, cybersecurity and customer teams. To build true resilience, organizations must align AI strategies across departments. Shared data lakes, or secure APIs, can enable integrated models with a holistic view of user behavior.

Synthetic data, often associated with fraud, can also play a role in defense. Organizations can use anonymized, realistic data to simulate rare fraud scenarios and train models without compromising customer privacy. This approach helps test defenses against edge cases not found in historical data.
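A minimal sketch of that idea, with field names and distributions invented purely for illustration rather than drawn from any real dataset, might generate labeled synthetic transactions to supplement rare fraud examples:

```python
import random

# Sketch: generate synthetic labeled transactions to augment rare fraud
# cases. The fields and distributions are illustrative assumptions.

def synthetic_transaction(fraud: bool, rng: random.Random) -> dict:
    # Fraudulent rows skew toward large amounts and small-hours activity.
    amount = rng.uniform(2000, 9000) if fraud else rng.uniform(5, 300)
    hour = rng.choice([2, 3, 4]) if fraud else rng.randint(8, 22)
    return {"amount": round(amount, 2), "hour": hour, "label": int(fraud)}

def make_dataset(n: int, fraud_rate: float = 0.3, seed: int = 0) -> list[dict]:
    """Build a dataset with a deliberately inflated fraud rate, so a
    model sees far more fraud examples than history would provide."""
    rng = random.Random(seed)
    return [synthetic_transaction(rng.random() < fraud_rate, rng)
            for _ in range(n)]

data = make_dataset(1000)
print(sum(row["label"] for row in data), "synthetic fraud rows")
```

Because no row corresponds to a real customer, such data can be shared and used for training without privacy exposure.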

Fraud systems must also be adaptive. Static rules and rarely updated models can’t keep pace with AI-powered fraud - real-time, continuously learning systems are now essential. Many companies are adopting behavioral biometrics, where AI monitors how users interact with devices, such as typing rhythm or mouse movement, to detect anomalies, even when credentials appear valid.
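A toy version of the typing-rhythm idea, assuming a per-user baseline of inter-keystroke intervals and an illustrative three-sigma threshold (not an industry standard), might look like this:

```python
from statistics import mean, stdev

# Sketch: flag a session whose inter-keystroke timing deviates sharply
# from a user's enrolled baseline. The 3-sigma threshold is an
# illustrative choice; real systems combine many behavioral signals.

def is_anomalous(baseline_ms: list[float], session_ms: list[float],
                 z_threshold: float = 3.0) -> bool:
    """Compare the session's mean keystroke interval to the baseline."""
    mu, sigma = mean(baseline_ms), stdev(baseline_ms)
    z = abs(mean(session_ms) - mu) / sigma
    return z > z_threshold

baseline = [110, 120, 105, 130, 115, 125, 118, 112]  # user's usual rhythm (ms)
print(is_anomalous(baseline, [118, 122, 111]))  # consistent with baseline
print(is_anomalous(baseline, [45, 40, 50]))     # far faster, bot-like pacing
```

Even when credentials are valid, a session typed at a machine-like cadence would trip this kind of check and could trigger step-up verification.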

Explainability is another cornerstone of responsible AI use: it is essential to understand why a system has flagged or blocked activity. Explainable AI (XAI) frameworks help make decisions transparent, supporting trust and regulatory compliance and ensuring AI is not just effective, but also accountable.

Industry collaboration

AI-enhanced fraud doesn’t respect organizational boundaries, and as a result, cross-industry collaboration is becoming increasingly important. While sectors like financial services have long benefited from information-sharing frameworks like ISACs, similar initiatives are emerging in the broader tech ecosystem.

Cloud providers are beginning to share indicators of compromised credentials or coordinated malicious activity with clients. SaaS and cybersecurity vendors are also forming consortiums and joint research initiatives to accelerate detection and improve response times across the board.

Despite its power, AI is not a silver bullet and organizations which rely solely on automation risk missing subtle or novel fraud techniques. Effective fraud strategies should include regular model audits, scenario testing and red-teaming exercises (where ethical hackers conduct simulated cyberattacks on an organization to test cybersecurity effectiveness).

Human analysts bring domain knowledge and judgement that can refine model performance. Training teams to work alongside AI is key to building synthetic resilience, combining human insight with machine speed and scale.

Resilience is a system, not a feature

As AI transforms both the tools of fraud and the methods of prevention, organizations must redefine resilience. It’s no longer about isolated tools, but about creating a connected, adaptive, and explainable defense ecosystem.

For many organizations, that means integrating AI across business units, embracing synthetic data, prioritizing explainability, and embedding continuous improvement into fraud models. While financial services may have pioneered many of these practices, the broader tech industry now faces the same level of sophistication in fraud, and must respond accordingly.

In this new era, synthetic resilience is not a static end goal but a capability to be constantly cultivated. Those who succeed will not only defend their businesses more effectively but help define the future of secure, AI-enabled digital trust.

We list the best identity management solutions.


The evolution of smart data capture

Tue, 08/12/2025 - 01:53

The landscape of smart data capture software is undergoing a significant transformation, with advancements that can help businesses build long-term resilience against disruptions like trade tariffs, labor shortages, and volatile demand.

No longer confined to handheld computers and mobile devices, the technology is embracing a new batch of hybrid data capture methods that include fixed cameras, drones, and wearables.

If you aren’t familiar with smart data capture, it is the ability to capture data intelligently from barcodes, text, IDs, and objects. It enables real-time decision-making, engagement, and workflow automation at scale across industries such as retail, supply chain, logistics, travel, and healthcare.

The advancements it’s currently experiencing go beyond technological novelty; they are redefining how businesses operate, driving ROI, enhancing customer experience, and streamlining operational workflows. Let’s explore how:

More than just smartphones

Traditionally, smart data capture relied heavily on smartphones and handheld computers, devices that both captured data and facilitated user action. With advancements in technology, the device landscape is expanding. Wearables like smart glasses and headsets, fixed cameras, drones, and even robots are becoming more commonplace, each with its own value.

This diversification leads to the distinction of devices that purely ‘capture’ data versus those that can ‘act’ on it too. For example, stationary cameras or drones capture data from the real world and then feed it into a system of record to be aggregated with other data.

Other devices — often mobile or wearable — can capture data and empower users to act on that information instantly, such as a store associate who scans a shelf and can instantly be informed of a pricing error on a particular item. Depending on factors such as the frequency of data collected, these devices can allow enterprises to tailor a data capture strategy to their needs.

Practical innovations with real ROI

In a market saturated with emerging technologies, it's easy to get caught up in the hype of the next big thing. However, not all innovations are ready for prime time, and many fail to deliver a tangible return on investment, especially at scale. The key for businesses is to focus on practical, easy-to-implement solutions that leverage existing technologies and IT infrastructure to enhance workflows rather than disrupt them.

An illustrative example of this evolution is the increasing use of fixed cameras in conjunction with mobile devices for shelf auditing and monitoring in retail environments. Retailers are deploying mobile devices and fixed cameras to monitor shelves in near real-time and identify out-of-stock items, pricing errors, and planogram discrepancies, freeing up store associates’ time and increasing revenue — game-changing capabilities in the current volatile trade environment, which triggers frequent price changes and inventory challenges.

This hybrid shelf management approach allows businesses to scale operations no matter the store format: retailers can easily pilot the solution using their existing mobile devices with minimal upfront investment and assess all the expected ROI and benefits before committing to full-scale implementation.

The combination also enables further operational efficiency, with fixed cameras providing continuous and fully automated shelf monitoring in high-footfall areas, while mobile devices can handle lower-frequency monitoring in less-frequented aisles.

This is how a leading European grocery chain increased revenue by 2% in just six months — an enormous uplift in a tight-margin vertical like grocery.

Multi-device and multi-signal systems

An important aspect of this data capture evolution is the seamless integration of all these various devices and technologies. User interfaces are being developed to facilitate multi-device interactions, ensuring that data captured by one system can be acted upon through another.

For example, fixed cameras might continuously monitor inventory levels, with alerts to replenish specific low-stock items sent directly to a worker's wearable device for immediate and hands-free action.

And speaking of hands-free operation: gesture recognition and voice input are also becoming increasingly important, especially for wearable devices lacking traditional touchscreens. Advancing these technologies would enable workers to interact with items naturally and efficiently.

Adaptive user interfaces also play a vital role, ensuring consistent experiences across different devices and form factors. Whether using a smartphone, tablet, or digital eyewear, the user interface should adapt to provide the necessary functionality without a steep learning curve; otherwise, it may negatively impact the adoption rate of the data capture solution.

Recognizing the benefits, a large US grocer implemented a pre-built adaptive UI to bring top-performing scanning capabilities to its existing apps across 100 stores in just 90 days.

The co-pilot system

As the volume of data increases, so does the potential for information overload. In some cases, systems can generate thousands of alerts daily, overwhelming staff and hindering productivity. To combat this, businesses are adopting so-called co-pilot systems — a combination of devices and advanced smart data capture that can guide workers to prioritize ROI-optimizing tasks.

This combination leverages machine learning to analyze sales numbers, inventory levels, and other critical metrics, providing frontline workers with actionable insights. By focusing on high-priority tasks, employees can work more efficiently without sifting through endless lists of alerts.
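A highly simplified sketch of that prioritization logic, with severity weights and alert fields invented for illustration, could score each shelf alert by estimated daily revenue at risk and surface the highest-value tasks first:

```python
# Sketch: rank frontline shelf alerts by estimated revenue impact so
# workers see the highest-value tasks first. Weights are illustrative.

def priority(alert: dict) -> float:
    """Estimated daily revenue at risk: units sold per day times price,
    weighted by how severe the shelf issue is."""
    severity = {"out_of_stock": 1.0, "price_error": 0.6, "misplaced": 0.3}
    return alert["daily_units"] * alert["price"] * severity[alert["issue"]]

alerts = [
    {"sku": "A", "issue": "misplaced", "daily_units": 40, "price": 2.0},
    {"sku": "B", "issue": "out_of_stock", "daily_units": 25, "price": 3.5},
    {"sku": "C", "issue": "price_error", "daily_units": 60, "price": 1.2},
]

for alert in sorted(alerts, key=priority, reverse=True):
    print(alert["sku"], round(priority(alert), 2))
```

Instead of a flat list of thousands of alerts, the worker's device would show the out-of-stock item first because it carries the largest estimated loss.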

Preparing for the future

As the smart data capture landscape continues to evolve and disruption becomes the “new normal”, businesses must ensure their technology stacks are flexible, adaptable, and scalable.

Supporting various devices, integrating multiple data signals, and providing clear task prioritization are essential for staying competitive in an increasingly complex, changeable and data-driven market.

By embracing hybrid smart data capture device strategies, businesses can optimize processes, enhance user experiences, and make informed decisions based on real-time data.

The convergence of mobile devices, fixed cameras, wearables, drones, and advanced user interfaces represents not just an evolution in technology but a revolution in how businesses operate. And in a world where data is king, those who capture it effectively — and act on it intelligently — will lock in higher margins today and lead the way tomorrow.

We've listed the best ERP software.


I learned all about cheese with Gemini's Guided Learning feature, and it was so easy, I’m thinking of making my own cheese

Mon, 08/11/2025 - 22:00

Google Gemini introduced a new feature aimed at education called Guided Learning this month. The idea is to teach you something through question-centered conversation instead of a lecture.

When you ask it to teach you something, it breaks the topic down and starts asking you questions about it. Based on your answers, it explains more details and asks another question. The feature provides visuals and quizzes, and even embeds YouTube videos to help you absorb knowledge.

As a test, I asked Gemini's Socratic tutor to teach me all about cheese. It started by asking me about what I think is in cheese, clarifying my somewhat vague answer with more details, and then asking if I knew how those ingredients become cheese. Soon, I was in a full-blown cheese seminar. For every answer I gave, Gemini came back with more details or, in a gentle way, told me I was wrong.

The AI then got into cheese history. It framed the history as a story of traveling herders, clay pots, ancient salt, and Egyptian tombs with cheese residue. It showed a visual timeline and said, “Which of these surprises you most?” I said the tombs did, and it said, “Right? They found cheese in a tomb and it had survived.” Which is horrifying and also makes me respect cheese on a deeper level.

In about 15 minutes, I knew all about curds and whey, the history of a few regional cheese traditions, and even how to pick out the best examples of different cheeses. I could see photos in some cases and a video tour of a cellar full of expensive wheels of cheese in France. The AI quizzed me when I asked it to make sure I was getting it, and I scored a ten out of ten.

(Image credit: Gemini screenshots)

Cheesemonger AI

It didn’t feel like studying, exactly. More like falling into a conversation where the other person knows everything about dairy and is excited to bring you along for the ride. After learning about casein micelles, starter cultures, and cutting the curd, Gemini asked me if I wanted to learn how to make cheese.

I said sure, and it guided me through the process of making ricotta, including pictures to help show what it should look like at each step.

(Image credit: Gemini screenshots)

By the time I was done with that part of the conversation, I felt like I’d taken a mini‑course in cheesemaking. I'm not sure I am ready to fill an entire cheeseboard or age a wheel of gruyère in my basement.

Still, I think making ricotta or maybe paneer would be a fun activity in the next few weeks. And I can show off a mild, wobbly ball of dairy pride thanks to learning through questioning and, true to the feature's name, being guided to an education.


Baffled by ChatGPT and Copilot? It might not be your fault - report flags the key skills needed to get the most out of AI

Mon, 08/11/2025 - 20:03
  • Report claims AI adoption depends on critical human abilities
  • Ethics, adaptability, and audience-specific communication all named
  • The skills gap in AI workplaces is as much human as it is technical

As AI tools become more and more embedded in our everyday work, new research claims the challenge of not getting the best out of them may not lie solely with the technology.

A report from Multiverse has identified thirteen core human skillsets which could determine whether companies fully realize AI’s potential.

The study warns without deliberate attention to these capabilities, investment in AI writer systems, LLM applications, and other AI tools could fall short of expectations.

Critical thinking under pressure

The Multiverse study draws from observation of AI users at varying experience levels, from beginners to experts, employing methods such as the Think Aloud Protocol Analysis.

Participants verbalised their thought processes while using AI to complete real-world tasks.

From this, researchers built a framework grouping the identified skills into four categories: cognitive skills, responsible AI skills, self-management, and communication skills.

Among the cognitive abilities, analytical reasoning, creativity, and systems thinking were found to be essential for evaluating AI outputs, pushing innovation, and predicting AI responses.

Responsible AI skills included ethics, such as spotting bias in outputs, and cultural sensitivity to address geographic or social context gaps.

Self-management covered adaptability, curiosity, detail orientation, and determination, traits that influence how people refine their AI interactions.

Communication skills included tailoring AI-generated outputs for audience expectations, engaging empathetically with AI as a thought partner, and exchanging feedback to improve performance.

Reports from academic institutions, including MIT, have raised concerns that reliance on generative AI can reduce critical thinking, a phenomenon linked to “cognitive offloading.”

This is the process where people delegate mental effort to machines, risking erosion of analytical habits.

While AI tools can process vast amounts of information at speed, the research suggests they cannot replace the nuanced reasoning and ethical judgement that humans contribute.

The Multiverse researchers note that companies focusing solely on technical training may overlook the “soft skills” required for effective collaboration with AI.

Leaders may assume their AI tool investments address a technology gap when in reality, they face a combined human-technology challenge.

The study refrains from claiming AI inevitably weakens human cognition; instead, it argues the nature of cognitive work is shifting, with less emphasis on memorising facts and more on knowing how to access, interpret, and verify information.


One of my favorite iPhone features arrives on the Mac with Tahoe – and I can’t stop using it

Mon, 08/11/2025 - 19:00

While the new ‘Liquid Glass’ look and a way more powerful Spotlight might be the leading features of macOS Tahoe 26, I’ve found that bringing over a much-loved iPhone feature has proven to be the highlight after weeks of testing.

Live Activities steal the show on the iPhone, thanks to their glanceability and effortless way of highlighting key info, whether it’s from a first or third-party app. Some of my favorites are:

  • Flighty displays flight tracking details in real-time, for myself, family, or friends
  • Airlines like United show my seat, a countdown for boarding, or even baggage claim
  • Rideshare apps tell you what kind of car your driver is arriving in
  • Apple Sports displays your favorite teams' live scores in real-time with the game

Now, all of this is arriving on the Mac – right at the top navigation bar, near the right-hand side. They appear when your iPhone is nearby, signed into the same Apple Account, and mirror the same Live Activities you’d see on your phone. It’s a simple but powerful addition.

Considering Apple brought iPhone Mirroring to the Mac in 2024, this 2025 follow-up isn’t surprising. But it’s exactly the kind of small feature that makes a big difference. I’ve loved being able to check a score, track a flight, or see my live position on a plane – without fishing for my phone.

(Image credit: Future/Jacob Krol)

I’ve used it plenty at my desk, but to me, it truly shines in Economy class. If you’ve ever tried balancing an iPhone and a MacBook Pro – or even a MacBook Air – on a tray table, you know the awkward overlap. I usually end up propping the iPhone against my screen, hanging it off the palm rest, or just tossing it in my lap. With Live Activities on the Mac, I can stick to one device and keep the tray table clutter-free.

Considering notifications already sync and iPhone Mirroring arrived last year, Live Activities were ultimately the missing piece. On macOS Tahoe, they sit neatly collapsed in the menu bar, just like the Dynamic Island on iPhone, and you can click on one to expand it and see the full Live Activity. Another click on any of these Live Activities quickly opens the app on your iPhone via the Mirroring app – it all works together pretty seamlessly.

(Image credit: Future/Jacob Krol)

You can also easily dismiss them, and I have found they automatically expand for major updates, saving screen real estate on your Mac. If you already have a Live Activity that you really enjoy on your iPhone, there’s no extra work needed from the developer, as it will automatically appear here too.

All in all, it’s a small but super helpful tool that really excels in cramped spaces. So, if you’ve ever struggled with the same balancing act as I have with a tray table, your iPhone, and a MacBook, know that relief is on the way.

It's arriving in the Fall (September or October) with the release of macOS Tahoe 26. If you want it sooner, the public beta of macOS Tahoe 26 is out now, but you'll need to be okay with some bugs and slowdowns.


Brave or foolhardy? Huawei takes the fight to Nvidia CUDA by making its Ascend AI GPU software open source

Mon, 08/11/2025 - 17:42
  • Huawei makes its CANN AI GPU toolkit open source to challenge Nvidia’s proprietary CUDA platform
  • CUDA’s near 20-year dominance has locked developers into Nvidia’s hardware ecosystem exclusively
  • CANN provides multi-layer programming interfaces for AI applications on Huawei’s Ascend AI GPUs

Huawei has announced plans to make its CANN software toolkit for Ascend AI GPUs open source, a move aimed squarely at challenging Nvidia’s long-standing CUDA dominance.

CUDA, often described as a closed-off “moat” or “swamp,” has for years been viewed by some as a barrier for developers seeking cross-platform compatibility.

Its tight integration with Nvidia hardware has locked developers into a single vendor ecosystem for nearly two decades, with all efforts to bring CUDA functionality to other GPU architectures through translation layers blocked by the company.

Opening up CANN to developers

CANN, short for Compute Architecture for Neural Networks, is Huawei’s heterogeneous computing framework designed to help developers create AI applications for its Ascend AI GPUs.

The architecture offers multiple programming layers, giving developers options for building both high-level and performance-intensive applications.

In many ways, it is Huawei’s equivalent to CUDA, but the decision to open its source code signals an intent to grow an alternative ecosystem without the restrictions of a proprietary model.

Huawei has reportedly already begun discussions with major Chinese AI players, universities, research institutions, and business partners about contributing to an open-sourced Ascend development community.

This outreach could help accelerate the creation of optimized tools, libraries, and AI frameworks for Huawei’s GPUs, potentially making them more attractive to developers who currently rely on Nvidia hardware.

Huawei’s AI hardware performance has been improving steadily, with claims that certain Ascend chips can outperform Nvidia processors under specific conditions.

Reports such as CloudMatrix 384’s benchmark results against Nvidia running DeepSeek R1 suggest that Huawei’s performance trajectory is closing the gap.

However, raw performance alone will not guarantee developer migration without equivalent software stability and support.

While open-sourcing CANN could be exciting for developers, its ecosystem is in its early stages and may not come anywhere close to CUDA, which has been refined for nearly 20 years.

Even with open-source status, adoption may depend on how well CANN supports existing AI frameworks, particularly for emerging workloads in large language models (LLM) and AI writer tools.

Huawei’s decision could have broader implications beyond developer convenience, as open-sourcing CANN aligns with China’s broader push for technological self-sufficiency in AI computing, reducing dependence on Western chipmakers.

In the current environment, where U.S. restrictions target Huawei’s hardware exports, building a robust domestic software stack for AI tools becomes as critical as improving chip performance.

If Huawei can successfully foster a vibrant open-source community around CANN, it could present the first serious alternative to CUDA in years.

Still, the challenge lies not just in code availability, but in building trust, documentation, and compatibility at the scale Nvidia has achieved.

Via Tom's Hardware


4 things we learned from OpenAI’s GPT-5 Reddit AMA

Mon, 08/11/2025 - 17:00

OpenAI CEO Sam Altman and several other researchers and engineers came to Reddit the day after debuting the powerful new GPT-5 AI model for the time-honored tradition of an Ask Me Anything thread.

Though the discussion ranged over all kinds of technical and product elements, there were a few topics that stood out as particularly important to posters based on the frequency and passion with which they were discussed. Here are a few of the most notable things we learned from the OpenAI AMA.

Pining for GPT-4o

The biggest recurring theme in the AMA was a mournful wail from users who loved GPT-4o and felt personally attacked by its removal. That's not an exaggeration, as one user posted, “BRING BACK 4o GPT-5 is wearing the skin of my dead friend.” To which Altman replied, “what an…evocative image. ok we hear you on 4o, working on something now.”

This wasn’t just one isolated request, either. Another post asked to keep both GPT-4o and GPT-4.1 alongside GPT-5, arguing that the older models had distinct personalities and creative rhythms. Altman admitted they were “looking into this now.”

Most requests were a little more subdued, with one poster asking, “Why are we getting rid of the variants and 4o when we all have unique communication styles? Please bring them back!”

Altman’s answer was brief but direct in conceding the point. He wrote, “ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!). we are going to bring it back for plus users, and will watch usage to determine how long to support it."

It is interesting that so many heavy users prefer the style of the older model over the objectively more capable newer ones.

Filtering history

Another big topic was ChatGPT's safety filter, both in its current form and before GPT-5, which many users complained was overzealous. One user described a scenario where they’d been flagged for discussing historical topics: a response about Gauguin was flagged and deleted because the artist was a "sex pest," and the user's own clarifying question was itself flagged.

Altman’s answer was a mixture of agreement and reality check. “Yeah, we will continue to improve this,” he said. “It is a legit hard thing; the lines are often really quite blurry sometimes.” He stressed that OpenAI wants to allow “very wide latitude” but admitted that the boundary between unsafe and safe content is far from perfect, but that "people should of course not get banned for learning."

New tier

Another questioner zeroed in on a gap in OpenAI’s subscription model: "Are you guys planning to add another plan for solo power users that are not pros? 20$ plan offers too little for some, and the $200 tier is overkill."

Altman’s answer was succinct, simply saying, “Yes we will do something here.” No details, just a confirmation that the idea’s on the table. That brevity leaves open possibilities from 'next week' to just saying 'the discussion starts now.' But the pricing gap is a big deal for power users who find themselves constrained by the Plus tier but can’t justify enterprise pricing. If OpenAI does create an intermediate tier, it could reshape how dedicated individual users engage with the platform.

The future

At the end of the AMA, Altman shared some new information about the current and future state of ChatGPT and GPT-5. He started by admitting to some issues with the release, writing that "we expected some bumpiness as we roll out so many things at once. But it was a little more bumpy than we hoped for!"

That bumpiness ended up making GPT-5 seem less impressive than it should have until now.

"GPT-5 will seem smarter starting today," Altman wrote. "Yesterday, we had a sev [severity, meaning system issue] and the autoswitcher was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber."

He also promised more access for ChatGPT Plus users, with double the rate limits, as well as the upcoming return of GPT-4o, at least for those same subscribers. The AMA did paint a clearer picture of what OpenAI is willing to change in response to public pressure.

The return of GPT-4o for Plus users at least acknowledges that raw capability isn’t the only metric that matters. If users are this vocal about keeping an older model alive, future releases of GPT-5 and beyond may be designed with more deliberate flavors built in beyond just the personality types promised for GPT-5.


MacBook screens can be broken with a simple greeting card, viral TikTok video warns – and Apple has explained the reason why

Mon, 08/11/2025 - 17:00
  • A TikTok user damaged their MacBook display in an unexpected way
  • The issue was caused by a piece of card placed under the lid
  • Even something as innocuous as this can break a laptop screen

For many MacBook owners, it’s a nightmare come true: you open the lid of your pricey laptop and switch it on, only to find the display is a mess, with black bars and glitchy colors everywhere you look. The screen has been ruined, and it’s going to cost a whole lot to put it right.

Worryingly, it’s actually a lot easier to experience this than you might expect: just one seemingly innocuous action can cause hundreds of dollars of damage.

That’s something TikTok user classicheidi found out the hard way. In a video uploaded to the social media platform, classicheidi explained that they had placed a piece of card on the keyboard of their MacBook Air, then closed the lid.

When they opened it again a while later, the screen was ruined.

A costly mistake

This is an unfortunate incident, but there’s a reason it happened. It’s not because the displays of Apple’s laptops (or those of any other manufacturer, for that matter) are weak or poorly made. But while they should certainly be treated with care, there’s another issue at play.

It’s what Apple describes in a support document as the “tight tolerances” of its laptops. Apple’s MacBooks are made to be as thin as possible, which means the gap between the keyboard and display is very small when the lid is closed.

Anything placed in that gap – even something as modest as a piece of card – can be pushed up against the display, with the resulting pressure leading to serious damage.

For that reason, Apple warns that “leaving any material on your display, keyboard, or palm rest might interfere with the display when it’s closed and cause damage to your display.” If you have a camera cover, a palm rest cover, or a keyboard cover, Apple says you should remove it before closing your laptop’s lid to avoid this kind of scenario – unfortunately, it’s something we've seen before.

If you want to sidestep the kind of outcome classicheidi suffered, it’s important to ensure there’s nothing between your laptop’s keyboard and screen when you close it. If there is, you might open it up to “the biggest jump scare of the century,” in classicheidi’s words.


Fake TikTok shops found spreading malware to unsuspecting victims - here's how to stay safe

Mon, 08/11/2025 - 16:04
  • Fraudulent TikTok Shops driving victims into fake portals designed to steal cryptocurrency and data
  • Scammers mimic trusted seller profiles and lure shoppers with unrealistic discounts across popular platforms
  • SparkKitty malware secretly collects sensitive data from devices, enabling long-term unauthorized surveillance and control

Cybercriminals are now making use of TikTok Shops to spread malware and steal funds from unsuspecting young users of the platform.

The campaign, revealed by security experts at CTM360, mimics the profile of legitimate ecommerce sellers to build its credibility, often using AI-generated content.

In addition to TikTok, these fake shops can also be found on Facebook, where their modus operandi is to advertise massive price cuts to lure potential victims.

Exploiting brand trust for profit

The main aim of these malicious actors is not only to defraud users, mostly in cryptocurrency, but also to deliver malicious software and steal login details.

At the moment, TikTok Wholesale and Mall pages have been linked to over 10,000 such fraudulent URLs.

These URLs, which look like official platforms, offer “buy links” that redirect visitors to a criminal phishing portal.

Once users click the link and enter the portal, they will be made to pay a deposit into an online wallet or purchase a product – the online wallet is fake and the product does not exist.

Some operations take the deception further by posing as an affiliate management service, pushing malicious apps disguised as tools for sellers.

More than 5,000 app download sources have been uncovered, many using embedded links and QR codes to bypass traditional scrutiny.

One identified threat, known as SparkKitty, is capable of harvesting data from both Android and iOS devices.

It can enable long-term access to compromised devices, creating ongoing risk even after the initial infection.

The malware is often delivered through these fake affiliate applications, turning what appears to be a legitimate opportunity into a direct path for account takeover and identity theft.

Because cryptocurrency transactions are irreversible, victims have little recourse once funds are transferred.

A common thread in the campaign is the use of pressure tactics, with countdown timers or limited-time discounts designed to force quick decisions.

These tactics, while common in legitimate marketing, make it harder for users to pause and assess the authenticity of an offer.

Domain checks reveal that many of the scam sites use inexpensive extensions such as .top, .shop, or .icu, which can be purchased and deployed rapidly.

How to stay safe
  • Make sure you check the website address carefully before entering your payment information. Every detail of the website should match the legitimate domain.
  • Ensure the site uses secure HTTPS encryption
  • If the price cut feels too huge, follow your gut and stay away.
  • Do not allow a countdown timer to pressure you into making payment; this pressure is a common tactic used by malicious actors
  • Always insist on the standard payment methods and avoid direct wire transfers or cryptocurrency, as these are harder to trace and often used in scams.
  • Install and maintain a trusted security suite that combines robust antivirus protection with real-time browsing safeguards to block malicious websites.
  • Configure your firewall to actively monitor and filter network traffic, preventing unauthorized access and blocking suspicious connections before they reach your device.
  • Pay close attention to alerts from reputable security programs, which can detect and warn you about known phishing sites or fraudulent activities in real time.
  • Remain cautious even when shopping on professional-looking platforms, as well-designed storefronts can still conceal sophisticated attempts at theft.
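The first two checks above – inspecting the domain and confirming HTTPS – can be sketched as a simple script. This is a minimal illustration only; the shortlist of suspicious extensions is a hypothetical example drawn from the TLDs mentioned in the report, not an exhaustive blocklist:

```python
from urllib.parse import urlparse

# Hypothetical shortlist of cheap, commonly abused extensions (illustrative only)
SUSPICIOUS_TLDS = {".top", ".shop", ".icu"}

def looks_risky(url: str) -> bool:
    """Flag a URL if it is not served over HTTPS or uses a commonly abused TLD."""
    parts = urlparse(url)
    if parts.scheme != "https":
        return True  # no HTTPS encryption
    host = parts.hostname or ""
    return any(host.endswith(tld) for tld in SUSPICIOUS_TLDS)

print(looks_risky("http://tiktok-mall.top/deal"))  # True: no HTTPS, scammy TLD
print(looks_risky("https://www.tiktok.com/shop"))  # False
```

A check like this is no substitute for the full list of precautions – a convincing scam can sit on a reputable-looking domain with valid HTTPS – but it shows how quickly the most obvious red flags can be screened.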

Roblox is sharing its AI tool to fight toxic game chats – here’s why that matters for kids

Mon, 08/11/2025 - 16:00

Online game chats are notorious for vulgar, offensive, and even criminal behavior. Even if it's only a tiny percentage, the many millions of hours of chat can accumulate a lot of toxic interactions in a way that's a problem for players and video game companies, especially when it involves kids. Roblox has a lot of experience dealing with that aspect of gaming and has used AI to create Sentinel, a whole system for enforcing safety rules among its more than 100 million mostly young daily users. Now, it's open-sourcing Sentinel, offering the AI and its capacity for identifying grooming and other dangerous behavior in chat before it escalates for free to any platform.

This isn’t just a profanity filter that gets triggered when someone types a curse word. Roblox has always had that. Sentinel is built to watch patterns over time. It can track how conversations evolve, looking for subtle signs that someone is trying to build trust with a kid in potentially problematic ways. For instance, it might flag a long conversation where an adult-sounding player is just a little too interested in a kid’s personal life.

Sentinel helped Roblox moderators file about 1,200 reports to the National Center for Missing and Exploited Children in just the first half of this year. As someone who grew up in the Wild West of early internet chatrooms, where “moderation” usually meant suspecting that people who used correct spelling and grammar were adults, I can’t overstate how much of a leap forward that feels.

Open-sourcing Sentinel means any game or online platform, whether as big as Minecraft or as small as an underground indie hit, can adapt Sentinel and use it to make their own communities safer. It’s an unusually generous move, albeit one with obvious public relations and potential long-term commercial benefits for the company.

For kids (and their adult guardians), the benefits are obvious. If more games start running Sentinel-style checks, the odds of predators slipping through the cracks go down. Parents get another invisible safety net they didn’t have to set up themselves. And the kids get to focus on playing rather than navigating the online equivalent of a dark alley.

For video games as a whole, it’s a chance to raise the baseline of safety. Imagine if every major game, from the biggest esports titles to the smallest cozy simulators, had access to the same kind of early-warning system. It wouldn’t eliminate the problem, but it could make bad behavior a lot harder to hide.

AI for online safety

Of course, nothing with “AI” in the description is without its complications. The most obvious one is privacy. This kind of tool works by scanning what people are saying to each other, in real time, looking for red flags. Roblox says it uses one-minute snapshots of chat and keeps a human review process for anything flagged. But you can’t really get around the fact that this is surveillance, even if it’s well-intentioned. And when you open-source a tool like this, you’re not just giving the good guys a copy; you’re also making it easier for bad actors to see how you're stopping them and come up with ways around the system.

Then there’s the problem of language itself. People change how they talk all the time, especially online. Slang shifts, in-jokes mutate, and new apps create new shorthand. A system trained to catch grooming attempts in 2024 might miss the ones happening in 2026. Roblox updates Sentinel regularly, both with AI training and human review, but smaller platforms might not have the resources to keep up with what's happening in their chats.

And while no sane person is against stopping child predators or jerks deliberately trying to upset children, AI tools like this can be abused. If certain political talk, controversial opinions, or simply complaints about the game are added to the filter list, there's little players can do about it. Roblox and any companies using Sentinel will need to be transparent, not just with the code, but also with how it's being deployed and what the data it collects will be used for.

It's also important to consider the context of Roblox's decision. The company is facing lawsuits over what's happened with children using the platform. One lawsuit alleges a 13‑year‑old was trafficked after meeting a predator on the platform. Sentinel isn't perfect, and companies using it could still face legal problems. Ideally, it would serve as a component of online safety setups that include things like better user education and parental controls. AI can't replace all safety programs.

Despite the very real problems of deploying AI to help with online safety, I think open-sourcing Sentinel is one of the rare cases where the upside of using AI is both immediate and tangible. I’ve written enough about algorithms making people angry, confused, or broke to appreciate when one is actually pointed toward making people safer. And making it open-source can help make more online spaces safer.

I don’t think Sentinel will stop every predator, and I don’t think it should be a replacement for good parenting, better human moderation, and educating kids about how to be safe when playing online. But as a subtle extra line of defense, Sentinel has a part to play in building better online experiences for kids.


I’ll upgrade my M1 MacBook Pro for the first time in years if this rumor is true – and it might be the last MacBook I buy this decade

Mon, 08/11/2025 - 16:00

How often do you upgrade your MacBook? I’m willing to bet it’s not very often, and certainly not every year. If so, that’s great news for you, but perhaps not so pleasing for Apple, which would rather you stumped up for one of the best MacBooks as often as possible. Yet is there really a reason to upgrade if your laptop does everything you need for years at a time?

Take me, for example. I’ve had a MacBook Pro with M1 Pro chip since 2022, and it’s served me superbly well in that time. It handles all my work without a hitch and gives me strong gaming performance for the titles I play. Even Cyberpunk 2077 performs impressively well if I turn frame generation on, and I’m happy to do that since it boosts the frame rates from my integrated laptop chip – which is several generations out of date – up to the mid-70s.

That all means that over the past few years, I’ve looked at advances in the MacBook Pro and decided to take a pass. New chips have been the only major changes of note, and with no big design adjustments or feature improvements to tempt me – and my M1 Pro chip performing so consistently – there’s been no need to rock the boat.

However, I’m starting to get the feeling that this situation is not going to last. Judging by the latest rumors, things could change in a big way in the next year or two, and it might be harder than ever for me to resist the lure of a new MacBook Pro. The good news, though, is that this step up could last me well into the next decade.

The OLED revolution

(Image credit: Apple)

That idea centers around Apple’s M6 chip, which is expected to land in the MacBook Pro in late 2026 or early 2027. This model is expected to come with an OLED display as well as the new chip, according to Bloomberg journalist Mark Gurman’s latest Power On newsletter.

There, Gurman says that the upcoming M6 MacBook Pro “represents enough of a change to finally move the needle” in his opinion, bringing with it a new chip, an improved screen, plus a thinner, redesigned chassis for the first time in several years.

Gurman is not the only person who could be swayed by this upcoming Mac: it’s the kind of upgrade that might convince me to open the purse strings as well. After all, by the time the M6 model launches, my M1 Pro laptop will be five generations out of date and might start showing its age a little more. It’s still going strong for now, but that won’t be the case forever.

But the bigger change will be the OLED display. This has been rumored for years, but Apple’s obsessive perfectionism has meant we still haven’t seen it in action. When it finally arrives, though, Apple’s gaming gains could finally be married up with the kind of visual output they deserve. The question of whether MacBooks are actually gaming machines has been discussed much over the last few years, but adding an OLED display into the mix would surely settle the question in Apple’s favor once and for all.

What does the future hold?

(Image credit: Future)

But the fact that it would take an upgrade as momentous as this to convince me to get a new MacBook raises another question: what happens after the M6 MacBook Pro has been and gone?

MacBook upgrades aren’t usually as feature-packed as the one we’re expecting when the M6 chip and OLED display come around. The M4 MacBook Pro, for example, offered a new chip, added Center Stage to the front-facing camera, brought Thunderbolt 5 connectivity to the M4 Pro and M4 Max chips, added a nano-texture coating to the display… and not a whole lot else. Those changes are fine, but they’re not groundbreaking.

Apple has, in some ways, created a problem for itself: its chips are now so performant that they can last for generations, dissuading people from upgrading. Contrast that to the bad old Intel Mac days, when the chips were so underpowered that many people felt forced into expensive annual upgrades, and it’s clear that Apple users are in a better spot than ever.

These days, Apple silicon chips have a lot more longevity, which means it’s harder for Apple to persuade its users to buy new MacBooks on the regular. My hope, at least, is this means Apple will bring more significant new features in the coming years in a bid to tempt upgraders.

But even if it doesn’t, just having a chip that lasts years without faltering is a win for Apple fans, and my M1 Pro is a testament to that. If I upgrade to the M6 MacBook Pro and its OLED display, I’m hoping the improvements it brings last me half a decade or more, just as my long-serving M1 Pro chip has done before it.


This Meta prototype is a seriously upgraded Meta Quest 3 – and you can try it for yourself

Mon, 08/11/2025 - 15:00
  • Meta has two new VR headsets you can try
  • They're prototypes that aren't usually accessible to the public
  • You'll have to attend SIGGRAPH 2025 to give them a whirl

Every so often, Meta will showcase some of its prototype VR headsets – models which aren’t for public release like its fully fledged Meta Quest 3, but which allow its researchers to test attributes pushed far beyond current commercial headset limits. Like the Starburst headset, which offered a peak brightness of 20,000 nits.

Tiramisu and Boba 3 – two more of its prototypes – are more concerned with offering “retinal resolution” and an extremely wide field of view rather than just boasting incredible brightness, but like Starburst, Meta is giving folks the chance to demo these usually lab-exclusive headsets.

That is, if you happen to be attending SIGGRAPH 2025 in Vancouver.

(Image credit: Meta)

I’ve been to SIGGRAPH previously, and it’s full of futuristic XR tech and demos that companies like Meta and its Reality Labs have been cooking up.

Though usually the prototypes look just like Tiramisu. That is to say, a little impractical.

Tiramisu does at least seem to be a headset you can wear normally, even if it does look like a Meta Quest 2 that has been comically stretched – Starburst, for example, had to be suspended from a metal frame as it was far too heavy to wear.

But Tiramisu doesn’t look like the most practical model. The trade-off is that Meta can outfit the headset with µOLED displays and other tech like custom lenses to deliver high contrast and resolution – 3x and 3.6x respectively of what the Meta Quest 3 offers.

As a result, Tiramisu is the closest Meta has got to achieving the “visual Turing test” – virtual visuals that are indistinguishable from real ones.

(Image credit: Meta)

Boba 3, on the other hand, looks like a headset you could buy tomorrow, and the way Meta talks about it, it does feel like something inspired by it could arrive at some point in the future.

That’s because it looks surprisingly compact – apparently it weighs just 660g, a little less than a Quest 3 with Elite Strap at 698g. It also has a 4K by 4K resolution, and – the reason this headset is special – it boasts a horizontal field of view of 180° and a vertical field of view of 120°.

That’s significantly more than the 110° and 96°, respectively, offered by the Meta Quest 3, and while the 3 covers about 46% of a person’s field of view, Boba 3 captures about 90%.

The only issue is Boba 3 does require a “top-of-the-line GPU and PC system”, according to Display Systems Research Optical Scientist Yang Zhao. That’s because it needs to fill in the extra space the larger field of view creates, leading to higher compute requirements.

Though Zhao did note that Boba 3 is “something that we wanted to send out into the world as soon as possible”, and it does resemble goggles in a way – the design direction Meta’s next headset is said to be taking.

So we’ll have to keep our eyes peeled to see what Meta launches next, but while only a few lucky folks will get to try Boba 3 at SIGGRAPH, I’m hoping many more of us will get to experience the next-gen VR headsets it inspires.


MRI scans, X-rays and more leaked online in major breach - over a million healthcare devices affected, here's what we know

Mon, 08/11/2025 - 14:27
  • Modat found more than 1.2 million misconfigured devices leaking info
  • This includes MRI scans, X-rays, and other sensitive files, together with patient contact data
  • The healthcare industry needs a proactive approach to cybersecurity, researchers warn

Researchers have warned there are currently over a million internet-connected healthcare devices which are misconfigured, leaking all the data they generate online - putting millions of people at risk of identity theft, phishing, wire fraud, and more.

Modat recently scanned the internet in search of misconfigured, non-password-protected devices and their data, and by using the tag ‘HEALTHCARE’, they found more than 1.2 million devices which were generating, and leaking, confidential medical images including MRI scans, X-rays, and even blood work from hospitals all over the world.

“Examples of data being leaked in this way include brain scans and X-rays, stored alongside protected health information and personally identifiable information of the patient, potentially representing both a breach of patient’s confidentiality and privacy,” the researchers explained.

Weak passwords and other woes

In some cases, the researchers found information unlocked and available for anyone who knows where to look - and in other cases, the data was protected with such weak and predictable passwords that it posed no challenge to break in and grab them.

“In the worst-case scenario, leaked sensitive medical information could leave unsuspecting victims open to fraud or even blackmail over a confidential medical condition,” they added.

In theory, a threat actor could learn of a patient’s condition before the patient does. Armed with names and contact details, they could reach out to the patient and threaten to release the information to friends and family unless a ransom is paid.

Alternatively, they could impersonate the doctor or the hospital and send phishing emails inviting the victim to “view sensitive files” which would just redirect them to download malware or share login credentials.

The majority of the misconfigured devices are located in the United States (174K+), with South Africa a close second (172K+). Australia (111K+), Brazil (82K+), and Germany (81K+) round off the top five.

For Modat, a proactive security culture “beats a reactive response”.

“This research reinforces the urgent need for comprehensive asset visibility, robust vulnerability management, and a proactive approach to securing every internet-connected device in healthcare environments, ensuring that sensitive patient data remains protected from unauthorized access and potential exploitation," commented Errol Weiss, Chief Security Officer at Health-ISAC.

You might also like
Categories: Technology

Your webcam could be hacked and hijacked into malware attacks - researchers warn Lenovo devices specifically at risk

Mon, 08/11/2025 - 13:32
  • Researchers claim to have found a way to turn a Lenovo webcam into a BadUSB device
  • BadUSB is a firmware vulnerability that turns a USB stick into a malware-writing weapon
  • Lenovo released a firmware update, so users should patch now

Your device's webcam can be reprogrammed to turn on you and serve as a backdoor for a threat actor, experts have warned.

Security researchers at Eclypsium claim certain Lenovo webcam models powered by Linux can be turned into so-called “BadUSB” devices.

The bug is now tracked as CVE-2025-4371. It still doesn’t have a severity score, but it has a nickname - BadCam.

Reflashing firmware

Roughly a decade ago, researchers found a way to reprogram a USB device’s firmware to act maliciously, letting it mimic keyboards, network cards, or other devices. This allows it to run commands, install malware, or steal data, and its biggest advantage over conventional malware is that it can bypass traditional security measures.

The vulnerability was dubbed “BadUSB”, and was seen abused in the wild when the FIN7 threat group started mailing weaponized USB drives to US-based organizations. At one point, the FBI even started warning people not to plug in USB devices found in office toilets, airports, or received in the postbox.

Now, Eclypsium says that the same thing can be done with certain USB webcams, built by Lenovo and powered by Linux.

"This allows remote attackers to inject keystrokes covertly and launch attacks independent of the host operating system," Eclypsium told The Hacker News.

"An attacker who gains remote code execution on a system can reflash the firmware of an attached Linux-powered webcam, repurposing it to behave as a malicious HID or to emulate additional USB devices," the researchers explained.

"Once weaponized, the seemingly innocuous webcam can inject keystrokes, deliver malicious payloads, or serve as a foothold for deeper persistence, all while maintaining the outward appearance and core functionality of a standard camera."

Gaining remote access to a webcam requires the device to be compromised in the first place, in which case the attackers can do what they please anyway. However, users should be careful not to plug in other people’s webcams, or buy such products from shady internet shops.

The Lenovo 510 FHD and Lenovo Performance FHD webcams were said to be vulnerable, and firmware update version 4.8.0 was released to mitigate the threat.
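Since the fix ships as firmware version 4.8.0, administrators could script a quick check of reported firmware versions across a fleet. This is a hedged sketch: the inventory dict stands in for whatever your asset tooling actually reports, and the comparison assumes simple dotted-numeric version strings:

```python
# Sketch: flag webcams reporting firmware older than the patched 4.8.0
# release. The fleet dict is a hypothetical stand-in for real inventory
# data; Lenovo's own tooling reports versions its own way.

PATCHED = (4, 8, 0)

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '4.7.2' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def needs_update(version: str) -> bool:
    return parse_version(version) < PATCHED

fleet = {"desk-12": "4.7.2", "desk-13": "4.8.0", "lab-cam": "4.8.1"}
to_patch = sorted(host for host, v in fleet.items() if needs_update(v))
print(to_patch)  # -> ['desk-12']
```

Tuple comparison does the right thing here because Python compares tuples element by element, so (4, 7, 2) < (4, 8, 0) without any string-ordering pitfalls.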

You might also like
Categories: Technology

I tested Samsung and LG's cheapest OLED TVs side-by-side to see which TV comes out on top – here's what happened

Mon, 08/11/2025 - 13:00

LG and Samsung have been locked in an OLED TV battle for a number of years, ever since Samsung reentered the OLED TV market in 2022 with the Samsung S95B.

Samsung has since been our TV of the year winner for two years in a row, with the Samsung S90C taking the crown in 2023 and the Samsung S95D taking the title in 2024. Even so, several LG OLED models still sit on our list for the best OLED TV.

I’ve already tested both brands' 2025 flagship models, the LG G5 and Samsung S95F, side-by-side. Recently, however, I also had the chance to do a side-by-side test of their entry-level OLEDs, the LG B5 and Samsung S85F.

It’s worth noting that both these TVs use the same standard W-OLED display panel. So they can’t really be that different, right? Well, let’s look at the results of my comparison to find out.

Brightness and contrast

The Samsung S85F (right) demonstrated higher brightness in some highlight areas despite having the same panel as the LG B5 (left) (Image credit: Future)

With both TVs using the same panel, I expected their brightness measurements to be similar, and they broadly were: when I measured peak HDR brightness, the LG B5 clocked in at 668 nits and the S85F at 777 nits. I assumed a difference of just over 100 nits wouldn’t make an impact on the picture, but I was wrong.

Although the difference was subtle, the S85F’s picture did have bolder highlights in specific movie scenes. Watching The Batman, highlights from light sources such as lamps and torches in the opening subway fight and crime scene sections were indeed brighter on the S85F. The B5 still demonstrated solid brightness, but I found my eye more drawn to the S85F’s picture.

In demo footage from the Spears & Munsil UHD Benchmark 4K Blu-ray, with images such as the sun behind a satellite dish or a horizon at sunset, the S85F had a bit more vibrancy, which made these highlight areas look more striking.

Both the LG B5 (left) and Samsung S85F (right) showed very good contrast, but the B5 handled darker tones better. (Image credit: Warner Bros. / Future)

Both the B5 and S85F demonstrated excellent contrast throughout testing. In The Batman, light sources balanced well with dark tones on screen, creating a good sense of contrast, though the S85F’s higher brightness gave it an edge.

Both TVs also had refined shadow detail when watching The Batman, but the B5 displayed deeper, richer black tones, and it better maintained shadow detail, with the S85F showing minor black crush. In Oppenheimer’s black and white scenes, both TVs again showed a good range of gray tones, but here again, the B5 maintained details in darker areas more accurately than the S85F.

I noticed that while Filmmaker Mode was the more accurate mode for darker movies such as Oppenheimer and The Batman, the differences between the two TVs were more obvious in Cinema mode, especially when it came to brightness, contrast and shadow detail.

Color profile

Both the LG B5 (left) and Samsung S85F showcased vivid colors, but the S85F's had more pop, whereas the B5's looked more natural (Image credit: Universal Pictures / Future )

Where the B5 and S85F really differed was in their color. Although both use the same OLED panel type, the S85F’s colors had a greater visual punch, especially when evaluating both TVs with their Cinema picture preset active.

In Wicked, during the Wizard & I scene where Elphaba stands under some pink flowers, the flowers looked more vibrant on the S85F than the B5, giving them an eye-popping quality. Elphaba’s green skin also appeared brighter, and later in the Emerald City, the greens appeared more dazzling on the S85F.

Where the B5 differed here was in its color depth. The B5’s deeper blacks had the effect of making the pink flowers and Elphaba’s green skin look richer and more lifelike compared to the S85F.

In the same Spears & Munsil footage, shots of colorful butterflies and flowers looked rich and refined on both TVs, but once again, the B5 displayed deeper, richer, and more subtle hues, whereas the S85F had more outright colorful images. I found myself more drawn to the S85F, especially with both TVs in Cinema mode.

Sports

The LG B5 (left) had the better motion handling for sports compared to the Samsung S85F (right) (Image credit: Future)

One thing I wanted to test on these TVs was sports viewing. OLEDs typically have very good motion handling, which is why they always feature in our best TVs for sport guide. I’ve found that Samsung TVs require more setup effort when it comes to sports than LG TVs, and it was no different with the S85F.

In Standard mode (color in the B5’s Sports mode is too oversaturated, so I preferred not to use it), the LG B5 displayed superior motion handling. An MLS soccer game I watched via Prime Video in this mode looked fluid and smooth throughout viewing, with no settings changes required.

The S85F, also in its Standard preset, showed several motion artifacts, such as ghosting on the ball and some stuttering. Changing blur and judder reduction to 5 did help, but even then, there was some picture judder compared to the B5.

Of the two TVs, the B5 was the clear winner when it came to motion handling.

Which TV should you choose?

With many similarities between the LG B5 (left) and Samsung S85F (right), the choice may ultimately come down to price (Image credit: Future)

After testing both the LG B5 and Samsung S85F side-by-side, the differences are generally subtle, so which one you should buy will likely come down to personal preference.

If you want a brighter, bolder-looking TV with more vibrant color, opt for the S85F. If you want a more natural-looking TV with richer blacks, opt for the B5.

Both TVs have the full suite of gaming features we look for on the best gaming TVs, and both have great smart TV platforms. But sports fans will want to go for the B5 due to its superior motion handling.

During my testing, I ultimately found myself more drawn to the S85F. So that’s the one I’d choose, but it was very close.

Honestly, it could all come down to discounts. The 55-inch B5 costs $1,499.99 / £1,399 / AU$1,995, and the 55-inch Samsung S85F costs $1,499.99 / £1,399 / AU$2,495, so in the US and UK, there's currently nothing between them. But as we approach the end of the year, both TVs will inevitably receive discounts, and the amount of those discounts could determine which TV is the better overall value.

You might also like
Categories: Technology

Hackers are now mimicking government websites using AI - everything you need to know to stay safe

Mon, 08/11/2025 - 12:31
  • Threat actors cloned Brazilian government websites using generative AI
  • The sites were used to steal personal information and money
  • In both instances, the sites were almost identical, experts warn

Experts have warned hackers recently used a generative AI tool to replicate several web pages belonging to the Brazilian government in an effort to steal sensitive personal information and money.

The fake websites were examined by Zscaler ThreatLabz researchers, who discovered multiple indicators of the use of AI to generate code.

The websites look almost identical to the official sites, with the hackers using SEO poisoning to make them appear higher in search results, and therefore seem more legitimate.

AI generated government websites

In the campaign examined by ThreatLabz, two websites were spotted mimicking important government portals. The first was for the State Department of Traffic’s portal for applying for a driver’s license.

(Image credit: ZScaler ThreatLabz)

The two sites appear to be near-identical, with the only major difference being in the website’s URL. The threat actor used ‘govbrs[.]com’ as the URL prefix, mimicking the official URL in a way that would be easily overlooked by those visiting the site. The webpage was also boosted in search results using SEO poisoning, making it appear to be the legitimate site.
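Lookalike domains of this kind can often be flagged automatically by measuring edit distance against a known-good label - 'govbrs' is just one character away from 'govbr'. A minimal sketch, where the brand list and distance threshold are illustrative assumptions rather than any vendor's detection logic:

```python
# Sketch: flag domains whose leading label sits within a small edit
# distance of a legitimate brand label, as with 'govbrs.com' vs the
# official gov.br. The brand list and threshold are illustrative.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def suspicious(domain: str, brands=("govbr",), threshold: int = 1) -> bool:
    """True if the first label is a near-miss (but not an exact match) of a brand."""
    label = domain.split(".")[0].replace("-", "")
    return any(0 < edit_distance(label, b) <= threshold for b in brands)

print(suspicious("govbrs.com"))   # -> True
print(suspicious("example.com"))  # -> False
```

Real brand-protection tooling layers on homoglyph handling, TLD checks, and allowlists of legitimate domains, but simple edit distance already catches this campaign's pattern.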

Once on the site, users are invited to enter their CPF number (a form of personal identification number similar to an SSN), which the hacker would ‘authenticate’ using an API.

The victim would then fill out a web form asking for personal information such as name and address, before being asked to schedule psychometric and medical exams as part of the driving application.

The victim would then be prompted to use Pix, Brazil’s instant payment system, to complete their application. The funds would go directly to the hacker’s account.

A second website based on the job board for the Brazilian Ministry of Education lured applicants into handing over their CPF number and completing payments to the hacker. This website used similar URL squatting techniques and SEO poisoning to appear legitimate.

The user would apply to fake job listings, handing over personal information before again being prompted to use the Pix payment system to complete their application.

In ThreatLabz’s technical analysis of both sites, much of the code showed signs of being generated by Deepsite AI from a prompt to copy the official website - telltale signs included TailwindCSS styling and highly structured code comments stating “In a real implementation…”

The CSS files of the website also include templated instructions on how to reproduce the government sites.

The ThreatLabz blog concludes, “While these phishing campaigns are currently stealing relatively small amounts of money from victims, similar attacks can be used to cause far more damage. Organizations can reduce the risk by ensuring best practices along with deploying a Zero Trust architecture to minimize the attack surface.”

You might also like
Categories: Technology

Sam Altman says the super-powerful ChatGPT-5 Pro might be coming to Plus accounts, but with one big limitation

Mon, 08/11/2025 - 12:06
  • Altman's tweet suggests that ChatGPT-5 Pro is coming to Plus subscribers
  • It will be limited to a few queries a month
  • The move would add more confusion to the ChatGPT Plus model selector

Following the backlash against OpenAI removing ChatGPT-4o when it introduced ChatGPT-5, the AI giant has now restored access to ChatGPT-4o, but only for ChatGPT Plus subscribers.

Free tier users are limited to just ChatGPT-5 for now, but it seems that CEO Sam Altman and OpenAI aren’t done making changes to its LLM lineup just yet.

In reply to a post on X praising how good GPT-5 Pro is, Altman responded, “We are considering giving a (very) small number of GPT-5 Pro queries each month to Plus subscribers so they can try it out!”

we are considering giving a (very) small number of GPT-5 pro queries each month to plus subscribers so they can try it out! i like it too. but yeah if you wanna pay us $1k a month for 2x the input tokens feels like we should find a way to make that happen... https://t.co/9qC0rsDl6z (August 11, 2025)

Plus users currently get a choice between ChatGPT-5 for fast answers and ChatGPT-5 Thinking for slower, but more thoughtful answers. ChatGPT-5 Pro is essentially the best of both worlds, delivering thoughtful answers at speed.

Making even a few queries a month available to Plus users would represent a serious added value to the $20 (£20 / AU$30) monthly subscription. OpenAI describes ChatGPT-5 Pro as “research grade” AI, and it’s currently only available to $200 (£200 / AU$300) a month ChatGPT Pro subscribers.

The current Plus user selection box, with GPT-4o added. (Image credit: Future)

Model confusion

Before I get too excited, it's worth noting that the word “considering” in Altman’s tweet means this isn’t definitely going to happen. However, if Altman thinks it’s a good idea, then, being the CEO, he can probably make it happen.

Part of the ethos of ChatGPT-5 was to do away with the confusing LLM line-up and naming conventions that had arisen around ChatGPT-4. The streamlined ChatGPT-5 was supposed to simplify all the different options and intelligently decide which version of the model would best respond to your query.

By giving Plus users access to ChatGPT-5 Pro, in addition to reintroducing ChatGPT-4o, we will essentially be back in the same old situation where people are given too much choice about which model to use, meaning that OpenAI still has a product naming and line-up problem.

You might also like
Categories: Technology

Bad news Microsoft workers - tech giant is "considering" remote working crackdown, and employees could be ordered back to the office soon

Mon, 08/11/2025 - 12:02
  • Microsoft is reportedly looking to formalize three-day in-office working policy
  • Rivals like Amazon now ask for full-time office attendance
  • Workers must prepare for more changes, reports claim

Microsoft could be the latest tech giant to explore a stricter in-office working policy, with reports claiming the company is considering enacting a three-day office-working policy for most employees.

Until now, workers have been able to spend around half of their time at home (or away from the office) despite rivals like Amazon enforcing stricter full-time office-working policies.

A Microsoft spokesperson told Business Insider the company had been exploring changes to the policy, but no official alterations have been made yet.

Microsoft considering upping its office-working days

The report claims an official Microsoft announcement could come as soon as September 2025, with rollout of any changes arriving as soon as January 2026, although dates and indeed policies may vary depending on location.

Reports of upcoming changes come after the company has made other changes to its workforce, including ongoing worker readjustments and an updated PIP framework to more quickly exit underperforming workers.

In July 2025, Microsoft laid off around 9,000 of its workers, and two months earlier in May a further 6,000 workers lost their jobs.

Company CFO Amy Hood told workers in an internal memo (seen by Business Insider) that they should prepare for another year of "intensity."

"We're entering FY26 with clear priorities in security, quality, and AI transformation, building on our momentum and grounded in our mission and growth-mindset culture," she added.

Although the company has undergone major layoffs in recent months, hiring efforts in other areas and a broader restructuring mean the actual overall headcount has changed little.

Microsoft CEO Satya Nadella recently said the layoffs had been "weighing heavily" on him, likening the ongoing transformation to that of the 1990s, when PCs and software became democratized, and blaming the shifts on evolving customer needs.

Microsoft told us that it is looking at refreshing its flexible working guidelines, as it has done many times before. The company has a page dedicated to its flexible work approach, which reads "No 'one size fits all'."

You might also like
Categories: Technology
