

The path to European data sovereignty

TechRadar News - Tue, 08/19/2025 - 02:47

Back in 2020, the European Parliament published a briefing paper which set out “growing concern that the citizens, businesses and Member States of the European Union (EU) are gradually losing control over their data, over their capacity for innovation, and over their ability to shape and enforce legislation in the digital environment.”

At the heart of the matter is the domination that the likes of Amazon, Microsoft and Google have established over the European cloud computing market. One of the effects of their success is that the region now faces significant challenges in ensuring data is subject to the laws and governance structures of the country or region in which it is collected, stored or processed.

For organizations based in the EU hosting their data with providers based elsewhere, this raises serious questions about who ultimately has jurisdiction over that data and whether it can be governed by foreign legal frameworks beyond their control.

Let’s also be clear - the market-leading hyperscalers offer efficiency, scale and a whole host of other compelling advantages. They are all highly innovative, trusted providers that have transformed how businesses operate and have enabled extraordinary digital progress at speed and scale.

Thousands of European organizations rely on – and will continue to rely on – these brands for good reason. At the same time, however, it’s also vital that organizations understand that where they store their data, and under whose jurisdiction it falls, carries implications far beyond IT.

Whether viewed through a political, economic, or operational lens, data sovereignty matters. In some scenarios, it can shape access rights, trigger regulatory obligations or even expose organizations to geopolitical risk.

For example, laws in one country could compel a cloud provider to share data stored in another, an issue that’s been flagged in relation to executive powers and national security mandates at the disposal of foreign governments.

So, how is the landscape changing? Firstly, there are a number of promising European cloud initiatives, including regulatory developments, sovereign cloud frameworks and consortium-based models designed to create local alternatives to the all-in-one hyperscaler stack. However, these solutions are not without their challenges, with cost, fragmentation, scalability and adoption hurdles potentially standing in the way of an effective regional system.

For many organizations, a full switch isn’t viable due to issues such as existing investment commitments, operational complexity and the simple absence of mature, like-for-like alternatives that can match the scale and capabilities of established providers.

The US hyperscalers are also getting in on the act. This time last year, for instance, AWS announced plans to invest €7.8 billion in the AWS European Sovereign Cloud, an initiative which the company says reinforces its “commitment to offer customers the most advanced set of sovereignty controls, privacy safeguards, and security features available in the cloud.”

How this plays out remains to be seen, but whatever route organizations favor in the pursuit of data sovereignty, access to choice and autonomy over where their data is stored is likely to grow in importance as time passes.

The role of intelligent data management

For European organizations in this position, and there are many, the good news is that they don’t need to wait for systemic changes in the cloud landscape to start regaining control. Data sovereignty can be addressed today through the implementation of modern, vendor-neutral data management technologies, which enable them to visualize their entire data landscape and apply consistent policies across disparate storage environments.

Armed with a unified view of their data across cloud and on-premises environments, organizations can then make informed choices about what data to store, where to store it and how best to safeguard it.

The obvious starting point is visibility because, without knowing what data exists, where it resides and how it moves, businesses are flying blind. This is particularly significant and challenging in contemporary multi-cloud and hybrid-cloud environments, where data can be extremely fragmented, often with little consistency or oversight.

But by establishing a clear picture of all data assets, classifying them based on sensitivity and business value and ensuring local copies of critical data are always available, IT management can also enforce policies that align with governance and regulatory requirements.
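The classify-then-enforce step described above can be sketched in a few lines. This is a minimal illustration rather than any particular vendor's product; the asset names, sensitivity labels and region identifiers are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    sensitivity: str       # e.g. "public", "internal", "restricted"
    stored_region: str     # where the provider actually hosts the data
    required_region: str   # where policy says it must reside

def residency_violations(assets):
    """Flag restricted assets hosted outside their mandated region."""
    return [a for a in assets
            if a.sensitivity == "restricted" and a.stored_region != a.required_region]

assets = [
    DataAsset("crm-backup", "restricted", "us-east", "eu-west"),
    DataAsset("marketing-site", "public", "us-east", "eu-west"),
]
print([a.name for a in residency_violations(assets)])  # ['crm-backup']
```

In practice the inventory would be fed by discovery tooling rather than hard-coded, but the policy check itself stays this simple once classification is in place.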

In the end, this is not just a technology and geography issue; it goes much deeper to cover everything from business resilience and compliance to control and, ultimately, customer trust. Europe’s digital future will depend not only on where its data lives, but on who can access it, govern it and protect it.

As the European Parliament data sovereignty briefing concludes, “Building a secure pan-European data framework and adopting new standards and practices to provide trustworthy and controllable digital products and services would ensure a safer digital environment.”


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

The AI Triple Threat: mitigating the dangers of AI adoption with identity security

TechRadar News - Tue, 08/19/2025 - 02:35

A succession of recent high‑profile breaches has shown that the UK remains vulnerable to ever‑more advanced cyber threats. This exposure is intensifying as artificial intelligence becomes increasingly embedded in everyday business operations. AI tools have become essential for organizations seeking to deliver value and maintain competitiveness. Yet AI's benefits also bring risks that far too many organizations have yet to fully mitigate.

CyberArk’s latest research identifies AI as a complex “triple threat”. It is being leveraged as an attack vector, utilized defensively, and—perhaps most worryingly—creating significant new security gaps. In light of this evolving threat landscape, organizations must position identity security at the heart of their AI strategies if they wish to build future resilience.

AI: Same threats, new problems

AI has raised the bar for traditional attack methods. Phishing, which remains the most common entry point for identity breaches, has evolved beyond poorly worded emails to sophisticated scams that use AI-generated deepfakes, cloned voices and authentic-looking messages.

Nearly 70% of UK organizations fell victim to successful phishing attacks last year, with more than a third reporting multiple incidents. This shows that even robust training and technical safeguards can be circumvented when attackers use AI to mimic trusted contacts and exploit human psychology.

It is no longer enough to assume that conventional perimeter defenses can stop such threats. Organizations must adapt by layering in stronger identity verification processes and building a culture where suspicious activity is flagged and investigated without hesitation.

Using AI in defense

While AI is strengthening attackers’ capabilities, it is also transforming how defenders operate. Nearly nine in ten UK organizations now use AI and large language models to monitor network behavior, identify emerging threats and automate repetitive tasks that previously consumed hours of manual effort. In many security operations centers, AI has become an essential force multiplier that allows small teams to handle a vast and growing workload.

Almost half of organizations expect AI to be the biggest driver of cybersecurity spending in the coming year. This reflects a growing recognition that human analysts alone cannot keep up with the scale and speed of modern attacks. However, AI-powered defense must be deployed responsibly.

Over-reliance without sufficient human oversight can lead to blind spots and false confidence. Security teams must ensure AI tools are trained on high-quality data, tested rigorously, and reviewed regularly to avoid drift or unexpected bias.

AI is broadening the scope of attacks

The third element of the triple threat is the rapid growth in machine identities and AI agents. As employees embrace new AI tools to boost productivity, the number of non-human accounts accessing critical data has surged, now outnumbering human users by a ratio of 100 to one.

Many of these machine identities have elevated privileges but operate with minimal governance. Weak credentials, shared secrets and inconsistent lifecycle management create opportunities for attackers to compromise systems with little resistance.

Shadow AI is compounding this challenge. Research indicates that over a third of employees admit to using unauthorized AI applications, often to automate tasks or generate content quickly. While the productivity gains are real, the security consequences are significant. Unapproved tools can process confidential data without proper safeguards, leaving organizations exposed to data leaks, regulatory non-compliance and reputational damage.

Addressing this risk

Addressing this risk requires more than technical controls alone. Organizations should establish clear policies on acceptable AI use, educate staff on the risks of bypassing security, and provide approved, secure alternatives that meet business needs without creating hidden vulnerabilities.

Positioning identity security at the heart of digital strategy

Securing AI‑driven enterprises requires embedding identity security at every layer of an organization's digital strategy. That means ensuring real‑time visibility of all identities - human, machine or AI agent - applying least privilege consistently, and continuously monitoring for unusual access behavior that may signal a breach.

Forward‑facing organizations are already updating their access and identity management frameworks to meet AI’s distinct demands. This entails adopting just‑in‑time access for machine identities, monitoring privilege escalation, and treating all AI agents with the same scrutiny as human accounts.
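As a rough sketch of the just-in-time idea: instead of holding a standing privilege, a machine identity receives a scoped grant with an expiry, and every access check re-validates both scope and time. The identity names, scopes and TTL here are invented for illustration:

```python
import time

# Illustrative in-memory grant store: identity -> (scope, expiry timestamp)
GRANTS = {}

def grant_jit(identity: str, scope: str, ttl_seconds: int, now=None):
    """Issue a short-lived, scoped grant instead of a standing privilege."""
    now = time.time() if now is None else now
    GRANTS[identity] = (scope, now + ttl_seconds)

def is_authorized(identity: str, scope: str, now=None) -> bool:
    """Re-check scope and expiry on every access attempt."""
    now = time.time() if now is None else now
    granted = GRANTS.get(identity)
    return granted is not None and granted[0] == scope and now < granted[1]

grant_jit("build-agent-7", "deploy:staging", ttl_seconds=300, now=1000.0)
print(is_authorized("build-agent-7", "deploy:staging", now=1100.0))  # True
print(is_authorized("build-agent-7", "deploy:staging", now=1400.0))  # False - expired
```

A production system would back this with a secrets manager and audit logging, but the core discipline is the same: privileges that expire by default.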

AI offers tremendous value for organizations that embrace it responsibly, but without robust identity security, that value can swiftly become a liability. The businesses that thrive will be those that recognize resilience isn't optional - it's the foundation for long‑term growth.

At a time when businesses and their adversaries are both empowered by AI, one principle stands firm: securing AI begins and ends with securing identity.



Categories: Technology

How algorithms are changing the way we speak

NPR News Headlines - Tue, 08/19/2025 - 02:00

Social media has birthed an entire lexicon replicated by millions online — even if these words don’t actually mean skibidi. On today’s show, we talk to author Adam Aleksic about how TikTok and Instagram's engagement metrics, and viral memes, are rewiring our brains and transforming language at warp speed.

Adam Aleksic’s book is Algospeak: How Social Media is Transforming the Future of Language 

Related episodes: 
What we’re reading on the beach this summer  

For sponsor-free episodes of The Indicator from Planet Money, subscribe to Planet Money+ via Apple Podcasts or at plus.npr.org.  

Fact-checking by Sierra Juarez. Music by Drop Electric. Find us: TikTok, Instagram, Facebook, Newsletter

(Image credit: Olivier Morin)

Categories: News

A record number of aid workers were killed in global hotspots in 2024, the U.N. says

NPR News Headlines - Tue, 08/19/2025 - 01:51

The Aid Worker Security Database, which has compiled reports since 1997, said the number of killings rose from 293 in 2023 to 383 in 2024, including over 180 in Gaza.

(Image credit: Mariam Dagga)

Categories: News

AI’s invisible labor is tech’s biggest blind spot

TechRadar News - Tue, 08/19/2025 - 01:51

Behind the polished responses of AI platforms lies a lesser-known truth: workers in emerging economies across Africa, Latin America, and Asia were paid less than $2 per hour to sift through graphic and traumatic content to help train these platforms' safety systems. This labor practice has sparked global concern, lawsuits, and calls for ethical reform in the AI industry.

Artificial intelligence is the crown jewel of modern enterprise – a sector exceeding $500 billion, reshaping everything from banking to healthcare. However, the truth is that behind every chatbot, image generator, and recommendation engine are armies of human workers who perform tasks that AI can’t handle, including labeling data, filtering toxic content, and correcting machine errors.

Without them, the algorithms would collapse, and the irony is hard to miss. AI is at risk of becoming the digital frontier for labor malpractice and a new form of unethical conduct. If businesses and innovators don’t act, AI’s promise could unravel under the weight of its own contradictions.

The Invisible Labor Fueling AI’s Rise

It is tempting to believe that AI systems are self-sufficient, refining themselves through endless feedback loops of data and computation. The reality, however, is far more complex. AI systems don’t clean or train themselves. The scale of this hidden labor crisis is staggering. Major gig platforms employ millions to annotate data, correct model errors, and sift through violent or explicit content.

These gig workers, recruited from countries in the Global South such as Kenya, India, and the Philippines, prop up the $8 billion industry that powers the AI revolution. These workers are often highly educated but take on these jobs because better opportunities are scarce. They sign up believing they will contribute to cutting-edge technology, only to find themselves trapped in digital piecework. Pay is low, mental health support is rare, and job security is virtually nonexistent.

Why haven’t businesses fixed this? Because it’s cheap and easy to ignore. However, it comes with growing risks. Consumers and regulators are already beginning to question the ethics of AI supply chains. The European Union’s AI Act and similar efforts globally are setting new expectations for transparency, fairness, and accountability. Companies that fail to address the human cost of AI could face reputational damage, regulatory fines, or worse – a collapse of trust in the systems they have built.

Web3 might be the overlooked fix AI desperately needs

The promise of Web3 – decentralization, transparency, and user empowerment – directly addresses many of the failings in AI's hidden labor ecosystem. Yet these tools remain largely untapped by enterprise AI, which is clearly a missed opportunity.

Decentralized Autonomous Organizations (DAOs) offer a way to embed genuine transparency and fairness into AI’s supply chains. Unlike traditional gig platforms, where decisions about pay, task selection, or working conditions are made behind closed doors, DAOs make every decision transparent and visible.

Every vote cast, every rule change, and every payment to a contributor is stored on a public ledger, creating an auditable trail that cannot be altered after the fact. This means that anyone, from participants to external auditors, can trace who made the decisions on ‘what, when, and how’. Immutable payment records eliminate the disputes that plague opaque gig work, while public governance logs ensure that power isn’t concentrated in the hands of a few.

Real-world examples are beginning to show what’s possible. Some decentralized employment platforms enable independent workers to collectively manage their pay structures and benefits, with all transactions and decisions recorded on-chain for complete transparency.

Others apply similar principles to research and contributor projects, where rules around compensation and project selection are codified in smart contracts, leaving little room for hidden decisions or unfair practices.

These models exist and are effective, but the reality is that enterprise AI has shown little interest in adopting them so far.

The Limits and Urgency of Change

Many enterprise AI leaders cling to the idea that ethical supply chains are simply too expensive - an unfortunate cost that doesn’t fit the margins demanded by investors or customers. But this is a myth that Web3 technologies can finally dismantle.

Web3’s value isn’t limited to ethics; it offers efficiency gains that traditional systems struggle to match. Smart contracts automate payments and bonuses, reducing the need for large administrative teams and eliminating intermediaries that add cost without providing value. Immutable blockchain records mean payment disputes, task verifications, and contract enforcement happen with far less friction, saving time, legal costs, and operational headaches.
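To make the "immutable records" point concrete, here is a toy hash-chained ledger in Python. This is not a real blockchain or smart contract - just an illustration of why tampering with one payment record invalidates every record after it, which is what makes disputes easy to settle:

```python
import hashlib
import json

class Ledger:
    """Toy append-only ledger: each record's hash covers the previous hash,
    so altering any earlier payment breaks verification of the whole chain."""
    def __init__(self):
        self.records = []

    def append(self, payload: dict) -> str:
        prev = self.records[-1]["hash"] if self.records else "0" * 64
        body = json.dumps({"prev": prev, **payload}, sort_keys=True)
        h = hashlib.sha256(body.encode()).hexdigest()
        self.records.append({"prev": prev, "payload": payload, "hash": h})
        return h

    def verify(self) -> bool:
        prev = "0" * 64
        for r in self.records:
            body = json.dumps({"prev": prev, **r["payload"]}, sort_keys=True)
            if hashlib.sha256(body.encode()).hexdigest() != r["hash"]:
                return False
            prev = r["hash"]
        return True

ledger = Ledger()
ledger.append({"worker": "annotator-42", "task": "label-batch-9", "amount_usd": 12.50})
print(ledger.verify())  # True
ledger.records[0]["payload"]["amount_usd"] = 1.00  # attempt to rewrite a payment
print(ledger.verify())  # False - the tamper is detectable
```

Real on-chain systems add distributed consensus and signatures on top, but the tamper-evidence property the article relies on is exactly this hash-chaining.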

However, Web3 isn’t flawless. Decentralized systems can replicate biases if data or governance isn’t audited. Transparency alone doesn’t guarantee explainability in AI decisions. And DAOs risk elitism if influence skews toward a wealthy few.

Clearly, the real risk is in doing nothing. The companies that lead on ethical AI supply chains will not only avoid the coming backlash but also earn the trust of their customers, regulators, and employees. Those who continue to look the other way will eventually find that the cost of cleaning up the mess is far higher than the cost of reforming now.

Web3 offers the clearest path to cleaning up AI’s hidden mess. But the window for voluntary reform is closing fast. Enterprises can either lead this change or be dragged into it when the backlash hits.

The choice won’t stay theirs for long.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

DORA: six months into a resilience revolution

TechRadar News - Tue, 08/19/2025 - 01:37

Bringing DORA into effect involved a great deal of discussion, planning, cost, and people management for everyone in the financial sector.

In January 2025, Rubrik Zero Labs' research reported that the strains on businesses were not always obvious. In addition to costing nearly half (47%) of businesses over a million euros, preparation took a human toll: 79% of employees reported an impact on mental health, and 58% of CISOs reported increased stress.

It was no secret, though, that the work of preparing a business for DORA was always going to be significant. DORA's five pillars of cybersecurity are ICT risk management, incident reporting, digital operational resilience testing, third-party risk management, and information sharing - a major undertaking and expense for any business.

Integrating DORA

In the last six months, financial institutions have had to pivot from preparing for DORA to actively integrating its requirements into their daily operations. The initial months have seen a strong emphasis on solidifying ICT risk management frameworks, ensuring they are comprehensive, well-documented, and continuously monitored. The tasks involve mapping critical IT assets, identifying vulnerabilities, and establishing clear risk appetite statements.

A significant shift has been observed in incident reporting. Firms are currently facing the challenge of meeting strict requirements for classifying, notifying, and providing detailed reports on major ICT-related incidents to competent authorities within tight deadlines. These requirements have necessitated refining internal processes, improving monitoring tools, and establishing clear communication channels to ensure the timely and accurate flow of information.

Perhaps one of the most challenging areas has been digital operational resilience testing, particularly the highly prescriptive Threat-Led Penetration Testing (TLPT). While many firms had planned for these tests, the post-go-live period has seen the initiation and execution of complex simulations that mimic real-world attacks. These tests are not just about finding vulnerabilities but assessing the institution's ability to withstand and recover from severe disruptions, pushing internal teams and third-party testers to their limits.

Last but not least, third-party risk management has moved from a siloed function to a central focus. DORA mandates that financial entities oversee the entire lifecycle of their reliance on critical ICT third-party providers, which includes meticulous due diligence, robust contractual arrangements, and ongoing monitoring of their third parties' resilience.

Many institutions have been reassessing their entire vendor landscape, identifying critical dependencies, and, in some cases, diversifying providers to mitigate concentration risk. The regulatory spotlight on critical third parties means firms are demanding greater transparency and assurance from their suppliers than ever before.

More broadly, the sheer breadth of the regulation has meant financial institutions have seen DORA touch almost every aspect of their businesses - from IT and cybersecurity to legal, compliance, risk, and even business operations. The human element is felt too, in upskilling and training staff, expanding roles and responsibilities, and increasing workload.

Do you feel ready for when an attack does take place?

After the work is undertaken to bring your organization in line with DORA or other cybersecurity standards or regulations, the practical question to ask yourself is: 'Do I feel resilient enough to bounce back from an attack and maintain business continuity in its wake?'

  • Putting the process in place helps, but have you road-tested it within your organization?
  • Have you thought about every eventuality? Or at least pre-planned for those you can?
  • What new risks can you identify now that you have assessed the gaps and resolved your security ecosystem?

Inevitably, it's not a case of if an attack will take place, but when. Working through regulations supports your journey to cyber resilience, but without honesty, practice and continual testing, your defenses will fail too.

What does the future look like for DORA? And what does this mean on an international stage?

The first thing to realize is that DORA is one of many cybersecurity regulations that have come into place in recent months and years. Six months after implementation is very early, and as organizational frameworks mature, businesses will continue to invest, improve and adapt their work to maintain what is in place.

Costs, while substantial, are viewed not as mere compliance burdens but as strategic investments. The financial and reputational damage from a major cyber incident—potentially reaching into the hundreds of millions or even billions of euros in a severe scenario, not to mention regulatory fines—far outweighs the upfront investment in DORA compliance.

DORA's principles of robust ICT governance, rigorous testing, and vigilant third-party oversight will be critical for navigating the ever-evolving cyber threat landscape. By deeply embedding these practices into their operational DNA, financial institutions can not only meet regulatory obligations but also fortify their defenses, ensuring business continuity and maintaining customer trust in an increasingly volatile digital age.



Categories: Technology

Sigma reveals super-bright lens for astrophotography fans – plus a new world-first for pro sports shooters

TechRadar News - Tue, 08/19/2025 - 00:00
  • Sigma unveils an ultra-wide 12mm f/1.4 lens for APS-C mirrorless cameras
  • It's available for Sony, Canon and Fujifilm cameras for £519 (US / AU pricing TBC)
  • 200mm F2 telephoto prime also revealed, part of Sigma's pro Sports line

Good Lord, Sigma's lens-making department is on a roll. Following its versatile 18-40mm F1.8 zoom and award-winning 300-600mm F4 telephoto monster, it has unveiled two high-quality primes: a 12mm F1.4 for APS-C cameras, plus a full-frame 200mm F2.

The 12mm lens is the fifth and widest in a line of f/1.4 primes for APS-C cameras, following 16mm, 23mm, 30mm and 56mm options. I've tested all four of those existing f/1.4 primes with a Canon mirrorless camera, and they pack superb optical performance into a lightweight and super-compact form factor.

There's still a clear need for the new 12mm lens, though, with its equivalent 18mm focal length in full-frame terms filling a niche for astrophotography, especially with its super-bright f/1.4 aperture. The previous widest f/1.4 lens in the range has an equivalent 24mm focal length, which won't be wide enough for many keen astrophotographers.
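Those equivalences follow from the APS-C crop factor - roughly 1.5× for Sony and Fujifilm bodies (Canon's APS-C sensors are closer to 1.6×, so the numbers shift slightly there). A quick sketch of the arithmetic:

```python
def full_frame_equivalent(focal_mm: float, crop_factor: float = 1.5) -> float:
    """Multiply a lens's focal length by the sensor's crop factor
    to get the full-frame equivalent field of view."""
    return focal_mm * crop_factor

print(full_frame_equivalent(12))  # 18.0 - the new ultra-wide prime
print(full_frame_equivalent(16))  # 24.0 - previously the widest f/1.4 in the line
```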

Sigma has made the ultra-wide prime for Sony E, Canon RF and Fujifilm X-mount cameras, and its list price is £519 (US and Australia pricing TBC). We don't yet know if it will be made for other lens mounts such as Nikon Z or L-mount, but judging from previous launches, I'd bet that it will.

The new ultra-wide prime isn't the only news from Sigma today. It has also unveiled the world's first 200mm lens with bright f/2 aperture, available for Sony E and L-mount cameras.

As part of Sigma's Sports line for pros, the 200mm F2 shares much of the same DNA as the 300-600mm super telephoto zoom; it features superb optics, a high-speed autofocus response, 6.5EV optical image stabilization using Sigma's OS2 algorithm, and a dust- and splash-resistant build. It costs £2,999 (again, US and Australia pricing TBC).

The sales start date for both lenses is set for September 4.

(Image credit: Sigma)

Shoot for the stars

Sigma's new 12mm F1.4 lens will no doubt appeal to astrophotographers who shoot with an APS-C mirrorless camera, such as the Sony A6700, Canon EOS R7 or Fujifilm X-T5. However, with its compact build, wide perspective, responsive autofocus and minimal focus breathing, it also fits the bill for a different kind of star – vlogging.

It weighs just 7.9oz / 225g and measures 2.7in / 68mm in length, making it a compact pairing with any compatible APS-C camera. Plus it's dust- and splash-resistant, so there's no problem heading out into challenging terrain or on cold nights.

For optical engineering, the 200mm F2 pro prime is arguably all the more impressive, being the brightest 200mm lens on the market. It's billed for telephoto portraits and indoor sports, especially given its bright f/2 aperture and the compression effect achieved by the telephoto focal length.

It's a weightier affair than the 12mm F1.4, tipping the scales at 64.2oz / 1,820g and measuring 7.9in / 201mm in length. That's the price you pay for the bright f/2 aperture at such a telephoto focal length, versus a 70-200mm zoom lens with a maximum f/2.8 aperture.

Judging from my experience with previous Sigma lenses, I expect both of these unique new optics to deliver high-quality images that wouldn't otherwise be possible, given the world-first features on offer - and many APS-C shooters could finally have the astrophotography lens they've been asking for. For further information, do check out the Sigma website.

Categories: Technology

Today's NYT Mini Crossword Answers for Tuesday, Aug. 19

CNET News - Mon, 08/18/2025 - 22:26
Here are the answers for The New York Times Mini Crossword for Aug. 19.
Categories: Technology

New iOS 26 Public Beta 4 Gets Us One Step Closer to the Final iPhone Release

CNET News - Mon, 08/18/2025 - 19:34
The final version of iOS 26 will drop next month. But you can try an early version of the software on an iPhone right now.
Categories: Technology

Google Translate Reportedly Adding AI Integration, Duolingo-Like Game Elements

CNET News - Mon, 08/18/2025 - 18:02
This update could spell trouble for its translation competitor, the internet's favorite green owl.
Categories: Technology

I was about to upgrade to Windows 11, but I've decided to stick with Windows 10 – here's why

TechRadar News - Mon, 08/18/2025 - 18:00

It's confession time. I've been procrastinating over upgrading to Windows 11 on my main PC, even though I fully intended to move to the newer operating system, away from Windows 10, this year. (Actually, the original plan was to switch early this year).

So yes - I've let things slide, at least when it comes to my main PC, anyway. In my defense, I did upgrade my secondary machine - a Microsoft Surface Pro laptop - to Windows 11. Technically, then, I have made the leap to Windows 11 - in a partial manner - and I've found the latest incarnation of Microsoft's desktop OS just fine on that 2-in-1. There are no complaints there (well, mostly, and I'll come back to the one niggle shortly).

And while I was fully planning to migrate to Windows 11 on my work computer (I call it that, but I game on this PC too) as mentioned, there are some good reasons why I've put that plan on ice - for now.

To be clear, I haven't changed my mind about upgrading to Windows 11 - I've only adjusted the timeframe involved - even as Microsoft busily reminds all of us Windows 10 folks that we only have two months of support (and vital security updates) left (as Bleeping Computer noticed). This is the latest step in a campaign of nudges to get people shifted over - Microsoft has even sent emails directly to Windows 10 users in the past, urging them to upgrade to Windows 11.

So, what are my reasons for deciding against taking the plunge with the newer OS? Well, there are a few of them, so let's dive in and explore.

Freebie extension

The first reason - and my main one, really - is that a couple of months back, Microsoft switched tack and announced that there would be a free way to get extended updates for Windows 10.

In case you missed it entirely, the Extended Security Updates (ESU) program was originally revealed for consumers with a $30 price tag (or equivalent in your currency). Then late in June, Microsoft brought forth a freebie option - well, in terms of the cash cost anyway: the new choice was to get extended support for a year if you sync your PC settings to OneDrive.

As I've said before, I don't think this is too big a deal for most people. It's not like you have to sync and store your personal data with Microsoft's cloud storage, just your settings. In my case, I do this anyway, so there's literally no cost for me to get an extra year of support. So, when this spin on the ESU was announced, it immediately took all the heat out of my (delayed) quest to upgrade my main machine to Windows 11.

And since then, it's only become clearer to me that there's no rush at all now. While I didn't want to fork out actual money to stay on Windows 10, now I don't have to - and with effectively free extended support, I have until October 2026 to shift over to Windows 11. And frankly, there aren't really any pressing reasons to upgrade anyway…

(Image credit: Marjan Apostolovic / Shutterstock)

Performance wrinkles

What's also become clearer to me as this year has progressed (with my upgrade heels dragging) is that Windows 11 is somewhat wonky in some elements of its performance. When using the operating system on my laptop, I've experienced sluggishness with File Explorer, which is pretty disappointing. This is likely the result of a lot of ongoing work under the hood in Windows 11, and Microsoft has even admitted that the performance situation could be better - and that it's working to improve it.

In fairness, on the whole, my experience with Windows 11 on my laptop - and my wife's PC, which also has the newer operating system - is that it's actually pretty snappy overall. Indeed, I'd say it's more responsive than Windows 10, but not by enough to have me rushing for that upgrade button.

Of course, performance levels on my other PCs don't guarantee that Windows 11 will feel just as snappy on my Windows 10 rig, either. That's the thing about upgrades: they can be unpredictable, and outcomes may vary on different hardware. And there are folks out there who are firing some considerable flak at Microsoft for Windows 11 being slower in general (not just File Explorer) - so that does leave a little room for doubt to creep in.

(Image credit: Shutterstock)

Bugs and stability

Then we come onto the bugs. The fact that Windows 11 24H2 has been very glitchy (and generally weird) in many respects doesn't instill confidence, and for me, this was also a major pause for thought (in the past, as well as now). Case in point: I've just written an article about a newly reported bug in Windows 11 that is seemingly breaking SSDs - and while it's still to be confirmed, and we certainly shouldn't jump to conclusions, the most recent August update appears to be the cause.

Whether that's true or not, we shall see in time, but the fact is that it's still something for those running Windows 11 to worry about. (Think twice before embarking on any big installations, as I discussed earlier.)

Which got me thinking: if I stay on Windows 10, as I'd already been leaning towards anyway, I'm going to receive nothing but plain security patches over the next year and a bit. Just fixes for vulnerabilities, and no tinkering with the operating system whatsoever - meaning less chance of breaking stuff.

The upshot is that Windows 10 is likely to run a lot more stably than Windows 11, which is going to be witnessing a steady stream of new features as this year turns into the next, and 2026 rolls onwards.

(Image credit: Shutterstock / lassedesignen)

Risk averse

I'm risk-averse in general – and particularly with computers – so it just makes sense to stick with Windows 10, and not twist to install Windows 11, for the time being. It won't cost me anything to do so, I know how Windows 10 performs – and it runs just fine for me, it's not sluggish at all, even if it may not be quite as snappy as my wife's desktop PC on Windows 11 – and I know it'll be more reliable in terms of what will happen with updates.

Don't get me wrong, though: I will be upgrading to Windows 11 next year. Indeed, I might make the leap straight away if a tempting new feature does arrive for Windows 11 (not that there's anything in particular on the horizon yet). But for now, I'll play it safe with Windows 10, as that just seems like the best course of action on balance.

You might also like
Categories: Technology

Trump wants to stop states from voting by mail and using voting machines

NPR News Headlines - Mon, 08/18/2025 - 17:23

But legal experts say he lacks the constitutional authority to do so.

(Image credit: Mark Makela)

Categories: News

Today's Wordle Hints, Answer and Help for Aug. 19, #1522

CNET News - Mon, 08/18/2025 - 17:00
Here are hints and the answer for today's Wordle for Aug. 19, No. 1,522.
Categories: Technology

Today's NYT Connections Hints, Answers and Help for Aug. 19, #800

CNET News - Mon, 08/18/2025 - 17:00
Here are some hints and the answers for the NYT Connections puzzle for Aug. 19, #800.
Categories: Technology

Everything leaving Hulu in September 2025 – don't miss streaming these 20 movies before they disappear

TechRadar News - Mon, 08/18/2025 - 17:00

September is only a few weeks away, and that means Hulu is gearing up to clear out a handful of movies for its new arrivals. But don't worry, there are only 18 movies and two documentaries set to get the chop, so it's safe to say that Hulu's collection of best TV shows is staying put.

Compared to other streaming services - which have a tendency to remove major blockbusters more frequently - Hulu is far more likely to remove lesser-known titles that aren't as popular as its star movies and shows. Next month is no exception, but it's always worth checking them out, as one of your favorite hidden gems could sneak in there - you never know.

Everything leaving Hulu in September 2025

Leaving on September 1

Unplugging (movie)

Leaving on September 2

Taurus (movie)

Leaving on September 7

Petite Maman (movie)
Racing Extinction (documentary)
The Cove (movie)

Leaving on September 9

Corsage (movie)
The Last Victim (movie)

Leaving on September 12

Fool's Paradise (movie)
Lost Girls (movie)
Remember Me: The Mahalia Jackson Story (movie)

Leaving on September 17

Bad Axe (documentary)
Dakota (movie)
Somewhere in Queens (movie)

Leaving on September 22

The Almond and the Seahorse (movie)

Leaving on September 23

Dinner in America (movie)

Leaving on September 25

A Chiara (movie)
Private Property (movie)

Leaving on September 30

After Midnight (movie)
Charlotte (movie)
The Wheel (movie)

You might also like
Categories: Technology

Today's NYT Strands Hints, Answers and Help for Aug. 19, #534

CNET News - Mon, 08/18/2025 - 16:58
Here are hints and answers for the NYT Strands puzzle for Aug. 19, No. 534.
Categories: Technology

Finally! Future SSDs are set to be more energy efficient and more secure thanks to a new set of guidelines

TechRadar News - Mon, 08/18/2025 - 16:52
  • NVMe 2.3 sets new ground rules which could alter storage behavior across multiple environments
  • Power monitoring shifts focus toward sustainability and careful control in both enterprise and consumer drives
  • Energy capping functions might prevent system stress in older setups that struggle with power draw

The NVM Express group has confirmed the release of NVMe 2.3, a revision that introduces 11 updates across storage command sets and transport protocols.

The changes touch NVM, Zoned Namespace, Key Value, Local Memory, and Compute, while also extending refinements to PCIe, RDMA, and TCP.

Alongside this, the NVMe Management Interface advances to version 2.1, and NVMe Boot moves to version 1.3.

Shifts in power control and monitoring

NVM Express says the purpose of this upgrade is to make solid-state drives more reliable, flexible, and energy-conscious.

In terms of power management, the new Power Limit Config function allows administrators to cap energy draw from an NVMe device.

This can prevent strain in older servers or in setups where consumption needs to be tightly monitored.

In addition, a Self-Reported Drive Power feature lets storage devices reveal usage levels in real time or across longer intervals.

Such reporting may help in capacity planning, early fault detection, and in keeping overall consumption within sustainable levels.

These features may be useful, but their practical benefit will rest on whether manufacturers implement them consistently across both the largest SSD models and portable external SSD units aimed at consumers.
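
To make the two power-management ideas above concrete, here's a minimal Python sketch of the behavior the spec describes - capping draw, and self-reporting usage over time. Everything here is invented for illustration (the Nvme23Device class, set_power_limit, report_power); real drives expose these functions through NVMe admin commands, not a Python API.

```python
# A minimal, hypothetical model of the two power features described above.
# Real drives expose these via NVMe admin commands, not a Python API.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Nvme23Device:
    power_limit_w: Optional[float] = None   # Power Limit Config: cap in watts
    _samples: list = field(default_factory=list)

    def set_power_limit(self, watts: float) -> None:
        """Administrator caps the device's energy draw."""
        self.power_limit_w = watts

    def record_draw(self, watts: float) -> float:
        """Device-side enforcement: draw above the cap is clamped."""
        if self.power_limit_w is not None:
            watts = min(watts, self.power_limit_w)
        self._samples.append(watts)
        return watts

    def report_power(self) -> dict:
        """Self-Reported Drive Power: instantaneous and averaged usage."""
        if not self._samples:
            return {"current_w": 0.0, "average_w": 0.0}
        return {
            "current_w": self._samples[-1],
            "average_w": sum(self._samples) / len(self._samples),
        }

dev = Nvme23Device()
dev.set_power_limit(8.0)     # cap a drive in an older server at 8 W
dev.record_draw(11.5)        # a burst above the cap is clamped to 8 W
dev.record_draw(5.0)
print(dev.report_power())    # {'current_w': 5.0, 'average_w': 6.5}
```

The averaged figure is the kind of number that could feed the capacity planning and early fault detection the spec has in mind.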

Security changes also appear in the specification. Sanitize Per Namespace makes it possible to erase a defined portion of the drive while leaving the rest intact.

This may help in environments where parts of a drive are being retired or reassigned while other data remains active.
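
A purely illustrative model of what per-namespace sanitize semantics look like - erasing one namespace while the others keep their data. The SanitizableDrive class is a stand-in I've made up for the sketch, not a real NVMe interface:

```python
# Illustrative model of Sanitize Per Namespace: erase one namespace
# while data in the others stays intact. SanitizableDrive is a stand-in,
# not a real NVMe interface.

class SanitizableDrive:
    def __init__(self, namespaces):
        # nsid -> raw bytes currently stored in that namespace
        self.namespaces = dict(namespaces)

    def sanitize_namespace(self, nsid: int) -> None:
        """Erase a single namespace; the rest of the drive is untouched."""
        size = len(self.namespaces[nsid])
        self.namespaces[nsid] = b"\x00" * size   # data is unrecoverable

drive = SanitizableDrive({1: b"payroll-db", 2: b"web-cache"})
drive.sanitize_namespace(2)        # retire namespace 2 only
print(drive.namespaces[1])         # b'payroll-db' - still intact
print(drive.namespaces[2])         # zeroed out - wiped
```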

Another addition, Configurable Device Personality, lets an SSD shift operating modes depending on requirements, such as favoring speed or conserving power.

This could reduce the complexity of managing storage arrays, yet questions remain about how often real-world deployments will need such tuning and whether vendors will expose this level of control to users outside enterprise settings.

Rapid Path Failure Recovery is another headline change. When the connection between the host and the storage subsystem falters, the system can now redirect commands through an alternate path instead of failing outright.

The goal is reduced downtime and fewer errors from repeated requests.

For organizations running large clusters or managing the best rugged SSD options in field conditions, this could mean greater resilience.
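
The failover idea can be sketched roughly like this: if the preferred path to the storage subsystem fails, the same command is retried down an alternate path rather than surfacing an error. The PathDown exception and the path functions below are invented scaffolding, not a real NVMe transport stack:

```python
# Hypothetical sketch of multipath failover: redirect a command down an
# alternate path instead of failing outright when one path falters.

class PathDown(Exception):
    """Raised when a transport path (PCIe, RDMA, TCP...) is unavailable."""

def submit_with_failover(command, paths):
    """Try each path in order; return the first successful completion."""
    last_error = None
    for path in paths:
        try:
            return path(command)
        except PathDown as exc:
            last_error = exc          # this path faltered; try the next
    raise last_error                  # every path was exhausted

def broken_path(cmd):
    raise PathDown("link lost")

def healthy_path(cmd):
    return f"completed: {cmd}"

print(submit_with_failover("read LBA 0", [broken_path, healthy_path]))
# completed: read LBA 0
```

The host sees one completed command either way; the retry is the source of the "fewer errors from repeated requests" benefit.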

You might also like
Categories: Technology

An Aurora Is Hitting Monday Night Only and Will Be Visible in Over a Dozen States

CNET News - Mon, 08/18/2025 - 16:45
Increased solar flare activity over the last few days will cause the northern horizon to light up for many states.
Categories: Technology

The NYT just launched a new daily game – but it's no Wordle

TechRadar News - Mon, 08/18/2025 - 16:14

If you were hoping for yet another New York Times game to satisfy your word-puzzling itch, I'm sorry to disappoint you. Pips, the latest addition to The New York Times' growing games corral, is a word-free, domino-filled exercise in entertainment and occasional frustration.

Pips, which launched on Monday (August 18) online and on iOS and Android, is a departure from the global phenomenon Wordle and its cousin games, Connections and Strands (as well as competitors like Quordle). It has no letters, no word jumbles, nor even topic-driven associations.

The only playing pieces on Pips' tiny game board are five dominoes. Yes, just like the dominoes you played with as a kid, or are still using in real life with games like Tiles (no, not the same as the NYT's own "Tiles" game). Pips, by the way, are the dots on a domino.

What's the point of Pips?

(Image credit: Future)

The object of the game is to place all your dominoes on the board by fulfilling certain on-board requirements. These are set out via color-coding, which indicates which tiles are included in a condition, and small tags that define the condition (often a value) for the dominoes you can drop in one or more squares.

As you may recall, dominoes have values on them that range from zero (blank) to six pips. Each domino can have mismatched numbers or matching figures. It's these numbers and combos you'll need to pay close attention to as you try to work out each Pips logic puzzle.

This is the first New York Times game we can recall in recent times that allows for three levels of play per day – Easy, Medium, and Hard – and that lets you play all of them on the same day. As soon as you start playing, a timer starts.

Playing Pips

(Image credit: Future)

On the game board, you'll see labels like ">3" ("greater than three"), "<13" ("less than 13"), or "=" (indicating all squares feature the same number). In each case, the pips on the tiles have to meet those conditions, either individually or collectively.

Satisfying those conditions takes some non-linear thinking. Conditional colors across multiple tile squares do not necessarily mean that you'll be using both squares on one domino to meet those conditions.
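
For the curious, the three condition types can be modeled in a few lines of code. The region_satisfied function below is my own toy approximation of the rules, not anything from the NYT - it tests a colored region's condition against the pip counts of every domino half placed inside it:

```python
# Toy checker for the three Pips condition types mentioned above.
# region_satisfied is an invented approximation, not NYT code.

def region_satisfied(condition: str, pips: list) -> bool:
    if condition.startswith(">"):
        return sum(pips) > int(condition[1:])
    if condition.startswith("<"):
        return sum(pips) < int(condition[1:])
    if condition == "=":
        return len(set(pips)) <= 1    # every square shows the same number
    raise ValueError(f"unknown condition: {condition}")

print(region_satisfied(">3", [2, 2]))    # True: 2 + 2 = 4, which beats 3
print(region_satisfied("<13", [6, 6]))   # True: 12 stays under 13
print(region_satisfied("=", [5, 5, 5]))  # True: all squares match
```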

(Image credit: Future)

Sometimes you have to look for a condition or pair of conditions that can only be satisfied by one of the five tiles you've been given.

I started with the easy game and solved it in 31 seconds. Feeling pretty good about myself, I switched to Hard and found myself struggling for almost 10 minutes. Medium took me almost six minutes. Now, at least, I think I understand how to play the game and hope that I'll do better tomorrow.

Sharing your Pips

(Image credit: Future)

As with all other New York Times games, you can share your score with friends. Shares show the Pips game number, game level, a color code that I assume reflects your performance, and time. For 31 seconds, I got a green dot. For 5:56, I got a yellow, and for 9:26, I got a red dot.

Pips certainly works a different mental muscle than Wordle, but overall, it feels less succinct and maybe a little less fun. There's a glorious combination of erudition and simplicity to Wordle that I cherish. As a writer, I love word games like it, including Connections and Strands. I also think the gamification, results, and competitions that result from games like Wordle are more universally relatable.

I'm not even sure how you would write the daily guide for Pips. The level of complexity and thought might make each daily read a slog or, worse, highly frustrating.

We all use words every day to communicate. We have a sense that our aptitude for the English language is some measure of our intelligence, and whether or not that's true, we love to dunk on each other when we get Wordle in two or even one try.

There's no obvious Pips equivalent of "Got it in 2!" and for that reason, this is no Wordle, but that might just be OK.

You might also like
Categories: Technology

This iOS 26 upgrade could make your iPhone feel much faster, but there’s a catch

TechRadar News - Mon, 08/18/2025 - 16:00
  • iOS 26 public beta 3 improves app opening times
  • This makes using apps feel much quicker and snappier
  • The adjustment doesn’t make apps’ content load any faster, though

The third public beta of Apple’s iOS 26 has just landed, and it comes with a few minor tweaks and adjustments for your iPhone. But there’s one change that has really caught my eye, and it’s made my iPhone feel so much faster in the process.

Once you’ve downloaded iOS 26 public beta 3, you’ll likely notice the change almost immediately: now, opening and closing apps feels much snappier than before, with apps springing to life on your screen as soon as you tap them. Instead of waiting for them to fill your screen, they’re there almost instantly. The video further down this page shows the speed in action.

In objective terms, there’s probably only a minor difference in app opening and closing times between iOS 18 and the third beta of iOS 26 – maybe just a few milliseconds. Yet in practice, you absolutely notice the change. I love how quickly it feels like I can dive into my apps without waiting for the animation to finish in iOS 26.

In times when I need to get started rapidly – like opening my camera app to capture a one-off moment, for example – the new animation is a real blessing. But those times are comparatively rare, and I’ve found the main benefit is simply the feeling of increased snappiness I now get with my iPhone. It’s a small change, but a meaningful one.

The slight drawback

iOS 26 beta 6 brings new animations when opening and closing apps (pic.twitter.com/u2BiXZDVTg, August 11, 2025)

That said, it’s worth noting that you might not feel the benefits at all times. That’s because the change impacts how quickly an app’s container loads and fills your screen, but it doesn’t actually make that app’s content load any faster.

So, if an app is poorly optimized or takes a long time to show any content on-screen, the latest iOS 26 public beta likely won’t make it feel any faster. That means the improvements could vary, depending on which apps you use the most.

Still, if your apps are well-made and load their internal workings quickly, you’ll really start to feel the improvement when iOS 26 lands properly next month.

Updates like this show that small tweaks can sometimes make more of a difference than new features or big, showy changes. That’s certainly the case here, and the rapid app-loading animations might be one of my favorite adjustments found in iOS 26 so far.

You might also like
Categories: Technology
