Scientists have long wondered about how the potato's genetic lineage came to be. Now they know: The plants are a cross between tomatoes and a plant known as Etuberosum.
(Image credit: Natural History Museum, London)
The White House says people living on the street in Washington, D.C., can avoid jail by going to a shelter. Homeless advocates say there aren't enough shelter beds.
(Image credit: Maansi Srivastava)
Kari Lake has sought to dismantle Voice of America and its federal parent, the U.S. Agency for Global Media. The agency has recently called her its acting CEO. But the law suggests she's not eligible for the job.
(Image credit: Real America's Voice/via YouTube)
Although "dog" is ubiquitous today to describe man's best friend, it remains a mystery where the word originally came from.
(Image credit: Oisin Keniry)
Could AI be the answer to the UK’s productivity problem? More than half (58%) of organizations think so, with many experiencing a diverse range of AI-related benefits including increased innovation, improved products or services and enhanced customer relationships.
You don’t need me to tell you this – chances are you’re one of the 7 million UK workers already using AI in the workplace. Whether you’re saving a few minutes on emails, summarizing a document, pulling insights from research, or creating workflow automations.
Yet while AI is a real source of opportunities for companies and their employees, pressure for organizations to adopt it quickly can inadvertently give rise to increased cybersecurity risks. Meet shadow AI.
What is shadow AI?Feeling the heat to do more with less, employees are looking to GenAI to save time and make their lives easier – with 57% of office workers globally resorting to third-party AI apps in the public domain. But when employees start bringing their own tech to work without IT approval, shadow AI rears its head.
Today this is a very real problem, with as many as 55% of global workers using unapproved AI tools while working, and 40% using those that are outright banned by their organization.
Further, internet searches for the term “shadow AI” are on the rise – leaping by 90% year-on-year. This shows the extent to which employees are “experimenting” with GenAI – and just how precariously an organization's security and reputation hangs in the balance.
Primary risks associated with shadow AIIf UK organizations are going to stop this rapidly evolving threat in its tracks, they need to wake up to the threat of shadow AI – and fast. This is because the use of LLMs within organizations is gaining speed, with over 562 companies around the world engaging with them last year.
Despite this rapid rise in use cases, 65% of organizations still aren’t comprehending the implications of GenAI. But each unsanctioned tool leads to significant vulnerabilities that include (but are not limited to):
1. Data leakage
When used without proper security protocols, shadow AI tools raise serious concerns about the vulnerability of sensitive content, e.g. data leakage through the learning of information in LLMs.
2. Regulatory and compliance risk
Transparency around AI usage is central to ensuring not just the integrity of business content, but users’ personal data and safety. However, many organizations lack expertise or knowledge around the risks associated with AI and/or are deterred by cost constraints.
3. Poor tool management
A serious challenge for cybersecurity teams is maintaining a tech stack when they don’t know who is using what – especially in a complex IT ecosystem. Instead, comprehensive oversight is needed and security teams must have visibility and control over all AI tools.
4. Bias perpetuation
AI is only as effective as the data it learns from and flawed data can lead to AI perpetuating harmful biases in its responses. When employees use shadow AI companies are at risk of this – as they have no oversight of the data such tools draw upon.
The fight against shadow AI begins with awareness. Organizations must acknowledge that these risks are very real before they can pave the way for better ways of working and higher performance – in a secure and sanctioned way.
Embracing the practices of tomorrow, not yesterdayTo realize the potential of AI, decision makers must create a controlled, balanced environment that puts them in a secure position – one where they can begin to trial new processes with AI organically and safely. Crucially though, this approach should exist within a zero-trust architecture – one which prioritizes essential security factors.
AI shouldn’t be treated as a bolt-on. Securely leveraging it requires a collaborative environment that prioritizes safety. This ensures AI solutions enhance – not hinder – content production. Adaptive automation helps organizations adjust to changing conditions, inputs, and policies, simplifying deployment and integration.
Any security experience must also be a seamless one, and individuals across the business should be free to apply and maintain consistent policies without interruption to their day-to-day. A modern security operations center looks like automated threat detection and response that not only spot threats but handles them directly, making for a consistent, efficient process.
Robust access controls are also key to a zero-trust framework, preventing unauthorized queries and protecting sensitive information. While these governance policies have to be precise, they must also be flexible to keep pace with AI adoption, regulatory demands, and evolving best practices.
Finding the right balance with AIAI could very well be the answer to the UK’s productivity problem. But for this to happen, organizations need to ensure there isn't a gap in their AI strategy where employees feel limited by the AI tools available to them. This inadvertently leads to shadow AI risks.
Powering productivity needs to be secure, and organizations need two things to ensure this happens – a strong and comprehensive AI strategy and a single content management platform.
With secure and compliant AI tools, employees are able to deploy the latest innovations in their content workflows without putting their organization at risk. This means that innovation doesn’t come at the expense of security – a balance that, in a new era of heightened risk and expectation, is key.
We list the best IT management tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
You can now create a WordPress website in minutes, with the help of Generative AI (GenAI), without needing a third-party website builder or AI tool. Everything can be done in WordPress directly, through a chat interface, and without the website builder’s branding showing anywhere on the site.
This is all courtesy of the website builder platform 10Web, which just announced the launch of its fully white-labeled AI website builder solution. It comes in the form of a WordPress plugin, and allows users to create a website inside their hosting stack without relying on a separate builder platform.
In a press release shared with TechRadar Pro earlier this week, 10Web says the new offering should further increase ARPU, reduce churn, and differentiate through same-day AI website delivery.
“Hosting companies have been stuck selling blank WordPress installs,” said Arto Minasyan, Founder and CEO of 10Web. “With this solution, they can launch fully functional websites under their own brand in seconds. It’s the simplest way to deliver real customer value, without changing how they host or deploy WordPress.”
WooCommerce includedUsually, when a customer buys a hosting service, they get either a blank WordPress dashboard, or one bundled with themes and plugins. However, with the emergence of GenAI, expectations changed, and customers have gotten used to the “describe and build” experience, the company claims.
That being said, it claims “early tests” showed users being 30% more likely to publish their site compared to traditional WordPress onboarding flows. It didn’t say when the tests took place, who was tested, and against what, though.
In any case, 10Web says the plugin is built on its proprietary AI technology which leverages advanced models from OpenAI, Gemini, and Anthropic. The sites are mobile-friendly, fully structured, and based on a “simple business description”.
When users create a site, they will see a branded AI flow that generates the entire website, including WooCommerce integration, if needed. Finally, everything is white-labeled with the hosting provider’s name and logo, and includes a visual editor with AI Co-Pilot.
More from TechRadar ProIn boardrooms and investor meetings, artificial intelligence is now table stakes. AI tools are everywhere. Analysts are forecasting trillions in potential value. McKinsey estimates that generative AI alone could boost the global economy by up to $4.4 trillion a year.
And yet, in the enterprise? Something’s not clicking.
Despite the hype, most AI projects are still stuck in the sandbox; demo-ready, not decision-ready. The issue isn’t model performance. It’s operationalization. Call it the Enterprise AI Paradox: the more advanced the model, the harder it is to deploy, trust, and govern inside real-world business systems.
The heart of the paradoxAt the heart of this paradox, McKinsey argues, lies a misalignment between how AI has been adopted and how it generates value.
Horizontal use cases, notably tools like Microsoft’s Copilot or Google's Workspace AI, proliferate rapidly because they're easy to plug in and intuitive to use. They provide general assistance, they summarize emails, draft notes, simplify meetings, and so on.
Yet these horizontal applications scatter their value thinly, spreading incremental productivity improvements so broadly that the total impact fades into insignificance.
As the McKinsey report puts it, these applications deliver "diffuse, hard-to-measure gains.”
In sharp contrast, vertical applications (those baked into core business functions) carry the promise of significant value but struggle profoundly to scale. Less than 10 percent of these targeted deployments ever graduate beyond pilot phases, trapped behind technological complexity, organizational inertia, and a lack of mature solutions. LLMs are extraordinary. But they’re not enough.
It’s like trying to run a Formula 1 car on a farm trackThe real enterprise challenge isn’t building a big, clever model. It’s orchestrating intelligence, across systems, teams, and decisions.
The world’s most innovative companies don’t want a single mega-model spitting out answers from a black box. They want a system that’s intelligent across the board: data flowing from hundreds of sources, automated agents taking action, results being validated, and everything feeding back into an improved loop.
That’s not one model. That’s many. Talking to each other. Acting with autonomy. And constantly learning from a dynamic environment.
This is the future of enterprise AI, and it’s what’s known as agentic.
What is agentic AI, and why does it matter?Agentic AI systems are different from monolithic LLMs in one key way: they think and act like a team. Each agent is a specialist, trained on a narrow domain, given a clear role, and capable of working with other agents to complete complex tasks.
One might handle user intent. Another interfaces with an internal database. A third enforces compliance. They can run asynchronously, reason over real-time data, and retrain independently.
Think of it like microservices, but for cognition. Unlike traditional generative AI, which remains largely reactive (waiting passively for human prompting) agents introduce something entirely different. "AI agents mark a major evolution in enterprise AI - extending gen AI from reactive content generation to autonomous, goal-driven execution,” McKinsey researchers explain.
This isn’t some speculative vision from a Stanford whitepaper. It’s already happening, in advanced enterprise labs, in the open-source community, and in early production systems that treat AI not as a product, but as a process.
It’s AI moving from intelligence-as-an-output to intelligence-as-infrastructure.
Why most enterprises aren’t ready (yet)If agentic systems are the answer, why aren’t more enterprises deploying them?
Because most AI infrastructure still assumes a batch world. Systems were designed for analytics, not autonomy. They rely on periodic data snapshots, siloed memory, and brittle pipelines. They weren’t built for real-time decision-making, let alone a swarm of AI agents operating simultaneously across business functions.
To make agentic AI work, enterprises need three things:
Live data access – Agents must act on the most current information available
Shared memory – So knowledge compounds, and agents learn from one another
Auditability and trust – Especially in regulated environments where AI decisions must be traced, explained, and governed
This isn’t just a technology problem, it’s actually an architectural one. And solving it will define the next wave of AI leaders.
From sandbox to systemEnterprise AI isn’t about making better predictions. It’s about delivering better outcomes.
To do that, companies must move beyond models and start thinking in systems. Not static models behind APIs, but living, dynamic intelligence networks: contextual, composable, and accountable.
The Agentic Mesh, as McKinsey calls it, is coming. And it won’t just power next-gen applications. It will reshape how decisions are made, who makes them, and what enterprise infrastructure looks like beneath the surface.
It isn’t simply a set of new tools bolted onto existing systems. Instead, it represents a shift in how organizations conceive, deploy, and manage their AI capabilities.
To really make this work, McKinsey says it’s time to wrap up all those scattered AI experiments and get serious about what matters most. That means clear priorities, solid guardrails, and picking high-impact "lighthouse" projects that show how it's done.
The agentic mesh isn't just a fancy architecture - it’s a call for leaders to rethink how the whole enterprise runs. Because real enterprise transformation won’t come from scaling a smarter model. It will come from orchestrating a smarter system.
We list the best AI chatbot for business.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Listening to entrepreneurs discuss the potential of AI cybersecurity will give you déjà vu. The discussions are eerily similar to how we once talked about cloud computing when it emerged 15 years ago.
At least initially, there was a surprisingly prevalent misconception that the cloud was inherently more secure than on-premises infrastructure. In reality, the cloud was (and is) a massive attack surface. Innovation always creates new attack vectors, and to say AI is no exception is an understatement.
CISOs are generally aware of AI’s advantage, and for the most part they’re similarly aware that it’s creating new attack vectors. Those who took the right lessons from the development of cloud cybersecurity are right to be even more hesitant about AI.
Within the cloud, proper configuration of the right security controls keeps infrastructure relatively static. AI shifts faster and more dramatically, and is thus inherently more difficult to secure. Companies that got burned by being overeager about cloud infrastructure are now hesitant about AI for the same reasons.
Multi-industry AI adoption bottleneckThe knowledge gap isn’t about AI’s potential to drive growth or streamline operations; it’s about how to implement it securely. CISOs recognize the risks in AI’s expansive attack surface.
Without strong assurances that company data, access controls, and proprietary models can be safeguarded, they hesitate to roll out AI at scale. This is likely the biggest reason why AI apps at the enterprise level are coming out only at a trickle.
The rush to develop AI capabilities has created a multi-industry bottleneck in adoption, not because companies lack interest, but because security hasn’t kept pace. While technical innovation in AI has accelerated rapidly, protections tailored to AI systems have lagged behind.
This imbalance leaves companies exposed and without confidence to deploy at scale. Making matters worse, the talent pool for AI-specific cybersecurity remains shallow, delaying the hands-on support organizations need to integrate safeguards and move from adoption intent to execution.
A cascade of complicating factorsThis growing adoption gap isn’t just about tools or staffing—it’s compounded by a broader mix of complicating factors across the landscape. Some 82% of companies in the US now have a BYOD policy, which complicates cybersecurity even absent AI.
Elon Musk’s Department of Government Efficiency (DOGE) has fired hundreds of employees at the U.S. government’s cybersecurity agency CISA, which worked directly with enterprises on cybersecurity measures. This dearth of trust only tightens this bottleneck.
Meanwhile, we’re seeing AI platforms like DeepSeek become capable of creating the basic structure for malware. Human CISOs, in other words, are trying to create AI cybersecurity capable of facing AI attackers, and they’re not sure how. So rather than risk it, they don’t do it at all.
The consequences are now becoming evident, and dealing a critical blow to adoption. It just about goes without saying: AI won’t reach its full potential absent widespread adoption. AI is not going to fizzle out like a mere trend, but AI security is lagging and inadequate and it’s clearly hampering development.
When “good enough" security isn’t enoughAI security is shifting from speculative to strategic. This is a market brimming with potential. Enterprises are grappling with the severity and scale of AI-specific threats, and the demand those challenges created are attracting wider investor interest. Organizations have no choice but to secure AI to fully harness its capabilities. Those that aren’t hesitating are actively seeking solutions through dedicated vendors or by building internal expertise.
This has created a lot of noise. A lot of vendors claim to be doing AI red teaming, while really just offering basic penetration testing in a shiny package. They may expose some vulnerabilities and generate initial shock value, but they fall short of providing the continuous and contextual insight needed to secure AI in real-world conditions.
If I were trying to bring AI into production in an enterprise environment, a simple pen test wouldn’t cut it. I would require robust, repeatable testing that accounts for the nuances of runtime behavior, emergent attack vectors, and model drift. Unfortunately, in the rush to move AI forward, many cybersecurity offerings are relying on this “good enough” pen testing, and that’s not good enough for smart organizations.
The reality is that AI security requires a fundamentally different approach – this is a new class of software. Traditional models of vulnerability testing fail to capture how AI systems adapt, learn, and interact with their environments.
Worse still, many model developers are constrained by their own knowledge silos. They can only guard against threats they’ve seen before. Without continuous external evaluation, blind spots will remain.
As AI becomes embedded across sectors and systems, cybersecurity needs to provide actually suitable solutions. That means moving beyond one-time audits or compliance checkboxes. It means adopting dynamic, adaptive security frameworks that evolve alongside the models they’re meant to protect. Without this, the AI industry will stagnate or risk serious security breaches.
We list the best encrypted messaging app for Android.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Generative AI is changing the software development game. Beyond its capabilities in IT automation, the tool is also empowering professionals to contribute where it matters most, specifically at the strategic level.
This is the case of developers, who are no longer confined to their expectations of simply building applications but are increasingly becoming more involved with strategic business outcomes.
Gartner predicts that by 2028, 75% of enterprise software engineers will use AI. This figure doesn’t just represent technological advancements and the growing role of generative AI in enterprise software but also serves as a wakeup call for businesses to rethink the role of their developers.
The connecting factorOrganizations need to recognize that developers are the connecting factor between their needs and digital solutions. Those that recognize this early on and harness developer expertise will be those that succeed ahead.
This shift is already underway across multiple industries. In healthcare, for example, developers are addressing clinical needs by designing solutions that reduce operational friction, giving practitioners more time to focus on patient care.
In financial services, they are driving growth in a highly regulated and competitive environment, enhancing fraud detection while making financial services accessible and convenient for customers.
Meanwhile, in the retail sector, developers are elevating the technologies behind customer experiences to meet rising expectations. Across the board, developers are emerging as strategic innovators, leveraging technology, not just to solve problems, but to deliver meaningful business outcomes.
Developers are keepers of insightAcross sectors, businesses are beginning to rethink how they engage with their developers. The conversation is now shifting from basic interaction to empowering them to contribute strategically.
With developers holding a deeper understanding of their business's needs, they are more frequently asking to be heard and consulted upon the innovation strategy to better support business objectives.
Therefore, unlocking the potential of AI will require a mindset shift - one that acknowledges generative AI’s role not just to accelerate development but also elevate individuals behind it. To move forward, organizations need to recognize the value that developers bring to the table; including solving the issues that generative AI alone cannot solve.
Empowering developers: how low-code and AI are redefining complexityObject oriented low-code and no-code platforms and generative AI have fundamentally changed how developers can leverage their business relevance in their organizations. By eliminating some of the complexity of line-by-line code development, this allows them to move quickly from idea to implementation, creating more room for innovation, experimentation, and collaboration with other stakeholders.
As a result, developers are finding it easier to take a much bigger seat at the table, thereby helping to guide business strategy. Developers bring unique value: they are embedded in systems, close to the problems that need solving, and often have first-hand insight into operational inefficiencies and user frustrations. They understand the organization not just from a technical perspective, but from a business one.
Low-code and generative AI free developers from repetitive, technical tasks and enable them to focus on solving real business problems. As a result, developers are no longer just responding to requirements - they are helping to shape them. This shift gives developers a greater voice in strategic discussions and positions them as key players in driving business success.
Generative AI as a copilotGenerative AI copilots go beyond traditional tools by actively assisting developers throughout the software development lifecycle. Instead of working within rigid frameworks that slow innovation, developers can now brainstorm ideas and instantly generate code, receive intelligent suggestions, automate repetitive routine tasks, like debugging, or documentation. These copilots act as intelligent partners, freeing developers to concentrate on solving high-impact problems faster.
The critical advantage of a DevOps team with time, is their ability to more proactively engage with the overall direction of business. Generative AI amplifies the value of human insight further by enabling developers to focus on the work that matters most including creativity, judgement, and a deep understanding of organizational context. In addition, when generative AI is paired with low-code, developers have a co-pilot aiding them on the journey to create better applications and services for the industry they work in.
Developers delivering value across industriesAn industry where the shift in developers offering insights is most apparent is in healthcare. The development of applications in this sector isn’t just about building tools, but more importantly reducing friction for clinical practitioners and returning time to patient care.
Developers who understand the pain points and frustrations clinical staff face, are better equipped to create applications that minimize these complexities. Generative AI and low-code development platforms make it possible to quickly build, iterate, and improve these tools, resulting in better alignment between healthcare technology and frontline needs.
Another telling example is the financial services sector where 75% of financial firms already use AI. Developers are able to redirect their focus from routine tasks and offer value by modernizing legacy systems, streamlining compliance and enhancing fraud detection, all while supporting rapid product innovation.
Building solutionsIn a tightly regulated industry, their ability to build secure, efficient and customer-centric solutions is critical. Developers offer real value by creating solutions without compromising safety or security. With AI, developers can move faster, meet regulatory requirements, and deliver personalized experiences that build trust and retention.
In retail, developers are using customer feedback to solve friction points in the shopping journey. They are building tools that personalize the user experience, boost satisfaction and increase sales. With AI and low=code automating routine tasks, developers can focus on innovation, from responding to consumer trends to improving supply chain resilience.
Across sectors, the combination of generative AI, low-code platforms, and developer insight is accelerating innovation and unlocking strategic business value.
Time to push the needleIt is time for businesses to push the needle, not just to adopt generative AI but also to empower developers to lead innovatively. With the use of generative AI and AI-powered low-code, developers can reallocate time that they can then reinvest towards targeting strategic business needs. Thanks to their strong understanding of business needs and pain points, developers are able to shape solutions that align digital solutions with business objectives.
Successful businesses will be those that recognize that AI will not be replacing developers but rather promoting them to more strategic roles.
We list the best sites for hiring developers.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Investigators say the former president and first lady exerted undue influence on the conservative People Power Party to nominate a specific candidate during a 2022 election.
(Image credit: Jung Yeon-je)
The Samsung Galaxy Z Flip 7 might just reel me back into using a vertical-style flip phone. I used to count on the Galaxy Z Flip 5 before I got my S24 Ultra, finding the handset to be more than capable of keeping up with my daily needs while also offering an immense level of cool. The Galaxy Z Flip 7 keeps the cool factor going, with an exceptionally minimized crease and a cover display that I just can’t help but love.
If you’re not aware, the cover screen is an essential component of any foldable smartphone. It allows for functionality when the primary screen is inaccessible due to being closed like a clamshell. It can also be used to take selfies using the rear cameras, conveniently placed at the bottom of the cover screen, or top of the phone.
Now, for a lot of foldables, the cover screen isn’t feature rich – by default. To maintain a seamlessly premium feel, Samsung actively restricts how much a user can do with the screen to a handful of supported widgets and cover screen elements. It’s not a bad idea and it keeps the level of polish to a high standard, but some folks, like me, may be left wanting to do more with the conveniently small screen. Thankfully Samsung has an easy solution to this – Multistar.
Give Multistar a go(Image credit: Zachariah Kelly / TechRadar)Multistar isn’t new. It’s been around for several generations of the Galaxy Z Flip, but it’s always been limited by how small the cover screen is. That’s no longer an issue with the Galaxy Z Flip 7, with its cover screen spanning the entire top of the folded phone.
Multistar is an essential piece of the puzzle. This official Samsung extension, accessible through the phone’s cover screen settings and then downloaded from the Galaxy Store, allows you to put apps directly onto one of the widget menus of the cover screen, allowing you to swipe through Bluesky or even play games like Crossy Road.
It’s not a complete solution – the screen doesn’t display notification bar information, navigating between apps is extremely basic (limited to a single swipe up) and indeed some apps are still inaccessible, such as Samsung’s own contacts and phone apps – but it does feel more useful than previous generations of the Flip, and I feel like I could sufficiently use much of my smartphone with just this small screen and my selected apps.
(Image credit: Zachariah Kelly / TechRadar)But is it worth the extra cost?As much as I love the cover screen, its functionality, and the concept of a compact, square phone over a plain rectangle, it's hard to justify the higher price – especially with a more affordable option on the market.
Alongside the Galaxy Z Flip 7, Samsung also released the Z Flip 7 FE, a cheaper handset with many of the same specs found in the Z Flip 6 – including its smaller cover screen that’s capable of a lot of the same functionality. Similarly, I’d recommend checking out Motorola’s Razr range of foldable smartphones, as those can be used with similar utility when it comes to apps at a more accessible price.
For now though – I’m a big fan of the funny little square I’ve been using instead of a boring rectangle.
You might also like...Anthropic has given Claude a memory upgrade, but it will only activate when you choose. The new feature allows Claude to recall past conversations, providing the AI chatbot with information to help continue previous projects and apply what you've discussed before to your next conversation.
The update is coming to Claude’s Max, Team, and Enterprise subscribers first, though it will likely be more widely available at some point. If you have it, you can ask Claude to search for previous messages tied to your workspace or project.
However, unless you explicitly ask, Claude won’t cast an eye backward. That means Claude will maintain a generic sort of personality by default. That's for the sake of privacy, according to Anthropic. Claude can recall your discussions if you want, without creeping into your dialogue uninvited.
By comparison, OpenAI’s ChatGPT automatically stores past chats unless you opt out, and uses them to shape its future responses. Google Gemini goes even further, employing both your conversations with the AI and your Search history and Google account data, at least if you let it. Claude’s approach doesn't pick up the breadcrumbs referencing earlier talks without you asking it to do so.
Claude remembersAdding memory may not seem like a big deal. Still, you'll feel the impact immediately if you’ve ever tried to restart a project interrupted by days or weeks without a helpful assistant, digital or otherwise. Making it an opt-in choice is a nice touch in accommodating how comfortable people are with AI currently.
Many may want AI help without surrendering control to chatbots that never forget. Claude sidesteps that tension cleanly by making memory something you summon deliberately.
But it’s not magic. Since Claude doesn’t retain a personalized profile, it won’t proactively remind you to prepare for events mentioned in other chats or anticipate style shifts when writing to a colleague versus a public business presentation, unless prompted mid-conversation.
Further, if there are issues with this approach to memory, Anthropic’s rollout strategy will allow the company to correct any mistakes before it becomes widely available to all Claude users. It will also be worth seeing if building long-term context like ChatGPT and Gemini are doing is going to be more appealing or off-putting to users compared to Claude's way of making memory an on-demand aspect of using the AI chatbot.
And that assumes it works perfectly. Retrieval depends on Claude’s ability to surface the right excerpts, not just the most recent or longest chat. If summaries are fuzzy or the context is wrong, you might end up more confused than before. And while the friction of having to ask Claude to use its memory is supposed to be a benefit, it still means you'll have to remember that the feature exists, which some may find annoying. Even so, if Anthropic is right, a little boundary is a good thing, not a limitation. And users will be happy that Claude remembers that, and nothing else, without a request.
You might also likeWindows 10 reaches its End of Life in October 2025, and a California resident is particularly disgruntled about this looming deadline.
He isn't alone, of course, but Lawrence Klein feels strongly enough that Microsoft is out of order in bringing the shutters down on Windows 10 in just a couple of months that he has fired up a lawsuit against the company.
As The Register reports, Klein has accused Microsoft of violating consumer legal code and business code (including false advertising law) by winding up support for Windows 10 too early, in his opinion.
The crux of the argument is that too many people remain on Windows 10 for the operating system to have support pulled (there are nuances here, which I'll come back to). And that some 240 million devices don't meet the hardware requirements to upgrade to Windows 11 – due to Microsoft setting those PC specifications at an unreasonable level – and the potential e-waste nightmare that could prompt.
In short, the upgrades required for Windows 11 - including TPM 2.0 security, as well as ruling out some surprisingly recent processors - aren't justified.
Furthermore, Klein argues that this upgrade timeline is all part of Microsoft's drive to push folks to use its Copilot AI with Windows 11, in a broader push to get more adoption for Copilot+ PCs - in other words, to buy new machines and discard old Windows 10 hardware (and again, we're back to that e-waste issue).
You can read the lawsuit in its entirety (it's a PDF) here, but that's the gist, and Klein argues that Microsoft should postpone killing off Windows 10 and wait until far fewer people are using the older operating system.
As the suit states: "[The] Plaintiff seeks injunctive relief requiring Microsoft to continue providing support for Windows 10 without additional fees or conditions until the number of devices running the operating system falls below a reasonable threshold, thereby ensuring that consumers and businesses are not unfairly pressured into unnecessary expenditures and cybersecurity risks [of running a Windows 10 PC without security updates]."
Is Klein justified in this lawsuit? In some respects, I think so, and while I don't imagine for a minute that this legal action will go anywhere in terms of the outcome of the suit itself, I've a feeling it could come into play, and be important, indirectly.
What do I mean by that exactly? Well, let's dive into the thinking behind Klein's lawsuit, and the key reasons why it might force Microsoft to sit up and take notice.
(Image credit: fizkes / Shutterstock)1. Windows 11's hardware requirements really are unreasonableDo we really need TPM 2.0 forced upon us? Yes, it ushers in a better level of security, I don't dispute that – but hundreds of millions of PCs potentially heading to landfill seems too heavy a price to pay. For me, as already mentioned, the decision to rule out some relatively new CPUs in the Windows 11 specs is particularly baffling.
The key point here is that Microsoft has never pushed the PC hardware requirements as hard as it has with Windows 11, and that leaves it open to criticism, although this observation is nothing new. What is new, though, is that the lack of fairness in setting this higher hardware bar has become crystal clear with the number of people who are still using Windows 10, which brings us onto my next point.
(Image credit: Microsoft)2. This close to End of Life, there are clearly too many people still using Windows 10The lawsuit cites outdated figures as to how many folks are still on Windows 10 - an estimate drawn from April 2025 suggests that 53% remain on the older OS. While that's no longer the case, the level remains high.
Based on the latest report from StatCounter (for July 2025), Windows 10 usage is 43%, which is very high with the End of Life deadline imminent. Normally, an outgoing Windows version would have way fewer users than this - Windows 7 had a 25% market share when it ran out of support (and it was a popular OS).
There are always holdouts when a new version of Windows comes out, but it's looking like this is going to be really bad with Windows 10's end of support. This is Klein's central argument, and I think it's a key factor that Microsoft doesn't appear to be taking into account - or perhaps doesn't want to face up to.
Maybe the software giant is thinking there'll be a last-minute flood of Windows 11 migration, but given the outlined hardware requirements problem, I doubt it.
(Image credit: NATNN / Shutterstock)3. Proving the cynics right?Another part of Klein's case against Microsoft is the assertion that the company is using Windows 10's end of support and Windows 11 upgrades to persuade people to buy new PCs that major in AI, namely Copilot+ PCs. And indeed Microsoft hasn't helped itself here, openly pushing these Windows 11 devices as the lawsuit points out – and that includes intrusive full-screen advertisements piped to Windows 10 machines.
That feels like a crass tactic, and makes it seem like part of this is indeed about pushing those Copilot+ laptops. Yes, by all means, advertise Copilot+ PCs and their AI abilities (which are limited thus far, I should note) – but don't do it in this way, directly at Windows 10 users, and expect that to be viewed in anything other than a negative and cynical light.
(Image credit: Shutterstock)4. Microsoft has already made a concession, true - but it's not enoughIt's worth noting that not everything Klein puts forward in this suit seems reasonable. I don't think you can argue that 10 years of support is stingy, which is what Microsoft has given Windows 10. However, Klein picks out 'transitional' support in his lawsuit, meaning the length of support after a succeeding OS has been launched – four years in this case – which isn't entirely fair and looks lean. The problem here is not the length of time for support as such, but the different circumstances around hardware requirements.
Also, calling Windows 11 'wildly unpopular' as Klein does at one point is equally unfair - even if adoption of the operating system has been very sluggish, admittedly. There's a definite bias towards shooting the OS down across all fronts, and I think that weakens Klein's argument.
But my main bone of contention here is that Klein ignores the concession Microsoft has made in terms of the extended year of support for consumers who want to stay on Windows 10. As the lawsuit states, this extra support through to October 2026 can be had for the price of $30, but recently, Microsoft introduced the ability to get this extension for free, well, kind of. (Financially, you won't pay a penny, but you need to sync some PC settings to OneDrive, and I don't think that requirement is too onerous myself.)
That was an important move by Microsoft, which it isn't given any credit for here, but that said, I still don't think the company goes far enough. As I've said before, an extra year of support is certainly welcome, but Microsoft needs to look at a further extended program for consumers.
So, while the lawsuit does go off the rails (at least for me) around these issues, it does effectively put a spotlight on how we're looking at measuring support, and a different perspective other than a hard timeframe. Instead of talking about 'x' years of extended coverage, it mentions a level of Windows 10 adoption that should be reached before Microsoft pulls the plug on support for the OS.
I think that's a valuable new angle on this whole affair, and while 10% of total Windows users – which is the low bar Klein sets for Windows 10 – maybe feels too low, there's an interesting conversation to be had here. (The other route Klein's suit suggests, which others have raised, is Microsoft simply relaxing the hardware requirements for Windows 11 - but I think at this stage of the game, we can safely conclude that this won't be happening.)
(Image credit: MAYA LAB / Shutterstock)5. Under pressureMy final point in terms of why this lawsuit could prove a compelling kick in the seat of the pants for Microsoft is that while, as already observed, I can't see Klein triumphing over the company, it's more fuel to the fire in the campaign to stave off a potentially major e-waste catastrophe.
Simply put, the PR around this – and it has been spinning up headlines aplenty over the past couple of days – is another reason for Microsoft to sit up, take notice, and maybe do some rethinking over exactly how Windows 10's End of Life is being implemented.
We've already seen one concession – the aforementioned free route to get extended support for Windows 10 – in recent times, which surely must have been a reaction to the frustration that Klein and many others feel. So, perhaps this lawsuit could be the catalyst to prod Microsoft into going further in its appeasement of the unhappy Windows 10 users out there – fingers crossed, at any rate.
You might also likeTesla has shut down its Dojo supercomputer team, in what appears to be a major shift in the company’s artificial intelligence plans.
Reports from Bloomberg claim the decision followed the exit of team leader Peter Bannon and the loss of about 20 other staff members to a newly formed venture called DensityAI.
The remaining team members will now be reassigned to other computing and data center projects within Tesla.
Leadership exit triggers Tesla shake-upThe Dojo system was originally developed around custom training chips designed to process large amounts of driving data and video from Tesla’s electric vehicles.
The aim was to use this information to train the company’s autonomous driving software more efficiently than off-the-shelf systems.
However, CEO Elon Musk said on X it no longer made sense to split resources between two different AI chips.
Tesla has not responded to requests for comment, but Musk has outlined the company’s focus on developing its AI5 and AI6 chips.
He said these would be “excellent for inference and at least pretty good for training” and could be placed in large supercomputer clusters, a configuration he suggested might be called “Dojo 3.”
The company’s shift away from the Dojo project comes amid broader restructuring efforts that have seen multiple executive departures and thousands of job cuts.
Tesla has also been working on integrating AI tools such as the Grok chatbot into its vehicles, expanding its AI ambitions beyond self-driving technology.
Tesla’s plans for future AI computing infrastructure and chip production after Dojo rely heavily on outside technology suppliers, with Nvidia and AMD expected to provide computing capabilities, while Samsung Electronics will manufacture chips for the company.
Samsung recently secured a $16.5 billion deal to supply AI chips to Tesla, which are expected to power autonomous vehicles, humanoid robots, and data centers.
Musk has previously said Samsung’s new Texas plant will produce Tesla’s AI6 chip, with AI5 production planned for late 2026.
For now, Musk appears confident that Tesla’s chip roadmap will support its ambitions.
But with the original Dojo team largely gone and reliance on external partners increasing, the company’s AI trajectory will depend on whether its new chips and computing infrastructure can deliver the results Musk has promised.
You might also likeDays after the president's call for a "new" census, the top official overseeing the Census Bureau told employees that Congress, not Trump, has final say over the tally, NPR has exclusively learned.
(Image credit: Anna Moneymaker)