Anthropic has offered its Claude AI model to US government agencies for just $1 for the next year.
The offer extends to all three branches of the government, targeting the legislative and judicial branches alongside the executive.
The move comes almost immediately after OpenAI offered ChatGPT Enterprise to all US federal government workers for $1 per year per agency, as the firms look to undercut each other – and presumably to create a reliance within the public sector, which is likely to use AI tools to streamline work and save money on admin costs.
Government contracts

“As AI adoption leads to transformation across industries, we want to ensure that federal workers can fully harness these capabilities to better serve the American people. By removing cost barriers, we're enabling the government to access the same advanced AI that's already proving its value in the private sector,” Anthropic said in a statement.
LLM companies are racing to obtain government contracts, with Anthropic, OpenAI, and xAI each awarded a $200 million AI development deal with the US Department of Defense – all to develop models for US government customers working on national security.
Claude has already been added to the General Services Administration’s (GSA) schedule to help streamline procurement, with Claude for Enterprise and Claude for Government offering support with handling sensitive unclassified work.
The firm will also help agencies rapidly implement AI, with technical support for successful adoption into their ‘productivity and mission workflows’.
“This OneGov deal with Anthropic is proof that the United States is setting the standard for how governments adopt AI — boldly, responsibly, and at scale,” said GSA Acting Administrator Michael Rigas.
“This agreement puts the most advanced American AI models directly into the hands of those serving the American people.”
In his latest tweet on the social media platform X, Sam Altman, CEO of OpenAI, has confirmed that all paid ChatGPT subscribers will be getting access to not only the old GPT-4o model, but also older LLMs like o3 and 4.1.
The popular ChatGPT-4.5 will also be coming back, but it will only be available to Pro subscribers. Altman says this is because “it costs a lot of GPUs”, a reference to the amount of compute power that it requires.
In the wake of the backlash against the removal of the popular 4o model with absolutely no warning when GPT-5 was released, Altman seems to have learned a lesson and has promised, “If we ever do deprecate it, we will give plenty of notice.”
All paid users of ChatGPT should now find a 'Show additional models' toggle in the ChatGPT web settings, which will give you access to all the older LLM models. You’ll also be able to add a new GPT-5 Thinking mini model.
In a post on X on August 13, 2025, Altman wrote: “Updates to ChatGPT: You can now choose between ‘Auto’, ‘Fast’, and ‘Thinking’ for GPT-5. Most users will want Auto, but the additional control will be useful for some people. Rate limits are now 3,000 messages/week with GPT-5 Thinking, and then extra capacity on GPT-5 Thinking…”
Altman also makes reference to the highly criticized ‘colder’ tone of the new ChatGPT-5, which has alienated many users in the tweet: “We are working on an update to GPT-5’s personality which should feel warmer than the current personality, but not as annoying (to most users) as GPT-4o”.
His reference to ChatGPT-4o being annoying refers to the sycophantic phase that GPT-4o seemed to enter after an upgrade back in April.
Altman continues, ”However, one learning for us from the past few days is we really just need to get to a world with more per-user customization of model personality.”
Multiple personalities

Altman’s reference to “per-user customization” reflects OpenAI's recognition that what its users want is an easier way to select how formal, humorous, empathetic, or direct the assistant is.
Altman endured a recent AMA chat on Reddit where he got to listen to users' complaints firsthand. It seems to be GPT-5's lack of a personality that has most angered ChatGPT users, who had gotten used to building quite a rapport with GPT-4o.
If I were given free rein to imagine how I'd like ChatGPT to work, I’d like to get to the stage where ChatGPT's personality traits could be represented via sliders, like ‘professional vs. casual’ or ‘concise vs. detailed’. That would make it far easier to get the results you are looking for.
While CustomGPTs already exist, I’d love it if it were possible to easily switch between personality types, like ‘Work Assistant’ or ‘Creative Writing Coach’. However, I get the feeling it will be a long time yet before we get such an easily customizable AI chatbot to talk to.
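To make the idea concrete, here is a rough sketch of how slider-style controls and named presets might translate into a system prompt. Everything here – the slider names, the thresholds, and the preset profiles – is invented for illustration; no such sliders or presets exist in ChatGPT today.

```python
# Hypothetical sketch: 0.0-1.0 slider values and named presets
# mapped to a system-prompt sentence. All names are invented.
PRESETS = {
    "Work Assistant": {"casual": 0.2, "concise": 0.8},
    "Creative Writing Coach": {"casual": 0.8, "concise": 0.2},
}

def personality_prompt(sliders):
    """Turn slider positions into a plain-English instruction."""
    tone = "casual and friendly" if sliders.get("casual", 0.5) > 0.5 else "professional and formal"
    length = "concise" if sliders.get("concise", 0.5) > 0.5 else "detailed"
    return f"Respond in a {tone} tone. Keep answers {length}."

print(personality_prompt(PRESETS["Work Assistant"]))
# → Respond in a professional and formal tone. Keep answers concise.
```

The appeal of this design is that a handful of continuous controls can generate a large space of personalities without the user ever writing a custom instruction by hand.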
GPT-5 just got its first major change, and now users can select between different modes when using the new model in ChatGPT.
Confirmed by OpenAI CEO Sam Altman on X earlier today, ChatGPT users can now choose between Auto, Fast, Thinking, and Thinking-mini when using GPT-5.
Each new mode offers a different way for GPT-5 to, you guessed it, think. "Auto" lets GPT-5 decide for itself how long to think, "Fast" gives you instant answers, "Thinking-mini" thinks quickly, and "Thinking" takes longer to think for better answers.
The change comes following mass backlash related to GPT-5's performance, and will now give users multiple tiers of performance to choose from. We've yet to test all of the new thinking modes; however, when OpenAI decided to limit choice and remove legacy models, the lack of variety was met with widespread criticism.
OpenAI has since reversed those decisions, making 4o available again for paid subscribers, and adding the choice of multiple thinking modes in GPT-5 only further cements the U-turn.
3,000 messages a week? Yes please

New thinking modes aren't the only changes coming to GPT-5. Altman also announced an increase in rate limits for the brand new AI model following discontent from ChatGPT Plus users, who pay $20/£20 a month to access the premium tier.
At launch, GPT-5's Thinking model was limited to 200 messages per week for Plus subscribers; now Altman says the rate limit has been increased to 3,000 a week. He also notes, "Context limit for GPT-5 Thinking is 196k tokens. We may have to update rate limits over time depending on usage."
Earlier this week, Altman said ChatGPT-5 Pro might be coming to Plus subscribers too, although he now appears to have backtracked, claiming, "we do not have the compute to do it right now."
GPT-5 hasn't even been out a week yet, but OpenAI has started to right the wrongs of the initial launch. With new rate limits and more choice over how much thinking the model does before it responds, the company is trying to recapture its user base's trust.
OpenAI has rolled out some handy new updates for Pro subscribers that will see ChatGPT link in more closely with top productivity tools such as Gmail, Google Calendar, Google Contacts and GitHub, allowing it to reference content from those services inside conversations.
Plus members get a few connectors too, including collaboration tools such as Microsoft Teams and SharePoint, along with the likes of Box, Canva, Dropbox, HubSpot and Notion.
As has often proven to be the case with ChatGPT, other paying tiers including Plus, Team, Enterprise and Edu will also get the Pro features in the coming weeks via a staged rollout.
ChatGPT connects to even more workplace apps

We've already seen connectors link to some third-party services for easier, faster access to information, including Google Drive, but the latest update marks a considerable improvement with links to even more platforms.
However, there's one key twist that means millions of users will not be able to use them – OpenAI explained, "connectors for Plus/Pro plans are not available in EEA, Switzerland, and the UK." TechRadar Pro has sought confirmation as to why this is the case.
The news comes as OpenAI releases its GPT-5 and GPT-5 Thinking models to the world, with the company announcing the availability for business plans now.
Users can now select between 'Auto', 'Fast' and 'Thinking' variants of GPT-5 based on how much control they may require, with Plus users being granted 3,000 messages per week with GPT-5 Thinking before OpenAI directs them to the lighter GPT-5 Thinking mini model.
4o has also returned to the model picker following uproar over all previous models being removed at the launch of GPT-5.
"Paid users also now have a 'Show additional models' toggle in ChatGPT web settings which will add models like o3, o4-mini, 4.1, and GPT-5 Thinking mini," OpenAI explained in a support page. "4.5 is only available to Pro users due to GPUs."
LinkedIn has added another game to its portfolio in the hope that it can keep more of its 1.2 billion users engaged with the job site platform for longer.
The launch of Sudoku marks LinkedIn's sixth game, which is designed to be played more quickly (within two to three minutes) with a 6x6 layout compared with traditional 9x9 versions of the game.
As with previous games added to the platform, LinkedIn believes Sudoku could serve as an ice-breaker to spark friendly competition among colleagues.
LinkedIn continues to add games to the platform

Although the platform is primarily designed for professional social networking, millions are said to play games on the platform daily, with peak time at 7am ET.
"More than a year after launching LinkedIn Games, engagement remains strong," the company wrote in a post.
It's estimated 86% of today's players will return tomorrow, and 82% will return next week, with Gen Z most likely to participate in online gaming.
Although Meta's platforms count more users than LinkedIn (3.5 billion daily) and post stronger fiscal growth, LinkedIn is less challenged in the space, focusing on professional networks rather than personal engagement – last quarter, the Microsoft-owned platform saw 9% growth in revenue, to $4.6 billion.
Recent months have seen countless incremental upgrades to the platform, including the addition of new games and useful injections of AI tools to help both job seekers and recruiters be more efficient.
This particular game comes with plenty of credentials, being built in collaboration with Nikoli (the Japanese publisher that popularized Sudoku) and Thomas Snyder, a three-time World Sudoku Champion and puzzle designer.
“We don’t want to have a puzzle on LinkedIn that takes 20 minutes to solve, right?” LinkedIn Senior Director of Product Lakshman Somasundaram said in an interview with CNBC, speaking about the game's more condensed design.
The UK and EU face a defining challenge—and opportunity—as they chart their digital economic futures. How can we unlock the full value of transformative technologies like AI, quantum computing, and cloud infrastructure while managing the growing tide of cyber threats?
The answer lies not in choosing between innovation and regulation, but in reimagining cybersecurity policy as a strategic lever for economic growth.
Today, trust in digital systems is a prerequisite for digital transformation. From small businesses to multinational firms, no organization can scale without confidence in the security of its infrastructure.
However, trust doesn’t emerge on its own—it’s built through smart, risk-informed policy. That’s why cybersecurity must be at the center of economic strategy, not an afterthought to it.
Growing recognition

Across the UK and Europe, there’s growing recognition of this link. For example, the UK’s Cyber Security and Resilience Bill positions cyber readiness as a core part of economic resilience. The EU’s cybersecurity policies also explicitly support digital skills, market development, and cross-border data flows.
But to truly seize this moment, government officials need to state more clearly how these policies are being designed to meet it.
I recently attended the RSA Conference in the US and then travelled across both the UK and EU. Speaking with a variety of policymakers in different regions reminded me of the need to focus on partnerships, procurement and pivot in our cyber policy frameworks. I call these the “three Ps.”
Partnerships – Getting governments and the private sector on the same side of the table

High-profile attacks such as those on the NHS, retailers and TfL over the past year have really brought into focus the impact cyberattacks can have on the wider population, and how fragile our digital systems are.
Cyber threats and how cyber policy can protect AI, cloud systems, and critical infrastructure were among the top concerns in every conversation I had with government stakeholders across the UK and EU.
To deliver cyber policy, however, governments and industry must sit on the same side of the table, working together to reduce systemic risk; cybersecurity cannot be delivered top-down. This means moving beyond passive compliance checklists toward dynamic, data-driven collaboration.
Private sector businesses often possess advanced technological capabilities and gather vast amounts of data through their daily operations, offering invaluable insights into emerging cyber threats.
Government agencies, on the other hand, bring a broader geopolitical and strategic understanding that helps interpret private sector data within the context of national and international security threats.
Bringing the government’s geopolitical context and regulatory levers together with the private sector’s technical capabilities and real-time intelligence, creates far more effective policies and faster threat responses.
Governments need to go beyond self-attested best practices and design partnerships that actively analyze the data gathered to identify which behaviors and deterrents actually work within a nation’s unique risk environment.
For small and medium-sized businesses in particular, clear, practical guidance shaped in collaboration is often the difference between resilience and risk exposure.
Some governments are better than others at translating complex policy goals into actionable, plain-language directives, but this needs more intentional thought and design.
Procurement – Building success for the future

Economic growth will increasingly depend on digital infrastructure. For example, the UK government this year announced the AI Opportunities Action Plan and a £121 million investment boost for quantum technology. At the core of both announcements was how AI and quantum support the government’s economic mission.
Cybersecurity also plays a foundational role in the creation of resilient economic strategies. However, similar to intelligence sharing between the public and private sectors, the two parties often develop capabilities in silos that don’t work together. This leads to gaps in terms of the capabilities governments need and the solutions available to them on the market.
Cyber policy should guide how governments buy, fund, and signal the technologies they want to see in the market. This essentially means thinking about how the systems you build today will support success tomorrow.
We’re seeing governments improve in this area. For example, the NCSC’s guidance on post-quantum cryptography is a great example of future-focused leadership. While we don’t yet know when the "quantum year" will arrive, it’s encouraging to see progress and growing awareness that organizations need to be ready.
However, this alone is not enough. More incentives are needed to signal this as a priority for the private sector. Remember, procurement isn’t just a back-office function—it’s an economic strategy.
Research and Development (R&D) projects are an effective way to encourage collaboration and build momentum, and this is particularly needed in AI.
Britain, for instance, has some of the best universities and R&D centers in the world but loses talent to better-funded AI hubs. Governments have to create a long-term AI skills and R&D strategy that not only develops expertise but retains it.
Pivot! Pivot! Pivot!

In many of my conversations, stakeholders repeatedly used the word “pivot.” I was intrigued as to why this word came up so often. When pressed, I learned that what they really meant was “review.”
This is because not all regulations age well. You just have to look at the growing calls to review the Computer Misuse Act, for example. There’s a growing recognition across the UK and EU that some aspects of tech policy and investment need reviewing.
Some cybersecurity rules, though well-intentioned, may add a compliance burden—which in itself is a risk—without reducing actual cyber or business risk. Software misconfigurations, third-party supply chain risks, and emerging threats are not always addressed by the ever-growing complexity of overlapping regulations and rules designed to manage cyber risk.
This isn’t particularly new—we’ve long debated the balance between regulation and building trusted partnerships. While we want to open new frontiers for investment and innovation, it shouldn’t come at the expense of public trust.
However, this age-old argument is starting to shift. There’s greater recognition that the best way to maintain public trust isn’t necessarily through universal regulations, but through considered trade-offs.
Policymakers must be willing to pivot—reviewing what’s working, sunsetting what isn’t, and designing regulation that is adaptive, risk-based, and innovation-friendly.
The key is balance. Governments have to keep in mind the overall goal of policy: understanding the security of systems, minimizing the impact on resilience, and ensuring long-term economic growth.
Cyber is at the forefront of policy

Although I’ve had many different conversations with decision-makers, what struck me most was that security is no longer an afterthought; it’s now a central focus for governments.
From a private sector standpoint, cybersecurity is no longer a cost of doing business—it’s a condition for doing business. And it’s a competitive advantage waiting to be seized.
If the UK and EU want to continue enabling the next era of digital growth, they must treat cybersecurity policy as a suite of measures that enable economic growth – focusing on partnerships and procurement, and having the courage to pivot when necessary.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.
The pioneering, triple-folding Huawei Mate XT that launched in 2024 is due to get a successor later this year – and the latest rumor suggests the premium device is going to be unveiled around the same time as the Apple iPhone 17 series.
This information comes from well-known tipster Fixed Focus Digital on Chinese social media platform Weibo (via Android Headlines), who says we can expect to see the Huawei Mate XT 2 announced on Wednesday, September 10.
If you've been keeping pace with the flurry of iPhone 17 rumors in recent weeks, then you'll know those are pointing to Tuesday, September 9 as the big day for the grand unveiling of Apple's next flagship phones.
The usual iPhone upgrades are in the pipeline – a faster processor, better cameras, and so on – but there's no doubt that it's Huawei that will be unveiling the most innovative and exciting handset that week, if these rumors prove to be accurate.
When are we getting a foldable iPhone?

Apple has always taken a rather slow and steady approach to smartphone innovation, which helps explain why Huawei is now on its second tri-fold phone and Samsung is on its seventh round of foldables, while Apple has yet to even hint that a foldable iPhone is coming.
The most recent information we have suggests that Apple will finally launch a folding iPhone in September 2026, alongside the iPhone 18 line. After that, we might get treated to a new model every 12 months, as Apple gets more familiar with the manufacturing process.
Rumors indicate that Apple has been working hard to minimize the crease on its foldable iPhone, and we're expecting it to cost a fair bit too. Other leaks suggest it won't claim the title of the thinnest foldable phone when it appears.
A folding iPhone has been a long time coming, and we're looking forward to seeing it, but Apple has a lot of catching up to do at this point, with Samsung expected to launch its own tri-fold phone at some point in October.
The rumored iPhone 17 Air is an intriguing handset, as it could be a big hit for Apple – or a colossal failure, depending on how well Apple balances its build with its specs, and on how much people take to a super-slim iPhone. The latest leak, though, suggests Apple might not have got the balance right, at least for me.
According to Fixed Focus Digital – a leaker with a reasonable track record – posting on Weibo (via GSMArena), the iPhone 17 Air will have an A19 Pro chipset, just like the iPhone 17 Pro and the iPhone 17 Pro Max. Except, it won’t quite be the same here, as this source claims that the Air’s version will have five GPU cores, while the version used by the Pro phones will have six.
It remains to be seen how much difference that will make, but it would mean the iPhone 17 Air is less powerful than the iPhone 17 Pro and Pro Max – though still probably more powerful than the base iPhone 17, which will reportedly have a non-Pro version of the A19.
Battery, screen, and camera compromises

The iPhone 16 Pro (Image credit: Future / Lance Ulanoff)

But this isn’t the Air’s only rumored compromise, as the same source also says that it will have a worse screen and battery than the iPhone 17 Pro.
They don’t get specific here as to the ways in which they’re worse, but presumably this means a lower battery capacity. As for the screen, that’s probably a reference to a previous claim they made that the base iPhone 17 and iPhone 17 Air wouldn’t have a variable refresh rate, and in turn therefore probably wouldn’t have an always-on display – though they will at least apparently have 120Hz screens.
So that’s quite a lot of compromises, and – coupled with the iPhone 17 Air probably just having one rear camera – this would almost certainly be too much of a compromise for me. Really, it seems only those who value aesthetics over everything else would choose to purchase the Air.
Yet there might be a lot of buyers like that, with Fixed Focus Digital predicting that the iPhone 17 Air will be a hit. So it will be interesting to see how well it actually does. We should find out in September, as that’s when the entire iPhone 17 series is expected to launch.
Alien: Earth's cast have teased what fans can expect from their characters' complex relationship following their reunion in episode 2.
Speaking to TechRadar before Alien: Earth's two-episode premiere, Sydney Chandler and Alex Lawther suggested there'll be many moments of "tension" and "vulnerability" between their characters throughout the sci-fi horror show's next six entries.
Major spoilers immediately follow for Alien: Earth episodes 1 and 2. Turn back now if you haven't watched them yet.
Sydney Chandler plays Wendy, a synthetic with the consciousness of a child (Image credit: FX Networks/Hulu/Disney+)

As I briefly touched on in my Alien: Earth review, Chandler plays a Hybrid called Wendy. Created by Prodigy Corp., one of Earth's five multinationals, Hybrids are technological prototypes that see the consciousness of a child transferred into the body of an adult-sized synthetic. The reason children are used for such experiments is that their minds are more malleable than adults', so they won't reject the transformation process.
Episodes 1 and 2 of the Alien franchise's first-ever TV show reveal there's more to Wendy's creation than meets the eye, though. For one, Wendy isn't her real name, but rather the name this Hybrid picks for her transcendence. As we learn, her actual name is Marcy, and she was chosen to be the first Hybrid because she had a terminal illness.
That's not all. Marcy was the biological sister of Joe 'CJ' Hermit, a medic employed by Prodigy who's portrayed by Lawther. Instead of telling Joe the truth about what happened to his younger sibling, Prodigy claimed Marcy had died. Oh, and the nefarious megacorporation also lied to Marcy about why Joe couldn't visit her at their secret Neverland headquarters – Prodigy telling Marcy he was always too busy to pay her a visit.
Joe is involved in the search and rescue operation in Prodigy City (Image credit: FX Networks)

However, when Wendy learns that Joe is part of the search and rescue operation after a Weyland-Yutani deep space research vessel crash-lands on Prodigy City – a spaceship filled with terrifying creatures, no less – she convinces Prodigy CEO Boy Kavalier to send her, Kirsh, and her fellow Hybrids to aid the rescue effort. Long story short: Wendy/Marcy tracks down her brother, but it's not exactly the perfect reunion she was hoping for.
Considering he'd made peace with his sister's passing, it's easy to see why Joe can't grasp the fact that Marcy is somehow back from the dead. And, while Wendy/Marcy manages to convince Joe it's really her via a trip down memory lane in the Alien series' second chapter, it's clear that things can't go back to the way they were when the pair were kids.
"It's really fun to play with the vulnerability and innocence she carries, and marry that with the fact she's basically a weapon," Chandler said. "What does that do to the mind of a child? And what does that do to your sense of fear and your sense of identity?
"For Wendy, I think if she doesn't have her brother and that tie to her real life, her understanding of her identity could start to wobble," Chandler added. "It's very important for other reasons as well, but he's the only other person on earth who knows her as her full self. Everyone else is telling her she's something different, so she needs Joe to keep reminding herself of who she is."
"Nobody sees Marcy the way that Joe does," Lawther added. "It becomes that thing of longing for this person [Marcy] to be the person that they say they are, rather than what they seem to be [Wendy].
"And that causes tension between them," Lawther continued. "Joe's hanging onto this idea of his sibling who she can no longer be. He can't quite grasp this concept of this Hybrid being his sister, but being something else, too. He has a hard time recognizing the person that he lost and we'll see how that all unfolds as time goes on."
Alien: Earth episodes 1 and 2 are out now on Hulu (US) and Disney+ (internationally). New episodes air weekly.
The majority of workers say they are comfortable working with AI agents, though far fewer (30%) are comfortable being managed by them, new research has found.
The findings from Workday come as four in five (82%) organizations expand their use of AI agents, with workers now demanding clearer boundaries and reassurance about their roles.
On the whole, the study found workers are generally happier when they're in control of artificial intelligence, with 75% fine with AI tools recommending skills or working alongside them, compared with 24% who are comfortable with it operating in the background without their knowledge.
Workers prefer to know when AI is being used

How much a worker trusts AI comes down to how much they use it – 95% of experienced users trust the tech, while only 36% of AI 'explorers' trust it to be used responsibly.
"Building trust means being intentional in how AI is used and keeping people at the center of every decision," Workday AI VP Kathy Pham explained.
However, despite apprehension around advanced agentic AI taking control in the background, workers still acknowledge how it could help them.
Nine in 10 employees believe AI agents will help them get more done. Even so, nearly half (48%) worry that the added productivity could come with increased pressure at work, potentially through heavier workloads, as well as a decline in critical thinking (48%).
Rather than seeing AI as a human replacement and full colleague, most of the study's participants prefer to see AI as a teammate that can boost their own productivity. Sensitive areas like hiring, finance and legal matters are where it's perceived less favorably, underscoring the need for human oversight.
"We’re entering a new era of work where AI can be an incredible partner, and a complement to human judgement, leadership, and empathy," Pham added.
Still, despite early concerns, workers are less likely to worry about AI taking their jobs (12%), with most believing AI could actually help address ongoing talent shortages (76%).
Epson has launched four new projectors, and two of them will be of particular interest to Apple users. That's because for the first time, Epson is delivering AirPlay compatibility for retail home cinema projector purchasers – something that's been available in the likes of LG's Cinebeam range for some time.
The four projectors are all 3-chip 3LCD models and they're divided into two products: Home Cinema projectors, and Pro projectors. The AirPlay models also support Miracast.
The Home Cinema projectors are the Home Cinema 1100, which has AirPlay, and the Home Cinema 980, which hasn't.
The Pro models are the Pro EX9270 wireless projector, which is the AirPlay model, and the EX3290, which isn't.
The Pro EX9270 delivers 1080p at up to 300 inches and has AirPlay on board. (Image credit: Epson)

Epson's new projectors: key features and pricing

The $899 (around £660 or AU$1,370) Home Cinema 1100 with AirPlay is rated for 3,400 lumens of color and white brightness, and the $799 Home Cinema 980 is rated for 4,000 lumens.
Both deliver 1080p Full HD resolution at sizes up to 300 inches, both have picture skew sensors, and both have two HDMI ports. They feature Epson's 3-chip 3LCD technology that delivers "outstanding" images in a wide range of lighting conditions.
The $999 Pro EX9270 with AirPlay is rated for 4,100 lumens and the $649 Pro EX3290 is rated for 4,000. Like the Home Cinema projectors they too feature the 3LCD system and can throw images up to 300 inches; the EX9270 is full HD and the EX3290 is WXGA. There are twin HDMI ports, an image skew sensor and built-in speakers, and the Pro EX9270 also has 1.6x optical zoom.
All four projectors are available now directly from Epson and from authorized retailers.
Could AI be the answer to the UK’s productivity problem? More than half (58%) of organizations think so, with many experiencing a diverse range of AI-related benefits including increased innovation, improved products or services and enhanced customer relationships.
You don’t need me to tell you this – chances are you’re one of the 7 million UK workers already using AI in the workplace, whether you’re saving a few minutes on emails, summarizing a document, pulling insights from research, or creating workflow automations.
Yet while AI is a real source of opportunities for companies and their employees, pressure for organizations to adopt it quickly can inadvertently give rise to increased cybersecurity risks. Meet shadow AI.
What is shadow AI?

Feeling the heat to do more with less, employees are looking to GenAI to save time and make their lives easier – with 57% of office workers globally resorting to third-party AI apps in the public domain. But when employees start bringing their own tech to work without IT approval, shadow AI rears its head.
Today this is a very real problem, with as many as 55% of global workers using unapproved AI tools while working, and 40% using those that are outright banned by their organization.
Further, internet searches for the term “shadow AI” are on the rise – leaping by 90% year-on-year. This shows the extent to which employees are “experimenting” with GenAI – and just how precariously an organization's security and reputation hang in the balance.
Primary risks associated with shadow AI

If UK organizations are going to stop this rapidly evolving threat in its tracks, they need to wake up to shadow AI – and fast. The use of LLMs within organizations is gaining speed, with over 562 companies around the world engaging with them last year.
Despite this rapid rise in use cases, 65% of organizations still don't fully grasp the implications of GenAI. But each unsanctioned tool can introduce significant vulnerabilities, including (but not limited to):
1. Data leakage
When used without proper security protocols, shadow AI tools raise serious concerns about the vulnerability of sensitive content – for example, confidential information being leaked into an LLM as it learns from user inputs.
2. Regulatory and compliance risk
Transparency around AI usage is central to ensuring not just the integrity of business content, but users’ personal data and safety. However, many organizations lack expertise or knowledge around the risks associated with AI and/or are deterred by cost constraints.
3. Poor tool management
A serious challenge for cybersecurity teams is maintaining a tech stack when they don’t know who is using what – especially in a complex IT ecosystem. Comprehensive oversight is needed: security teams must have visibility and control over all AI tools in use.
4. Bias perpetuation
AI is only as effective as the data it learns from, and flawed data can lead to AI perpetuating harmful biases in its responses. When employees use shadow AI, companies are exposed to this risk, as they have no oversight of the data such tools draw upon.
The fight against shadow AI begins with awareness. Organizations must acknowledge that these risks are very real before they can pave the way for better ways of working and higher performance – in a secure and sanctioned way.
Embracing the practices of tomorrow, not yesterday

To realize the potential of AI, decision makers must create a controlled, balanced environment – one where they can begin to trial new processes with AI organically and safely. Crucially, this approach should sit within a zero-trust architecture that prioritizes essential security factors.
AI shouldn’t be treated as a bolt-on. Securely leveraging it requires a collaborative environment that prioritizes safety. This ensures AI solutions enhance – not hinder – content production. Adaptive automation helps organizations adjust to changing conditions, inputs, and policies, simplifying deployment and integration.
Any security experience must also be a seamless one, and individuals across the business should be free to apply and maintain consistent policies without interruption to their day-to-day. A modern security operations center features automated threat detection and response that not only spots threats but handles them directly, making for a consistent, efficient process.
Robust access controls are also key to a zero-trust framework, preventing unauthorized queries and protecting sensitive information. While these governance policies have to be precise, they must also be flexible to keep pace with AI adoption, regulatory demands, and evolving best practices.
Finding the right balance with AI

AI could very well be the answer to the UK’s productivity problem. But for this to happen, organizations need to ensure there isn't a gap in their AI strategy where employees feel limited by the tools available to them – a gap that inadvertently leads to shadow AI risks.
Powering productivity needs to be secure, and organizations need two things to ensure this happens – a strong and comprehensive AI strategy and a single content management platform.
With secure and compliant AI tools, employees are able to deploy the latest innovations in their content workflows without putting their organization at risk. This means that innovation doesn’t come at the expense of security – a balance that, in a new era of heightened risk and expectation, is key.
We list the best IT management tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
You can now create a WordPress website in minutes, with the help of Generative AI (GenAI), without needing a third-party website builder or AI tool. Everything can be done in WordPress directly, through a chat interface, and without the website builder’s branding showing anywhere on the site.
This is all courtesy of the website builder platform 10Web, which just announced the launch of its fully white-labeled AI website builder solution. It comes in the form of a WordPress plugin, and allows users to create a website inside their hosting stack without relying on a separate builder platform.
In a press release shared with TechRadar Pro earlier this week, 10Web says the new offering should help hosting providers further increase ARPU, reduce churn, and differentiate themselves through same-day AI website delivery.
“Hosting companies have been stuck selling blank WordPress installs,” said Arto Minasyan, Founder and CEO of 10Web. “With this solution, they can launch fully functional websites under their own brand in seconds. It’s the simplest way to deliver real customer value, without changing how they host or deploy WordPress.”
WooCommerce included

Usually, when a customer buys a hosting service, they get either a blank WordPress dashboard, or one bundled with themes and plugins. However, with the emergence of GenAI, expectations changed, and customers have gotten used to the “describe and build” experience, the company claims.
The company also claims “early tests” showed users were 30% more likely to publish their site compared to traditional WordPress onboarding flows, though it didn’t say when the tests took place, who was tested, or against what.
In any case, 10Web says the plugin is built on its proprietary AI technology which leverages advanced models from OpenAI, Gemini, and Anthropic. The sites are mobile-friendly, fully structured, and based on a “simple business description”.
When users create a site, they will see a branded AI flow that generates the entire website, including WooCommerce integration, if needed. Finally, everything is white-labeled with the hosting provider’s name and logo, and includes a visual editor with AI Co-Pilot.
More from TechRadar Pro

In boardrooms and investor meetings, artificial intelligence is now table stakes. AI tools are everywhere. Analysts are forecasting trillions in potential value. McKinsey estimates that generative AI alone could boost the global economy by up to $4.4 trillion a year.
And yet, in the enterprise? Something’s not clicking.
Despite the hype, most AI projects are still stuck in the sandbox; demo-ready, not decision-ready. The issue isn’t model performance. It’s operationalization. Call it the Enterprise AI Paradox: the more advanced the model, the harder it is to deploy, trust, and govern inside real-world business systems.
The heart of the paradox

At the heart of this paradox, McKinsey argues, lies a misalignment between how AI has been adopted and how it generates value.
Horizontal use cases, notably tools like Microsoft’s Copilot or Google's Workspace AI, proliferate rapidly because they're easy to plug in and intuitive to use. They provide general assistance, they summarize emails, draft notes, simplify meetings, and so on.
Yet these horizontal applications scatter their value thinly, spreading incremental productivity improvements so broadly that the total impact fades into insignificance.
As the McKinsey report puts it, these applications deliver "diffuse, hard-to-measure gains.”
In sharp contrast, vertical applications (those baked into core business functions) carry the promise of significant value but struggle profoundly to scale. Less than 10 percent of these targeted deployments ever graduate beyond pilot phases, trapped behind technological complexity, organizational inertia, and a lack of mature solutions. LLMs are extraordinary. But they’re not enough.
It’s like trying to run a Formula 1 car on a farm track

The real enterprise challenge isn’t building a big, clever model. It’s orchestrating intelligence across systems, teams, and decisions.
The world’s most innovative companies don’t want a single mega-model spitting out answers from a black box. They want a system that’s intelligent across the board: data flowing from hundreds of sources, automated agents taking action, results being validated, and everything feeding back into an improved loop.
That’s not one model. That’s many. Talking to each other. Acting with autonomy. And constantly learning from a dynamic environment.
This is the future of enterprise AI, and it’s what’s known as agentic.
What is agentic AI, and why does it matter?

Agentic AI systems are different from monolithic LLMs in one key way: they think and act like a team. Each agent is a specialist, trained on a narrow domain, given a clear role, and capable of working with other agents to complete complex tasks.
One might handle user intent. Another interfaces with an internal database. A third enforces compliance. They can run asynchronously, reason over real-time data, and retrain independently.
Think of it like microservices, but for cognition. Unlike traditional generative AI, which remains largely reactive (waiting passively for human prompting) agents introduce something entirely different. "AI agents mark a major evolution in enterprise AI - extending gen AI from reactive content generation to autonomous, goal-driven execution,” McKinsey researchers explain.
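That division of labor can be sketched in a few lines of Python. The pipeline below is purely illustrative – the agent roles mirror the intent, database, and compliance examples above, but every class name and piece of logic here is a hypothetical stand-in, not any vendor's framework or API:

```python
# Hypothetical sketch of an "agentic" pipeline: three specialist agents,
# each with one narrow job, passing a shared context along a chain.

class IntentAgent:
    def run(self, query: str) -> dict:
        # Classify the user's intent (stubbed with a keyword check)
        intent = "refund" if "refund" in query.lower() else "general"
        return {"query": query, "intent": intent}

class DataAgent:
    def run(self, ctx: dict) -> dict:
        # Fetch supporting records from an internal store (stubbed)
        records = {"refund": ["order #1042"], "general": []}
        ctx["records"] = records[ctx["intent"]]
        return ctx

class ComplianceAgent:
    def run(self, ctx: dict) -> dict:
        # Approve only actions that have both a known intent and evidence
        ctx["approved"] = ctx["intent"] != "general" and bool(ctx["records"])
        return ctx

def orchestrate(query: str) -> dict:
    # The orchestrator is the "team lead": it sequences the specialists
    ctx = IntentAgent().run(query)
    ctx = DataAgent().run(ctx)
    return ComplianceAgent().run(ctx)

result = orchestrate("I want a refund for my order")
print(result["intent"], result["approved"])
```

In a real deployment the stubs would be LLM calls, database queries, and policy engines, and the agents could run asynchronously rather than in a fixed sequence – but the shape is the same: narrow specialists, a shared context, and an orchestrator.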
This isn’t some speculative vision from a Stanford whitepaper. It’s already happening, in advanced enterprise labs, in the open-source community, and in early production systems that treat AI not as a product, but as a process.
It’s AI moving from intelligence-as-an-output to intelligence-as-infrastructure.
Why most enterprises aren’t ready (yet)

If agentic systems are the answer, why aren’t more enterprises deploying them?
Because most AI infrastructure still assumes a batch world. Systems were designed for analytics, not autonomy. They rely on periodic data snapshots, siloed memory, and brittle pipelines. They weren’t built for real-time decision-making, let alone a swarm of AI agents operating simultaneously across business functions.
To make agentic AI work, enterprises need three things:
Live data access – Agents must act on the most current information available
Shared memory – So knowledge compounds, and agents learn from one another
Auditability and trust – Especially in regulated environments where AI decisions must be traced, explained, and governed
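Two of those three ingredients – shared memory and auditability – are easy to picture in code. The sketch below is an illustration under assumed names (`run_agent`, `shared_memory`, and `audit_log` are invented for this example, not a real framework's API): every agent call writes its result to a store other agents can read, and appends a traceable record of the decision.

```python
# Illustrative only: a wrapper that gives any agent call shared memory
# (so knowledge compounds) and an audit trail (so decisions can be traced).
import json
import time

shared_memory = {}   # results visible to every agent
audit_log = []       # traceable record of each decision, for governance

def run_agent(name, fn, payload):
    """Run one agent, publish its output, and record the decision."""
    result = fn(payload)
    shared_memory[name] = result              # other agents can build on this
    audit_log.append({                        # who did what, with what input
        "agent": name,
        "input": payload,
        "output": result,
        "timestamp": time.time(),
    })
    return result

# A trivial "agent": flags invoices over a threshold for human review
flagged = run_agent(
    "invoice_checker",
    lambda invoices: [i for i in invoices if i["amount"] > 1000],
    [{"id": 1, "amount": 1500}, {"id": 2, "amount": 200}],
)

print(json.dumps(flagged))  # only the large invoice is flagged
```

Live data access, the third requirement, is the hard part in practice: the `payload` here is a static list, whereas a production agent would be reading from streams and systems of record that change underneath it.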
This isn’t just a technology problem; it’s an architectural one. And solving it will define the next wave of AI leaders.
From sandbox to system

Enterprise AI isn’t about making better predictions. It’s about delivering better outcomes.
To do that, companies must move beyond models and start thinking in systems. Not static models behind APIs, but living, dynamic intelligence networks: contextual, composable, and accountable.
The Agentic Mesh, as McKinsey calls it, is coming. And it won’t just power next-gen applications. It will reshape how decisions are made, who makes them, and what enterprise infrastructure looks like beneath the surface.
It isn’t simply a set of new tools bolted onto existing systems. Instead, it represents a shift in how organizations conceive, deploy, and manage their AI capabilities.
To really make this work, McKinsey says it’s time to wrap up all those scattered AI experiments and get serious about what matters most. That means clear priorities, solid guardrails, and picking high-impact "lighthouse" projects that show how it's done.
The agentic mesh isn't just a fancy architecture - it’s a call for leaders to rethink how the whole enterprise runs. Because real enterprise transformation won’t come from scaling a smarter model. It will come from orchestrating a smarter system.
We list the best AI chatbot for business.
Listening to entrepreneurs discuss the potential of AI cybersecurity will give you déjà vu. The discussions are eerily similar to how we once talked about cloud computing when it emerged 15 years ago.
At least initially, there was a surprisingly prevalent misconception that the cloud was inherently more secure than on-premises infrastructure. In reality, the cloud was (and is) a massive attack surface. Innovation always creates new attack vectors, and to say AI is no exception is an understatement.
CISOs are generally aware of AI’s advantages, and for the most part they’re similarly aware that it’s creating new attack vectors. Those who took the right lessons from the development of cloud cybersecurity are right to be even more hesitant about AI.
Within the cloud, proper configuration of the right security controls keeps infrastructure relatively static. AI shifts faster and more dramatically, and is thus inherently more difficult to secure. Companies that got burned by being overeager about cloud infrastructure are now hesitant about AI for the same reasons.
Multi-industry AI adoption bottleneck

The knowledge gap isn’t about AI’s potential to drive growth or streamline operations; it’s about how to implement it securely. CISOs recognize the risks in AI’s expansive attack surface.
Without strong assurances that company data, access controls, and proprietary models can be safeguarded, they hesitate to roll out AI at scale. This is likely the biggest reason why enterprise-level AI apps are only trickling out.
The rush to develop AI capabilities has created a multi-industry bottleneck in adoption, not because companies lack interest, but because security hasn’t kept pace. While technical innovation in AI has accelerated rapidly, protections tailored to AI systems have lagged behind.
This imbalance leaves companies exposed and without confidence to deploy at scale. Making matters worse, the talent pool for AI-specific cybersecurity remains shallow, delaying the hands-on support organizations need to integrate safeguards and move from adoption intent to execution.
A cascade of complicating factors

This growing adoption gap isn’t just about tools or staffing – it’s compounded by a broader mix of complicating factors across the landscape. Some 82% of companies in the US now have a BYOD policy, which complicates cybersecurity even absent AI.
Elon Musk’s Department of Government Efficiency (DOGE) has fired hundreds of employees at CISA, the US government’s cybersecurity agency, which worked directly with enterprises on cybersecurity measures. This erosion of institutional support only tightens the bottleneck.
Meanwhile, we’re seeing AI platforms like DeepSeek become capable of creating the basic structure for malware. Human CISOs, in other words, are trying to create AI cybersecurity capable of facing AI attackers, and they’re not sure how. So rather than risk it, they don’t do it at all.
The consequences are now becoming evident, and dealing a critical blow to adoption. It just about goes without saying: AI won’t reach its full potential absent widespread adoption. AI is not going to fizzle out like a mere trend, but AI security is lagging and inadequate and it’s clearly hampering development.
When “good enough” security isn’t enough

AI security is shifting from speculative to strategic. This is a market brimming with potential. Enterprises are grappling with the severity and scale of AI-specific threats, and the demand those challenges have created is attracting wider investor interest. Organizations have no choice but to secure AI to fully harness its capabilities. Those that aren’t hesitating are actively seeking solutions through dedicated vendors or by building internal expertise.
This has created a lot of noise. A lot of vendors claim to be doing AI red teaming, while really just offering basic penetration testing in a shiny package. They may expose some vulnerabilities and generate initial shock value, but they fall short of providing the continuous and contextual insight needed to secure AI in real-world conditions.
If I were trying to bring AI into production in an enterprise environment, a simple pen test wouldn’t cut it. I would require robust, repeatable testing that accounts for the nuances of runtime behavior, emergent attack vectors, and model drift. Unfortunately, in the rush to move AI forward, many cybersecurity offerings are relying on this “good enough” pen testing, and that’s not good enough for smart organizations.
The reality is that AI security requires a fundamentally different approach – this is a new class of software. Traditional models of vulnerability testing fail to capture how AI systems adapt, learn, and interact with their environments.
Worse still, many model developers are constrained by their own knowledge silos. They can only guard against threats they’ve seen before. Without continuous external evaluation, blind spots will remain.
As AI becomes embedded across sectors and systems, cybersecurity needs to provide genuinely suitable solutions. That means moving beyond one-time audits or compliance checkboxes. It means adopting dynamic, adaptive security frameworks that evolve alongside the models they’re meant to protect. Without this, the AI industry will stagnate or risk serious security breaches.
We list the best encrypted messaging app for Android.
Generative AI is changing the software development game. Beyond its capabilities in IT automation, the technology is also empowering professionals to contribute where it matters most: at the strategic level.
This is the case for developers, who are no longer confined to simply building applications but are increasingly involved in strategic business outcomes.
Gartner predicts that by 2028, 75% of enterprise software engineers will use AI. This figure doesn’t just represent technological advancement and the growing role of generative AI in enterprise software; it also serves as a wake-up call for businesses to rethink the role of their developers.
The connecting factor

Organizations need to recognize that developers are the connecting factor between their needs and digital solutions. Those that recognize this early on and harness developer expertise will be the ones that succeed.
This shift is already underway across multiple industries. In healthcare, for example, developers are addressing clinical needs by designing solutions that reduce operational friction, giving practitioners more time to focus on patient care.
In financial services, they are driving growth in a highly regulated and competitive environment, enhancing fraud detection while making financial services accessible and convenient for customers.
Meanwhile, in the retail sector, developers are elevating the technologies behind customer experiences to meet rising expectations. Across the board, developers are emerging as strategic innovators, leveraging technology, not just to solve problems, but to deliver meaningful business outcomes.
Developers are keepers of insight

Across sectors, businesses are beginning to rethink how they engage with their developers. The conversation is now shifting from basic interaction to empowering them to contribute strategically.
With their deeper understanding of the business's needs, developers are increasingly asking to be heard and consulted on innovation strategy, so they can better support business objectives.
Therefore, unlocking the potential of AI will require a mindset shift - one that acknowledges generative AI’s role not just in accelerating development but also in elevating the individuals behind it. To move forward, organizations need to recognize the value that developers bring to the table, including solving the issues that generative AI alone cannot.
Empowering developers: how low-code and AI are redefining complexity

Object-oriented low-code and no-code platforms and generative AI have fundamentally changed how developers can leverage their business relevance in their organizations. By eliminating some of the complexity of line-by-line coding, these tools allow developers to move quickly from idea to implementation, creating more room for innovation, experimentation, and collaboration with other stakeholders.
As a result, developers are finding it easier to take a much bigger seat at the table, thereby helping to guide business strategy. Developers bring unique value: they are embedded in systems, close to the problems that need solving, and often have first-hand insight into operational inefficiencies and user frustrations. They understand the organization not just from a technical perspective, but from a business one.
Low-code and generative AI free developers from repetitive, technical tasks and enable them to focus on solving real business problems. As a result, developers are no longer just responding to requirements - they are helping to shape them. This shift gives developers a greater voice in strategic discussions and positions them as key players in driving business success.
Generative AI as a copilot

Generative AI copilots go beyond traditional tools by actively assisting developers throughout the software development lifecycle. Instead of working within rigid frameworks that slow innovation, developers can now brainstorm ideas, instantly generate code, receive intelligent suggestions, and automate repetitive routine tasks like debugging and documentation. These copilots act as intelligent partners, freeing developers to concentrate on solving high-impact problems faster.
The critical advantage of a DevOps team with time on its hands is its ability to engage more proactively with the overall direction of the business. Generative AI amplifies the value of human insight by enabling developers to focus on the work that matters most: creativity, judgement, and a deep understanding of organizational context. In addition, when generative AI is paired with low-code, developers have a copilot aiding them on the journey to create better applications and services for the industry they work in.
Developers delivering value across industries

The industry where this shift is most apparent is healthcare. The development of applications in this sector isn’t just about building tools; more importantly, it's about reducing friction for clinical practitioners and returning time to patient care.
Developers who understand the pain points and frustrations clinical staff face are better equipped to create applications that minimize these complexities. Generative AI and low-code development platforms make it possible to quickly build, iterate, and improve these tools, resulting in better alignment between healthcare technology and frontline needs.
Another telling example is the financial services sector where 75% of financial firms already use AI. Developers are able to redirect their focus from routine tasks and offer value by modernizing legacy systems, streamlining compliance and enhancing fraud detection, all while supporting rapid product innovation.
Building solutionsIn a tightly regulated industry, their ability to build secure, efficient and customer-centric solutions is critical. Developers offer real value by creating solutions without compromising safety or security. With AI, developers can move faster, meet regulatory requirements, and deliver personalized experiences that build trust and retention.
In retail, developers are using customer feedback to solve friction points in the shopping journey. They are building tools that personalize the user experience, boost satisfaction and increase sales. With AI and low-code automating routine tasks, developers can focus on innovation, from responding to consumer trends to improving supply chain resilience.
Across sectors, the combination of generative AI, low-code platforms, and developer insight is accelerating innovation and unlocking strategic business value.
Time to move the needle

It is time for businesses to move the needle: not just adopting generative AI, but empowering developers to lead innovation. With generative AI and AI-powered low-code, developers free up time they can reinvest in strategic business needs. Thanks to their strong understanding of those needs and pain points, developers can shape solutions that align digital capabilities with business objectives.
Successful businesses will be those that recognize that AI will not be replacing developers but rather promoting them to more strategic roles.
We list the best sites for hiring developers.
The Samsung Galaxy Z Flip 7 might just reel me back into using a vertical-style flip phone. I used to count on the Galaxy Z Flip 5 before I got my S24 Ultra, finding the handset to be more than capable of keeping up with my daily needs while also offering an immense level of cool. The Galaxy Z Flip 7 keeps the cool factor going, with an exceptionally minimized crease and a cover display that I just can’t help but love.
If you’re not aware, the cover screen is an essential component of any foldable smartphone. It provides functionality when the phone is folded shut like a clamshell and the primary screen is inaccessible. It can also be used to take selfies using the rear cameras, conveniently placed at the bottom of the cover screen, or top of the phone.
Now, for a lot of foldables, the cover screen isn’t feature rich – by default. To maintain a seamlessly premium feel, Samsung actively restricts how much a user can do with the screen to a handful of supported widgets and cover screen elements. It’s not a bad idea and it keeps the level of polish to a high standard, but some folks, like me, may be left wanting to do more with the conveniently small screen. Thankfully Samsung has an easy solution to this – Multistar.
Give Multistar a go

(Image credit: Zachariah Kelly / TechRadar)

Multistar isn’t new. It’s been around for several generations of the Galaxy Z Flip, but it’s always been limited by how small the cover screen is. That’s no longer an issue with the Galaxy Z Flip 7, with its cover screen spanning the entire top of the folded phone.
Multistar is an essential piece of the puzzle. This official Samsung extension, accessible through the phone’s cover screen settings and then downloaded from the Galaxy Store, allows you to put apps directly onto one of the widget menus of the cover screen, allowing you to swipe through Bluesky or even play games like Crossy Road.
It’s not a complete solution – the screen doesn’t display notification bar information, navigating between apps is extremely basic (limited to a single swipe up) and indeed some apps are still inaccessible, such as Samsung’s own contacts and phone apps – but it does feel more useful than previous generations of the Flip, and I feel like I could sufficiently use much of my smartphone with just this small screen and my selected apps.
(Image credit: Zachariah Kelly / TechRadar)

But is it worth the extra cost?

As much as I love the cover screen, its functionality, and the concept of a compact, square phone over a plain rectangle, it's hard to justify the higher price – especially with a more affordable option on the market.
Alongside the Galaxy Z Flip 7, Samsung also released the Z Flip 7 FE, a cheaper handset with many of the same specs found in the Z Flip 6 – including its smaller cover screen that’s capable of a lot of the same functionality. Similarly, I’d recommend checking out Motorola’s Razr range of foldable smartphones, as those can be used with similar utility when it comes to apps at a more accessible price.
For now though – I’m a big fan of the funny little square I’ve been using instead of a boring rectangle.
You might also like...

Anthropic has given Claude a memory upgrade, but it will only activate when you choose. The new feature allows Claude to recall past conversations, providing the AI chatbot with information to help continue previous projects and apply what you've discussed before to your next conversation.
The update is coming to Claude’s Max, Team, and Enterprise subscribers first, though it will likely be more widely available at some point. If you have it, you can ask Claude to search for previous messages tied to your workspace or project.
However, unless you explicitly ask, Claude won’t cast an eye backward. That means Claude will maintain a generic sort of personality by default. That's for the sake of privacy, according to Anthropic. Claude can recall your discussions if you want, without creeping into your dialogue uninvited.
By comparison, OpenAI’s ChatGPT automatically stores past chats unless you opt out, and uses them to shape its future responses. Google Gemini goes even further, employing both your conversations with the AI and your Search history and Google account data, at least if you let it. Claude’s approach doesn't pick up the breadcrumbs referencing earlier talks without you asking it to do so.
Claude remembers

Adding memory may not seem like a big deal. Still, you'll feel the impact immediately if you’ve ever tried to restart a project after days or weeks away without a helpful assistant, digital or otherwise. Making it an opt-in choice is a nice touch in accommodating how comfortable people are with AI currently.
Many may want AI help without surrendering control to chatbots that never forget. Claude sidesteps that tension cleanly by making memory something you summon deliberately.
But it’s not magic. Since Claude doesn’t retain a personalized profile, it won’t proactively remind you to prepare for events mentioned in other chats or anticipate style shifts when writing to a colleague versus a public business presentation, unless prompted mid-conversation.
Further, if there are issues with this approach to memory, Anthropic’s rollout strategy will allow the company to correct any mistakes before it becomes widely available to all Claude users. It will also be worth seeing if building long-term context like ChatGPT and Gemini are doing is going to be more appealing or off-putting to users compared to Claude's way of making memory an on-demand aspect of using the AI chatbot.
And that assumes it works perfectly. Retrieval depends on Claude’s ability to surface the right excerpts, not just the most recent or longest chat. If summaries are fuzzy or the context is wrong, you might end up more confused than before. And while the friction of having to ask Claude to use its memory is supposed to be a benefit, it still means you'll have to remember that the feature exists, which some may find annoying. Even so, if Anthropic is right, a little boundary is a good thing, not a limitation. And users will be happy that Claude remembers that, and nothing else, without a request.
You might also like
Windows 10 reaches its End of Life in October 2025, and a California resident is particularly disgruntled about this looming deadline.
He isn't alone, of course, but Lawrence Klein feels strongly enough that Microsoft is out of order in bringing the shutters down on Windows 10 in just a couple of months that he has fired up a lawsuit against the company.
As The Register reports, Klein has accused Microsoft of violating consumer legal code and business code (including false advertising law) by winding up support for Windows 10 too early, in his opinion.
The crux of the argument is twofold: that too many people remain on Windows 10 for the operating system to have support pulled (there are nuances here, which I'll come back to), and that some 240 million devices don't meet the hardware requirements to upgrade to Windows 11 – because Microsoft set those PC specifications at an unreasonable level – with all the potential for an e-waste nightmare that entails.
In short, Klein contends that the hardware requirements for Windows 11 - including TPM 2.0 security, and CPU cut-offs that rule out some surprisingly recent processors - aren't justified.
Furthermore, Klein argues that this upgrade timeline is all part of Microsoft's drive to push folks to use its Copilot AI with Windows 11, in a broader push to get more adoption for Copilot+ PCs - in other words, to buy new machines and discard old Windows 10 hardware (and again, we're back to that e-waste issue).
You can read the lawsuit in its entirety (it's a PDF) here, but that's the gist, and Klein argues that Microsoft should postpone killing off Windows 10 and wait until far fewer people are using the older operating system.
As the suit states: "[The] Plaintiff seeks injunctive relief requiring Microsoft to continue providing support for Windows 10 without additional fees or conditions until the number of devices running the operating system falls below a reasonable threshold, thereby ensuring that consumers and businesses are not unfairly pressured into unnecessary expenditures and cybersecurity risks [of running a Windows 10 PC without security updates]."
Is Klein justified in this lawsuit? In some respects, I think so, and while I don't imagine for a minute that this legal action will go anywhere in terms of the outcome of the suit itself, I've a feeling it could come into play, and be important, indirectly.
What do I mean by that exactly? Well, let's dive into the thinking behind Klein's lawsuit, and the key reasons why it might force Microsoft to sit up and take notice.
1. Windows 11's hardware requirements really are unreasonable
Do we really need TPM 2.0 forced upon us? Yes, it ushers in a better level of security, I don't dispute that – but hundreds of millions of PCs potentially heading to landfill seems too heavy a price to pay. For me, as already mentioned, the decision to rule out some relatively new CPUs in the Windows 11 specs is particularly baffling.
The key point here is that Microsoft has never pushed the PC hardware requirements as hard as it has with Windows 11, and that leaves it open to criticism, although this observation is nothing new. What is new, though, is that the lack of fairness in setting this higher hardware bar has become crystal clear with the number of people who are still using Windows 10, which brings us onto my next point.
2. This close to End of Life, there are clearly too many people still using Windows 10
The lawsuit cites outdated figures as to how many folks are still on Windows 10 - an estimate drawn from April 2025 suggests that 53% remain on the older OS. While that's no longer the case, the level remains high.
Based on the latest report from StatCounter (for July 2025), Windows 10 usage is 43%, which is very high with the End of Life deadline imminent. Normally, an outgoing Windows version would have way fewer users than this - Windows 7 had a 25% market share when it ran out of support (and it was a popular OS).
There are always holdouts when a new version of Windows comes out, but it's looking like this is going to be really bad with Windows 10's end of support. This is Klein's central argument, and I think it's a key factor that Microsoft doesn't appear to be taking into account - or perhaps doesn't want to face up to.
Maybe the software giant is thinking there'll be a last-minute flood of Windows 11 migration, but given the outlined hardware requirements problem, I doubt it.
3. Proving the cynics right?
Another part of Klein's case against Microsoft is the assertion that the company is using Windows 10's end of support and Windows 11 upgrades to persuade people to buy new PCs that major in AI, namely Copilot+ PCs. And indeed Microsoft hasn't helped itself here, openly pushing these Windows 11 devices as the lawsuit points out – and that includes intrusive full-screen advertisements piped to Windows 10 machines.
That feels like a crass tactic, and makes it seem like part of this is indeed about pushing those Copilot+ laptops. Yes, by all means, advertise Copilot+ PCs and their AI abilities (which are limited thus far, I should note) – but don't do it in this way, directly at Windows 10 users, and expect that to be viewed in anything other than a negative and cynical light.
4. Microsoft has already made a concession, true - but it's not enough
It's worth noting that not everything Klein puts forward in this suit seems reasonable. I don't think you can argue that the 10 years of support Microsoft has given Windows 10 is stingy. However, Klein focuses instead on 'transitional' support - the length of support after a succeeding OS has launched, four years in this case - and while that figure does look lean, singling it out isn't entirely fair. The real problem isn't the length of support as such, but the changed circumstances around hardware requirements.
Also, calling Windows 11 'wildly unpopular' as Klein does at one point is equally unfair - even if adoption of the operating system has been very sluggish, admittedly. There's a definite bias towards shooting the OS down across all fronts, and I think that weakens Klein's argument.
But my main bone of contention here is that Klein ignores the concession Microsoft has made in terms of the extended year of support for consumers who want to stay on Windows 10. As the lawsuit states, this extra support through to October 2026 can be had for the price of $30, but recently, Microsoft introduced the ability to get this extension for free, well, kind of. (Financially, you won't pay a penny, but you need to sync some PC settings to OneDrive, and I don't think that requirement is too onerous myself.)
That was an important move by Microsoft, which it isn't given any credit for here, but that said, I still don't think the company goes far enough. As I've said before, an extra year of support is certainly welcome, but Microsoft needs to look at a further extended program for consumers.
So, while the lawsuit does go off the rails (at least for me) around these issues, it does effectively put a spotlight on a different way of measuring support. Instead of talking about 'x' years of extended coverage, it proposes a threshold that Windows 10 usage must fall below before Microsoft can pull the plug on support for the OS.
I think that's a valuable new angle on this whole affair, and while 10% of total Windows users – which is the low bar Klein sets for Windows 10 – maybe feels too low, there's an interesting conversation to be had here. (The other route Klein's suit suggests, which others have raised, is Microsoft simply relaxing the hardware requirements for Windows 11 - but I think at this stage of the game, we can safely conclude that this won't be happening.)
5. Under pressure
My final point in terms of why this lawsuit could prove a compelling kick in the seat of the pants for Microsoft is that while, as already observed, I can't see Klein triumphing over the company, it's more fuel to the fire in the campaign to stave off a potentially major e-waste catastrophe.
Simply put, the PR around this – and it has been spinning up headlines aplenty over the past couple of days – is another reason for Microsoft to sit up, take notice, and maybe do some rethinking over exactly how Windows 10's End of Life is being implemented.
We've already seen one concession – the aforementioned free route to get extended support for Windows 10 – in recent times, which surely must have been a reaction to the frustration that Klein and many others feel. So, perhaps this lawsuit could be the catalyst to prod Microsoft into going further in its appeasement of the unhappy Windows 10 users out there – fingers crossed, at any rate.
You might also like
Tesla has shut down its Dojo supercomputer team, in what appears to be a major shift in the company’s artificial intelligence plans.
Reports from Bloomberg claim the decision followed the exit of team leader Peter Bannon and the loss of about 20 other staff members to a newly formed venture called DensityAI.
The remaining team members will now be reassigned to other computing and data center projects within Tesla.
Leadership exit triggers Tesla shake-up
The Dojo system was originally developed around custom training chips designed to process large amounts of driving data and video from Tesla’s electric vehicles.
The aim was to use this information to train the company’s autonomous driving software more efficiently than off-the-shelf systems.
However, CEO Elon Musk said on X it no longer made sense to split resources between two different AI chips.
Tesla has not responded to requests for comment, but Musk has outlined the company’s focus on developing its AI5 and AI6 chips.
He said these would be “excellent for inference and at least pretty good for training” and could be placed in large supercomputer clusters, a configuration he suggested might be called “Dojo 3.”
The company’s shift away from the Dojo project comes amid broader restructuring efforts that have seen multiple executive departures and thousands of job cuts.
Tesla has also been working on integrating AI tools such as the Grok chatbot into its vehicles, expanding its AI ambitions beyond self-driving technology.
Tesla’s plans for AI computing infrastructure and chip production after Dojo rely heavily on outside suppliers: Nvidia and AMD are expected to provide computing capabilities, while Samsung Electronics will manufacture chips for the company.
Samsung recently secured a $16.5 billion deal to supply AI chips to Tesla, which are expected to power autonomous vehicles, humanoid robots, and data centers.
Musk has previously said Samsung’s new Texas plant will produce Tesla’s AI6 chip, with AI5 production planned for late 2026.
For now, Musk appears confident that Tesla’s chip roadmap will support its ambitions.
But with the original Dojo team largely gone and reliance on external partners increasing, the company’s AI trajectory will depend on whether its new chips and computing infrastructure can deliver the results Musk has promised.
You might also like