Nvidia's RTX 5000 series GPU lineup launch is seemingly nearing completion - though the desktop RTX 5060 hasn't been officially announced yet - with the RTX 5070 slated for release on March 5. However, we may already have a bit of insight into what this midrange GPU could bring to the table.
As reported by Wccftech, benchmark leaks suggest the RTX 5070 will offer up to 20% better performance than its predecessor, the RTX 4070. This conflicts with Nvidia's claims at CES 2025 that the RTX 5070 would be equivalent to the RTX 4090 in performance - notably, the RTX 5080 and 5070 Ti's uplifts over previous-gen GPUs were not significant enough to fully corroborate Team Green's other claims either.
The RTX 4070 is a significantly weaker GPU compared to the RTX 4090, so the claims of the RTX 5070's performance were far-fetched, to say the least (even considering DLSS 4's impressive Multi Frame Generation feature). In terms of the direct comparison to its predecessor though, the uplift (if legitimate) could be a reasonable one - Wccftech highlighted the RTX 5070 scoring 187,414 in a Vulkan (a graphics API used in most AAA games) benchmark versus the RTX 4070's 156,601 points.
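For clarity, the roughly 20% figure follows directly from those two leaked scores. A quick sanity check - assuming the reported numbers are accurate, and using a small helper function of my own rather than anything from the benchmark itself - looks like this:

```python
def uplift_percent(new_score: int, old_score: int) -> float:
    """Percentage improvement of new_score over old_score."""
    return (new_score / old_score - 1) * 100

# Leaked Vulkan benchmark scores reported by Wccftech
rtx_5070 = 187_414
rtx_4070 = 156_601

print(f"{uplift_percent(rtx_5070, rtx_4070):.1f}%")  # prints "19.7%"
```

So "up to 20%" checks out against the leaked numbers - a healthy generational step, but nowhere near RTX 4090 territory.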
Fortunately, reports suggest the RTX 5070 likely won't face the same stock and supply issues seen with the already-launched Blackwell GPUs. If prices aren't inflated far above MSRP (as they were with the flagship RTX 5090), this could be a solid GPU upgrade - just don't expect RTX 4090 performance.
(Image credit: Nvidia)
Is MFG a good reason for an upgrade?
The suggested generational performance uplift in Vulkan for the RTX 5070 sounds decent, but Team Green's Multi Frame Generation (MFG) has the potential to take this up a notch. I'm well aware of the criticism surrounding frame-generation software ("fake frames", I know), but since ghosting is significantly reduced with the new model, it's an addition that shouldn't be ignored.
DLSS 4's improvements for all RTX GPU users are enough for me to suggest sticking with your current GPU (if compatible) - DLSS 4's super-resolution performance mode in particular looks equivalent to, if not better than, DLSS 3's quality mode in games, and you can still enjoy a steady frame rate.
If you already have an RTX 4000 series GPU (which has DLSS 3's regular Frame Generation, not MFG), you're likely better off sticking with it - it's also been revealed that some Blackwell GPUs are shipping with missing ROPs, an inconvenience I'd certainly want to avoid.
However, if you're still on an RTX 3000 series GPU, an upgrade to an RTX 5070 seems sensible enough based on these benchmark leaks. Its launch is right around the corner, and I'm hoping there aren't any major issues this time.
OpenAI has confirmed it recently identified a set of accounts involved in malicious campaigns, and has banned the users responsible.
The banned accounts involved in the ‘Peer Review’ and ‘Sponsored Discontent’ campaigns likely originate from China, OpenAI said, and “appear to have used, or attempted to use, models built by OpenAI and another U.S. AI lab in connection with an apparent surveillance operation and to generate anti-American, Spanish-language articles”.
AI has facilitated a rise in disinformation, and is a useful tool for threat actors to use to disrupt elections and undermine democracy in unstable or politically divided nations - and state-sponsored campaigns have used the technology to their advantage.
Surveillance and disinformation
The ‘Peer Review’ campaign used ChatGPT to generate “detailed descriptions, consistent with sales pitches, of a social media listening tool that they claimed to have used to feed real-time reports about protests in the West to the Chinese security services”, OpenAI confirmed.
As part of this surveillance campaign, the threat actors used the model to “edit and debug code and generate promotional materials” for suspected AI-powered social media listening tools - although OpenAI was unable to identify posts on social media following the campaign.
ChatGPT accounts participating in the ‘Sponsored Discontent’ campaign were used to generate comments in English and news articles in Spanish, consistent with ‘spamouflage’ behavior, primarily using anti-American rhetoric, probably to spark discontent in Latin America - namely Peru, Mexico, and Ecuador.
This isn’t the first time Chinese state-sponsored actors have been identified using ‘spamouflage’ tactics to spread disinformation. In late 2024, a Chinese influence campaign was discovered targeting US voters with thousands of AI-generated images and videos, mostly low-quality and containing false information.
2024 was quietly a revolutionary year for TVs, because the best mini-LED TVs suddenly went from being a premium technology that occasionally dipped into the mid-range to one that spans every tier, from premium all the way down to budget, with a genuinely solid option for everyone.
In the US, we raved about the Hisense U8N and the TCL QM851G for delivering unbelievable bang for your buck. The Hisense U7N quickly staked a claim as the top-value mid-range TV, with some unbelievable deals available despite it packing great brightness and 4K 120Hz support to compete with the best gaming TVs.
In the UK, we raved about the TCL C855K and the TCL C805K, the latter of which is obscenely inexpensive for something that looks that good.
If anything, the aggressive pricing on these TVs creates the issue of them nearly crashing into each other – we gave the Hisense U6N a less effusive score than the U7N or U8N, but that was mostly because it cost far too close to the U7N while offering a notable step down in features.
But over time, the U6N has started dropping to some absolutely ridiculous prices, and there's a deal on the 75-inch model right now that is absolutely the budget giant-screen mini-LED I'd buy if I were in the market for one.
Today's best deal on the Hisense U6N
The Hisense U6N is an impressive mini-LED screen, delivering bright images that are great for sports, TV and movies in particular. Mini-LED's control of contrast means the picture quality holds up across the huge screen of this 75-inch TV, unlike more basic TV panels. You don't get 4K 120Hz gaming, and the picture weakens if you sit too far off-center, but when it comes to bang for your buck in a big-screen TV, this is the best you can get at the time of writing.
• In the UK? Get the Hisense U6N 75-inch for £799 at Amazon UK
It's funny to think that the first mini-LED TVs only launched in 2021 – making them far, far younger than the 11 years that the best OLED TVs have been a staple of the TV market – and yet they've already moved from launching as a high-end only option to covering the whole range of budgets… something OLED has never managed, and doesn't look likely to any time soon.
OLED has certain high costs around the reliability and complexity of the manufacturing process that haven't been fundamentally solved, and the price simply can't come down much further while these continue. Even new inkjet-printed methods – which are finally beginning to become real for OLED monitors – look unlikely to solve this for TVs in the near term.
TCL, which is leading on this technique, told me there will still be manufacturing reliability problems with larger screens that mean it won't be as cost-effective for TVs immediately, so we won't see it used for that yet – though we might in a few years.
As a result, the cheapest 77-inch OLED TV you can get is the LG B4 for $1,796 – more than $1,000 pricier than the Hisense U6N. Yet the U6N has far superior full-screen brightness (though it can't match the B4's contrast, viewing angles, or gaming features).
We tested the U6N's brightness and color accuracy, and it's amazingly accurate and consistent on colors out of the box, matching the more expensive Samsung QN85D. (Image credit: Future)
Mini-LED, however, offers all kinds of ways for manufacturers to be flexible on price. For a start, they can change exactly how mini those mini-LEDs are, and how many of them there are. They can change how many different dimming zones they work in. They can use a different LCD pixel panel to bring the price down. And the cost of the parts themselves comes down as they become more common.
The technology doesn't need the delicate manufacturing touch that OLED does, and advancements to the technology don't always need to cost much, if any, more than the last version of the tech – the next big thing in mini-LED TVs is RGB backlighting, and Samsung told me that it doesn't expect its version of the tech to cost more than current mini-LED tech (though we'll see what happens in practice).
The TVs from 2024 that are still around may get cheaper again by the time they get replaced by the 2025 models – and the 2025 models will likely be the same kind of price, but with bolder colors and even more impressive contrast at all budgets. I've especially got my eye on the Hisense QM7K, which looked beautiful at CES 2025, will have a mid-range price, and will go all the way up to 98 inches.
I'm an OLED guy. I have an OLED TV at home that I recently upgraded, and that technology is where I find that the most interesting tech developments are happening – at least in terms of tech that's making it into fairly mainstream TVs.
But I do a lot of recommending of TVs for other people, and this is the year I found things really tipping in mini-LED's favor in most of my conversations. Not so much at the high end, but at the "You don't have to buy premium to get a seriously good experience, just look at these mid-range models" end. And then also at the low-end, where I find myself saying that you only need to spend a tiny bit more to get a big step up to a mini-LED set from a basic LED set, which will be a better long-term investment.
Mini-LED is only going to cement its place as the people's TV tech this year. A genuinely good 75-inch 4K TV for under $600? Hail to the king.
Intel has unveiled its latest range of data center hardware as it looks to keep pace with the likes of Nvidia and AMD.
The new Xeon 6 processors with P-cores provide a major boost in power and intelligence as Intel states its claim to stay at the top of the charts when it comes to AI processing and other crucial enterprise tasks.
But the new series isn't limited to the data center, with Intel promising improved performance across network and edge infrastructure alongside server and data center workloads.
Intel Xeon 6 with P-cores
“We are intensely focused on bringing cutting-edge leadership products to market that solve our customers’ greatest challenges and help drive the growth of their business,” said Michelle Johnston Holthaus, interim co-CEO of Intel and CEO of Intel Products.
“The Xeon 6 family delivers the industry’s best CPU for AI and groundbreaking features for networking, while simultaneously driving efficiency and bringing down the total cost of ownership.”
Designed for the data center, the new Intel Xeon 6700P/6500P series features the company's P-cores, offering what it says is the "perfect balance between performance and energy efficiency".
With up to 86 cores on offer, Intel is promising an average of 1.4x improved performance across a number of enterprise workloads compared to the previous generation, and 1.5x better on-chip AI inference performance compared to AMD's 5th-Gen EPYC, whilst also using one-third fewer cores.
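Those two figures can be combined into a rough per-core comparison – purely a back-of-the-envelope sketch, assuming Intel's 1.5x and one-third-fewer-cores claims both hold exactly as stated:

```python
# Intel's claim: 1.5x the AI inference performance of AMD's 5th-Gen EPYC,
# achieved while using one-third fewer cores.
relative_performance = 1.5
relative_core_count = 1 - 1 / 3  # i.e. two-thirds as many cores

# Implied per-core advantage if both claims are taken at face value
per_core_advantage = relative_performance / relative_core_count
print(f"{per_core_advantage:.2f}x")  # prints "2.25x"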
But this power also brings improved efficiency, allowing for much greater consolidation of five- or even ten-year-old servers, with Intel saying the Xeon 6 with P-cores is an ideal option for businesses looking to refresh aging infrastructure to better deal with new AI tasks.
Describing the new offerings as the "world's best CPU for AI", Intel says the Xeon P-core chips offer more bandwidth and cache, with up to 504MB of low-latency last-level cache (LLC) and support for MRDIMM memory, alongside built-in AI accelerators and a comprehensive software suite spanning classical ML and small GenAI models.
(Image credit: Intel)
The Xeon 6 for network and edge is Intel's most-developed SoC, designed for a wide range of use cases, and again promising greater performance and efficiency than ever. The chip includes Intel vRAN Boost built-in, allowing for up to 2.4x the capacity for RAN networks, which could be vital as demand for such connections continues to increase.
The Xeon 6 will also be the first in the industry to feature a built-in media accelerator, with the Intel Media Transcode Accelerator offering up to a 14x performance-per-watt gain versus previous models.
Intel says both releases will be ideal for businesses looking to expand and evolve their AI-ready workforces and processes, allowing them to optimize workloads, reduce costs and build networks that are flexible and scalable when needed.
The company says more than 500 designs are either available now or in progress, with top OEMs such as Dell, Samsung, Ericsson, HPE, Lenovo and many others already signed up.
Internal chat logs detailing the inner workings of the Black Basta ransomware group were just leaked online.
An individual (or a group) with the alias ExploitWhispers has apparently pulled the information from Matrix, an open source, decentralized communication protocol used for secure and real-time messaging. Matrix is often used for encrypted chats, making it popular among cybersecurity professionals, privacy advocates, but also, unfortunately, cybercriminals.
ExploitWhispers first uploaded the archive to MEGA, but after it was pulled down, they set up a dedicated Telegram channel and leaked it there.
Targeting domestic banks
“A place to discuss the most important news about Black Basta, one of the largest groups of health workers in Russia, which recently hacked domestic banks,” the leakster said on Telegram. “With such matters, we can say that they crossed the border, so we are dedicated to revealing the truth and exploring the next steps of Black Basta. Here you can find information that you can trust, and read all the most important in one channel.”
Whoever ExploitWhispers is, they weren’t happy with what Black Basta has been doing recently. They could be a disgruntled member, or a security researcher.
In any case, Black Basta was allegedly targeting Russian banks, which didn’t sit well with them.
The leak covers chats between September 2023 and September 2024, and contains valuable information about the group’s internal structure.
An individual called Lapa is one of the admins. Cortes is a threat actor with links to the Qakbot group, YY is the main admin, and Trump is the key figure. There are some indications that Trump’s real name might be Oleg Nefedov.
It also shows the group’s phishing templates, emails, cryptocurrency addresses, data drops, victim credentials, and more.
Analyzing the data dump, BleepingComputer said the archive also contains 367 unique ZoomInfo links, which could indicate the number of companies targeted during this period.
Via BleepingComputer
Financial services firms are increasingly investing in the cloud, especially as technology such as generative AI matures, opening up opportunities to scale, streamline, and personalize services. Yet there’s still a gap between the volume of dollars spent and the value generated. Simply stated, countless banks and insurers could do more to maximize their investments.
Why not? In our view, it is because some haven’t truly adopted a bold, cloud-first approach. As a result, they’re missing out on ways to fully capitalize on opportunities that deliver quantifiable dollar value. Maybe their data quality is poor, or too often it remains in silos across legacy systems instead of integrated with cloud-based infrastructure.
In comparison, organizations that have gone all in are enjoying big benefits. Case in point: in Q1 2024, Nubank reported a staggering 64 percent year-over-year increase in quarterly revenue after adopting a cloud-native, digital-only approach. Today, the Brazilian neobank serves over 100 million active clients.
A strong cloud foundation is essential, as are efficient end-to-end operations, but Capgemini’s research indicates that only 12 percent of banks and insurers are true “cloud innovators”. Joining their ranks doesn’t require magical thinking so much as a shift in mindset. Take JPMorgan Chase, for instance. Some might consider the centuries-old US financial institution a legacy bank, but it is a disruptor, automating the customer onboarding process using an API-driven cloud platform for faster product innovation. In 2023, it realized a year-over-year increase of 35 percent in top-line value in this area.
Cloud adoption is not an end in itself – it’s part of a journey of continuous innovation
The real requirement companies overlook involves a cultural evolution throughout the enterprise. And that’s where the key to unlocking top-line growth lies. With the infusion of generative AI across the financial sector, banks and insurers are changing significantly. They’re becoming more data-driven, more cloud-focused, and exponentially more customer centric.
However, organizations that have been successful in their cloud journeys share something in common: they’ve been more intentional in their overarching aims and tactics. For some, that journey involves engendering a superior customer experience. This can be offering an omnichannel approach to positively impact the entire ecosystem – being available and ready to meet clients wherever they are. For others, it’s about narrowing their focus to achieve differentiated top-line growth, like insurers using CRM to excel at developing customer profiles and personas.
Financial institutions are blessed with troves of client data. We are seeing the “cloud innovators” feed this high-quality data into their generative AI investments across the banking and insurance value chain. This can run from pre-onboarding to payments, the latter closely tied to complex and costly anti-money laundering (AML) and data-protection requirements. Everything is connected, or ought to be, when data is used efficiently and effectively in the cloud. And there’s power in partnerships. Along with the expected hyperscalers, other ecosystem players include fintech and insurtech companies that offer expertise, whether helping to adjust go-to-market strategies or develop customer-experience offerings.
Regulatory pressure intensifies but opportunities remain
Naturally, the associated first- and third-party data collected are information goldmines, but client security and privacy remain paramount. This is particularly true with global cybercrime on the increase and evolving regulatory requirements and legislation. In the US, updates to Section 1033 of the Dodd–Frank Act will give consumers more rights with regard to personal data. Across the European Union, the 2023 Digital Operational Resilience Act (DORA) has just gone into effect, requiring banks to reassess how they regard risk and resilience against such threats.
In this world of open finance, we can anticipate more “Nubanks” entering the fray. Innovative banks, not surprisingly, don’t see these changes as onerous, but as opportunities to build transparency and trust with clients. A true competitive advantage. The flexibility of cloud services, for instance, offers dynamic scalability, so keeping the costs of risk and compliance in check is easier. Once again, mindset is the key. Innovators don’t get fixated on finances, they see such pressure points as opportunities for positive change, especially when linked with a personalized customer experience.
On the insurance side, innovators are making data-driven decisions across their operations, from product development to claims, renewal and servicing. They integrate traditional and third-party data for the underwriting process and also actively leverage data during the claims process to automate claims triage, identify fraudulent claims, and estimate damage value.
Exciting opportunities for innovation are constantly on the horizon, especially when leaders are willing to challenge conventional thinking. Generative AI delivers huge potential for both financial institutions and insurers to achieve more nuanced insights, supported by human expertise, across the value chain. Organizations that have a firm, cloud-based foundation are the ones that consistently – and continuously – find new ways to enhance their operations and generate positive revenue growth.
We've featured the best cloud storage.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Netflix has reportedly suffered its second big data breach in six months – and, this time, some absolutely huge Stranger Things season 5 spoilers have appeared online.
In what'll be another significant blow to the streaming giant's internal security measures, key information about Stranger Things' final season has been leaked. The forthcoming season's release date, some spoiler-filled plot details, and other assets have been shared across the internet since the breach occurred last Saturday (February 22). This marks the second time that Netflix has been hit by a major data breach in six months, with the streamer "aggressively taking action" after multiple episodes of Arcane season 2, Heartstopper season 3, and other shows leaked online last August.
What's on Netflix, one of the first outlets to cover the leak, has indicated that Netflix has moved quickly to take down social media posts, YouTube and TikTok videos, and other internet posts that include any materials relating to Stranger Things season 5. Unfortunately, the world's best streaming service appears to be fighting a losing battle as more and more users continue to not only share the leaked information, but discuss it on social media and various forums.
I've reached out to Netflix for an official comment on the data breach, but didn't receive a response by the time of publication. I'll update this article if I receive a reply.
What Stranger Things season 5 details have leaked online?
Stranger Things' fifth and final season will be released sometime in 2025 (Image credit: Netflix)
I won't be including any spoiler-based details about Stranger Things 5 in this story. The cast and crew spent 12 months capturing over 650 hours' worth of footage for its eight blockbuster-movie-like episodes – not to mention those involved in post-production, marketing, and other departments whose work often goes unnoticed – and most viewers will want that pure experience, so I'm not going to risk ruining anyone's day with season 5's story and/or character arcs here. Netflix has also reportedly been very legally aggressive about anywhere that shows details, and I'd like this article to stay live…
What I can report on is the basics that What's on Netflix discusses in its own article. According to its sources, over 400 individual assets were accessed and downloaded by the original leaker, as was a document containing around 90 pages of confidential information. The latter seems to indicate that Netflix is eyeing a late 2025 launch for one of its most successful original TV shows, and that it'll follow in the footsteps of Stranger Things season 4 by releasing in two parts. We already knew that Stranger Things season 5 was going to debut sometime in 2025 and, given how long it'll take to complete its post-production phase, a late 2025 release is more likely than not.
If you want official details on what to expect from one of the best Netflix shows' last season, I'd advise you to read my dedicated Stranger Things season 5 guide, which is full of information on its confirmed cast, story specifics, and the franchise's future.
Microsoft is finally fixing an annoying problem with File Explorer being slow in Windows 11, but there are still a good many complaints about this central part of the interface being overly sluggish.
For the uninitiated, File Explorer is the app that shows you the files and folders stored on your PC – and it’s the medium by which you browse through said files (hence the ‘explorer’ name).
The fix mentioned is a solution for File Explorer being “very slow to close” when the ‘X’ button (top-right) is clicked to shut it, and Microsoft has announced that this remedy is being applied to Windows 11’s test builds soon.
So at least that cure is in the pipeline, but as Windows Latest points out, there are a whole lot of other niggles with File Explorer, alongside some major frustrations, too.
That includes a whole heap of users in the Feedback Hub for Windows 11 who have serious beef with the time it takes the whole of File Explorer’s interface (all the various little menus and options) to load. According to some reports, users can be waiting for up to 10 seconds to see everything appear after first opening a folder.
One of the posts in that hub observes: “Windows 11 File Explorer is the slowest since I started using Windows in the early ’90s. I was very excited to finally have a multi-tab Explorer, but it is so slow. Opening new tabs or new Explorer windows does not speed up. I have to watch the navigation pane, then the ribbon, then the folders, and then finally, the tabs appear in slow motion. It doesn’t matter what the system specs are. It doesn’t change. The fastest it opens is about 2 seconds.”
On modern PCs with a healthy allocation of memory and fast SSD storage, that kind of lag should absolutely not be happening.
There’s no shortage of evidence as to the level of frustration that some users are experiencing on Reddit, either. Such as this post from someone who has just switched from a MacBook to a Windows 11 laptop, who complains: “I switched from a MacBook Pro to ThinkPad for work and File Explorer is so slow it’s driving me nuts. I do not understand why it takes 5-10 times as long to open a folder, to search a folder, or to do quite literally anything.”
Or how about this comment observing that File Explorer has a nasty habit of crashing with some regularity when working with tabs (and that the right-click context menu is slow to appear initially). Other reports of File Explorer crashing on Reddit, and general jankiness around this part of the interface, aren’t exactly difficult to come by.
(Image credit: TechRadar)
Analysis: Critters in the works for far too long
To be fair to Microsoft, the root cause of some of these reports may not be down to File Explorer itself – it could be due to certain Windows 11 configurations, or ailing hardware that’s past its sell-by date and misfiring somehow.
But as Windows Latest highlights, there’s such a lot of material on the Feedback Hub that it’s clear enough that Microsoft has further work to do in making File Explorer perform better.
Seeing as it is arguably the central cog of the Windows 11 interface – it’s the very files and folders you work with on the desktop likely on a daily basis – this is not a part of the operating system that should be underperforming for what seems to be quite a number of users.
After all, Windows 11 has been around for a long time now, well over three years at this point, and File Explorer has not yet been knocked fully into shape, at least going by this recent feedback. Indeed, Microsoft has been taking steps backwards at times, notably with the latest update for Windows 11 24H2 which has pretty much broken File Explorer for some people (or that’s the claim).
It's long overdue that File Explorer became more reliable under Windows 11 – which is, after all, supposed to be an upgrade to Windows 10, which isn’t suffering the same Explorer woes – and Microsoft needs to focus more resources on getting these lingering performance-related bugs sorted out.
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Monday's puzzle instead then click here: NYT Strands hints and answers for Monday, February 24 (game #358).
Strands is the NYT's latest word game after the likes of Wordle, Spelling Bee and Connections – and it's great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc's Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don't read on if you don't want to know the answers.
NYT Strands today (game #359) - hint #1 - today's theme
What is the theme of today's NYT Strands?
• Today's NYT Strands theme is… Life is like a box of chocolates
NYT Strands today (game #359) - hint #2 - clue words
Play any of these words to unlock the in-game hints system.
• Inside bonbons
NYT Strands today (game #359) - hint #4 - spangram position
Which two sides of the board does today's spangram touch?
First side: top, 3rd column
Last side: bottom, 4th column
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Strands today (game #359) - the answers
(Image credit: New York Times)
The answers to today's Strands, game #359, are…
I love chocolate and will eat anything that’s covered in it – strawberries, dried chickpeas, coffee beans, ants. Basically, I’d eat my shoe if you poured some melted milk chocolate over it. However, chocolate TOFFEE is a crime against humanity.
I seriously do not know how anyone eats it.
My disgust, however, has nothing to do with taste; instead, it's the texture, and specifically the damage it could inflict on my teeth – particularly my sad British teeth with all their many cheap NHS fillings (not to be confused with FILLINGS).
Anyway, a fun puzzle today that taught me how to spell LIQUEUR and which triggered some ant-toffee rage. Apologies.
How did you do today? Let me know in the comments below.
Yesterday's NYT Strands answers (Monday, 24 February, game #358)
Strands is the NYT's new word game, following Wordle and Connections. It's now out of beta so is a fully fledged member of the NYT's games stable and can be played on the NYT Games site on desktop or mobile.
I've got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you're struggling to beat it each day.
A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Monday's puzzle instead then click here: NYT Connections hints and answers for Monday, February 24 (game #624).
Good morning! Let's play Connections, the NYT's clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.
What should you do once you've finished? Why, play some more word games of course. I've also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc's Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don't read on if you don't want to know the answers.
NYT Connections today (game #625) - today's words
(Image credit: New York Times)
Today's NYT Connections words are…
What are some clues for today's NYT Connections groups?
Need more clues?
We're firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today's NYT Connections puzzles…
NYT Connections today (game #625) - hint #2 - group answers
What are the answers for today's NYT Connections groups?
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Connections today (game #625) - the answers
(Image credit: New York Times)
The answers to today's Connections, game #625, are…
Some very sensible groups in Connections today, which was a pleasant change but which also made the puzzle strangely unsatisfying and rudimentary.
Normally BRICK, FISH TANK, MICROWAVE and SHOEBOX would be something like “Items seen in the first 23 minutes of the 1988 movie A Fish Called Wanda” or something similarly impossible to solve. Instead, RECTANGULAR PRISMS was exactly what I thought it would be.
Same with ENTHUSIASM and “MANY” IN DIFFERENT LANGUAGES – which should have included Margir (many in Icelandic) or something similarly obscure, so I could at least kid myself I’m clever.
So yes, here I am complaining about Connections being too easy. And yes, I will regret this when tomorrow's game is inevitably impossible.
How did you do today? Let me know in the comments below.
Yesterday's NYT Connections answers (Monday, 24 February, game #624)
NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: green is easy, yellow a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don't technically need to solve the final one, as you'll be able to answer that one by a process of elimination. What's more, you can make up to four mistakes, which gives you a little bit of breathing room.
It's a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up with tricks. For instance, watch out for homophones and other wordplay that could disguise the answers.
It's playable for free via the NYT Games site on desktop or mobile.
A new Quordle puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Monday's puzzle instead then click here: Quordle hints and answers for Monday, February 24 (game #1127).
Quordle was one of the original Wordle alternatives and is still going strong now more than 1,100 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.
Enjoy playing word games? You can also check out my NYT Connections today and NYT Strands today pages for hints and answers for those puzzles, while Marc's Wordle today column covers the original viral word game.
SPOILER WARNING: Information about Quordle today is below, so don't read on if you don't want to know the answers.
Quordle today (game #1128) - hint #1 - Vowels
How many different vowels are in Quordle today?
• The number of different vowels in Quordle today is 3*.
* Note that by vowel we mean the five standard vowels (A, E, I, O, U), not Y (which is sometimes counted as a vowel too).
Quordle today (game #1128) - hint #2 - repeated letters
Do any of today's Quordle answers contain repeated letters?
• The number of Quordle answers containing a repeated letter today is 1.
Quordle today (game #1128) - hint #3 - uncommon letters
Do the letters Q, Z, X or J appear in Quordle today?
• No. None of Q, Z, X or J appear among today's Quordle answers.
Quordle today (game #1128) - hint #4 - starting letters (1)
Do any of today's Quordle puzzles start with the same letter?
• The number of today's Quordle answers starting with the same letter is 2.
If you just want to know the answers at this stage, simply scroll down. If you're not ready yet then here's one more clue to make things a lot easier:
Quordle today (game #1128) - hint #5 - starting letters (2)
What letters do today's Quordle answers start with?
• T
• T
• M
• C
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
Quordle today (game #1128) - the answers
(Image credit: Merriam-Webster)
The answers to today's Quordle, game #1128, are…
Today I decided to go with three start words again, but this time used the results of the first two to influence the third. As I had three Es in the wrong position, I went with a double-E word (WHEEL) for my third guess, which turned two of them green and set me on my way to glory.
Even with this head start I still needed a little bit of luck to finish without errors, opting for CLEAR instead of CREAM, which would also have fit.
How did you do today? Let me know in the comments below.
Daily Sequence today (game #1128) - the answers
(Image credit: Merriam-Webster)
The answers to today's Quordle Daily Sequence, game #1128, are…
The excitement about DeepSeek is understandable, but a lot of the reactions I’m seeing feel quite a bit off-base. DeepSeek represents a significant efficiency gain in the large language model (LLM) space, which will have a major impact on the nature and economics of LLM applications. However, it does not signal a fundamental breakthrough in artificial general intelligence (AGI), nor a fundamental shift in the center of gravity of AI innovation. It’s a sudden leap along an expected trajectory rather than a disruptive paradigm shift.
DeepSeek’s impressive achievement mirrors the broader historical pattern of technological progression. In the early 1990s, high-end computer graphics rendering required supercomputers; now, it’s done on smartphones. Face recognition, once an expensive niche application, is now a commodity feature. The same principle applies to large language models (LLMs). The surprise isn’t the nature of the advance, it’s the speed.
For those paying attention to exponential technological growth, this isn’t shocking. The concept of Technological Singularity predicts accelerating change, particularly in areas of automated discovery and invention, like AI. As we approach the Singularity, breakthroughs will seem increasingly rapid. DeepSeek is just one of many moments in this unfolding megatrend.
DeepSeek’s architectural innovations: impressive, but not new
DeepSeek’s main achievement lies in optimizing efficiency rather than redefining AI architecture. Its Mixture of Experts (MoE) model is a novel tweak of a well-established ensemble learning technique that has been used in AI research for years. What DeepSeek did particularly well was refine MoE alongside other efficiency tricks to minimize computational costs:
Parameter efficiency: DeepSeek’s MoE design activates only 37 billion of its 671 billion parameters at a time. This means it requires just 1/18th of the compute power of traditional LLMs.
Reinforcement learning for reasoning: Instead of manual engineering, DeepSeek’s R1 model improves chain-of-thought reasoning via reinforcement learning.
Multi-token training: DeepSeek-V3 can predict multiple pieces of text at once, increasing training efficiency.
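The sparse-activation idea behind MoE can be sketched in a few lines of NumPy. This is a toy illustration, not DeepSeek's actual architecture: the expert count, dimensions, and simple top-k softmax router here are all invented for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture of Experts layer: 8 experts, but only the top 2 run per token.
NUM_EXPERTS, TOP_K, D = 8, 2, 16

router_w = rng.standard_normal((D, NUM_EXPERTS))            # router weights
experts = [rng.standard_normal((D, D)) for _ in range(NUM_EXPERTS)]

def moe_forward(x):
    """Route a single token vector x through its top-k experts only."""
    scores = x @ router_w                                   # one score per expert
    top = np.argsort(scores)[-TOP_K:]                       # indices of the k best experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum() # softmax over the winners
    # Only TOP_K of NUM_EXPERTS expert matrices are multiplied here -- the
    # source of the compute savings (2/8 of expert FLOPs in this toy,
    # analogous to DeepSeek activating 37B of 671B parameters, ~1/18th).
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(D)
out = moe_forward(token)
print(out.shape)  # (16,)
```

The model stores all experts' parameters, but each token pays only for the few experts the router selects, which is why total parameter count and per-token compute can diverge so sharply.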
These optimizations allow DeepSeek models to be an order of magnitude cheaper than competitors like OpenAI or Anthropic, both for training and inference. This isn’t a trivial feat—it’s a major step toward making high-quality LLMs more accessible. But again, it’s a stellar engineering refinement, not a conceptual leap toward AGI.
The well-known power of open source
One of DeepSeek’s biggest moves is making its model open-source. This is a stark contrast to the walled-garden strategies of OpenAI, Anthropic and Google – and a nod in the direction of Meta’s Yann LeCun. Open-source AI fosters rapid innovation, broader adoption, and collective improvement. While proprietary models allow firms to capture more direct revenue, DeepSeek’s approach aligns with a more decentralized AI future—one where tools are available to more researchers, companies, and independent developers.
High-Flyer, the hedge fund behind DeepSeek, knows open-source AI isn’t just about philosophy and doing good for the world; it’s also good business. OpenAI and Anthropic are struggling to balance research and monetization. DeepSeek’s decision to open-source R1 signals confidence in a different economic model—one based on services, enterprise integration, and scalable hosting. It also gives the global AI community a competitive toolset, reducing the grip of American Big Tech hegemony.
China’s role in the AI race
Some in the West have been taken aback that DeepSeek’s breakthrough came from China. I’m not so surprised. Having spent a decade in China, I’ve witnessed firsthand the scale of investment in AI research, the growing number of PhDs, and the intense focus on making AI both powerful and cost-efficient. This isn’t the first time China has taken a Western innovation and rapidly optimized it for efficiency and scale.
However, rather than viewing this solely as a geopolitical contest, I see it as a step toward a more globally integrated AI landscape. Beneficial AGI is far more likely to emerge from open collaboration than from nationalistic silos. A decentralized, globally distributed AGI development effort—rather than a monopoly by a single country or corporation—gives us a better shot at ensuring AI serves humanity as a whole.
DeepSeek’s broader implications: the future beyond LLMs
The hype around DeepSeek largely centers on its cost efficiency and impact on the LLM market. But now more than ever, we really need to take a step back and consider the bigger picture.
LLMs are not the future of AGI
While transformer-based models can automate economic tasks and integrate into various industries, they lack core AGI capabilities like grounded compositional abstraction and self-directed reasoning.
If AGI emerges within the next decade, it’s unlikely to be purely transformer-based. Alternative architectures—like OpenCog Hyperon and neuromorphic computing—may prove more fundamental to achieving true general intelligence.
The commoditization of LLMs will shift AI investment
DeepSeek’s efficiency gains accelerate the trend of LLMs becoming a commodity. As costs drop, investors may begin looking toward the next frontier of AI innovation.
This could drive funding into AGI architectures beyond transformers, alternative AI hardware (e.g., associative processing units, neuromorphic chips), and decentralized AI networks.
Decentralization will shape AI’s future
The AI landscape is shifting toward decentralized architectures that prioritize privacy, interoperability, and user control. DeepSeek’s efficiency gains make it easier to deploy AI models in decentralized networks, reducing reliance on centralized tech giants.
DeepSeek’s role in the AI Cambrian explosion
DeepSeek represents a major milestone in AI efficiency, but it doesn’t rewrite the fundamental trajectory of AGI development. It’s a sudden acceleration along a predictable curve, not a paradigm shift. Still, its impact on the AI ecosystem is significant:
It pressures incumbents like OpenAI and Anthropic to rethink their business models.
It makes high-quality AI more accessible and affordable.
It signals China’s growing presence in cutting-edge AI development.
It reinforces the inevitability of exponential progress in AI.
Most importantly, DeepSeek’s success should serve as a reminder that AGI development isn’t just about scaling up transformers. If we truly aim to build human-level AGI, we need to go beyond optimizing today’s models and invest in fundamentally new approaches.
The Singularity is coming fast—but if we want it to be beneficial, we must ensure it remains decentralized, global, and open. DeepSeek is not AGI, but it’s an exciting step in the broader dance toward a transformative AI future.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Hostinger is betting big on AI with its launch of a new tool designed to help users build, edit and publish their own web apps without the need for any coding expertise.
Hostinger Horizons has already started rolling out to a select group of customers who have been using the no-code tool to build web apps like a language-learning card game and a time management tool.
The web hosting platform highlighted the versatility of web apps, which offer more interactive and personal experiences compared with traditional websites.
Hostinger Horizons
Hostinger revealed the “vast majority” of its more than 3.5 million customers build traditional sites, highlighting how companies like Duolingo, Notion and Airbnb had initially started off as simple web apps – sites that had previously required greater expertise to build.
“Coding web apps from scratch takes weeks or months, and hiring a developer can cost thousands. For larger projects, both time and costs grow exponentially," noted Chief Product and Technology Officer Giedrius Zakaitis.
To build a web app with Horizons, users will interact with a chatbot which uses AI to create, add and edit components on the site. The chat interface also has support for voice prompts and image uploads, and from launch, it’s set to support more than 80 languages.
Zakaitis added: “Web apps have turned ideas into million-dollar startups, but building one always required coding or hiring a developer. We believe it is time to change the game.”
Once complete, users can publish their web apps to a custom domain, and they can always come back to it to refine and update at any time.
The new product launch comes just over a year after Hostinger added Kodee, a chat assistant that offers users guidance when it comes to building their site.
With four in five of the company’s Website Builder users now utilizing artificial intelligence during their site-building phase, Horizons expands AI’s usefulness to a different type of website.
Hostinger says Horizons will be generally available for new and existing customers beginning next month.
Apple has committed to spending $500 billion over the next four years to invest in its facilities and operations across the US, marking its largest-ever commitment in the States.
Central to the announcement is Apple’s intention to support US manufacturing – something that President Trump is keen to do as he looks to reduce the country’s reliance on China, which has been a longstanding manufacturing partner for almost all companies.
Michigan, Texas, California, Arizona, Nevada, Iowa, Oregon, North Carolina and Washington will all benefit from the Cupertino giant’s cash.
Apple invests half a trillion into the US economy
Over the next four years, the $3.69 trillion company (currently the world’s most valuable) is set to invest half a trillion dollars into the States. That equates to roughly one year’s revenue, given that the company posted a 4% year-over-year rise in quarterly revenue to $124.3 billion in the three months ending December 28, 2024.
CEO Tim Cook said: “We are bullish on the future of American innovation, and we’re proud to build on our long-standing U.S. investments with this $500 billion commitment to our country’s future.”
Key to the announcement is a 250,000-square-foot server manufacturing facility in Houston, Texas, where Apple Intelligence-supporting servers are set to be built. Until now, servers have been manufactured outside of the US.
Data center capacity in North Carolina, Iowa, Oregon, Arizona and Nevada will also follow as the iPhone maker looks to power its AI.
Apple will also double its US Advanced Manufacturing Fund from $5 billion to $10 billion and increase R&D spend, which it says is important given the launch of its new C1 cellular modem – its first in-house effort.
Looking ahead, Apple says it will focus on R&D, silicon engineering, software development, and AI and machine learning roles over the next four years, hiring an estimated 20,000 new workers on top of the 2.9 million jobs it currently supports in the US.