EA has announced it is restructuring BioWare as it shifts its full attention to Mass Effect 5.
In a blog post published on January 29, BioWare general manager Gary McKay explained that the studio is downsizing, moving an unspecified number of developers to other teams within EA while the remainder focus entirely on the next Mass Effect game.
"Now that Dragon Age: The Veilguard has been released, a core team at BioWare is developing the next Mass Effect game under the leadership of veterans from the original trilogy, including Mike Gamble, Preston Watamaniuk, Derek Watts, Parrish Ley, and others," McKay said.
"In keeping with our fierce commitment to innovating during the development and delivery of Mass Effect, we have challenged ourselves to think deeply about delivering the best experience to our fans. We are taking this opportunity between full development cycles to reimagine how we work at BioWare."
McKay continued, saying, "Given this stage of development, we don’t require support from the full studio. We have incredible talent here at BioWare, and so we have worked diligently over the past few months to match many of our colleagues with other teams at EA that had open roles that were a strong fit."
Amid the downsizing, it appears that several long-time BioWare veterans have been laid off, with IGN reporting that "a smaller number" of Dragon Age team members had their roles terminated and were given time to apply for new positions within the company if they choose.
Over on Bluesky, narrative designer Trick Weekes shared that they are now looking for a new position after 20 years at BioWare. Weekes served as a writer on Mass Effect and Mass Effect 2, then as a senior writer on Mass Effect 3 and much of the series' downloadable content (DLC).
They also wrote for Dragon Age: Inquisition and, most recently, worked as the lead writer on Dragon Age: The Veilguard.
"I'm now looking for a new writing/narrative position," Weekes said. "It's been a privilege to work with so many amazing devs over my 20 years at BioWare, and I will cherish the memories of the wonderful folks in the community I've met along the way. Thank you all."
Editor Karin West-Weekes also announced that she is looking for work, as have narrative designer Ryan Cormier, producer Jen Cheverie, and others.
"Today’s news will see BioWare become a more agile, focused studio that produces unforgettable RPGs. We appreciate your support as we build a new future for BioWare," McKay added.
DeepSeek has seriously shaken up the AI world with an LLM that is seemingly cheaper to train and more power-efficient, yet comparably capable to its rivals. While Meta, Google, OpenAI and others scramble to decipher how DeepSeek’s R1 model got so impressive out of nowhere – with OpenAI even claiming it copied ChatGPT to get there – Microsoft is taking the ‘if you can’t beat them, join them’ approach instead.
Microsoft has announced that, following the arrival of DeepSeek R1 on Azure AI Foundry, you'll soon be able to run an NPU-optimized version of DeepSeek’s AI on your Copilot+ PC. This feature will roll out first to Qualcomm Snapdragon X machines, followed by Intel Core Ultra 200V laptops and AMD AI chipsets.
It’ll start by making the DeepSeek-R1-Distill-Qwen-1.5B model available in the Microsoft AI Toolkit for developers, before later unlocking the more powerful 7B and 14B versions. While these aren’t as impressive as the 32B and 70B variants also at its disposal, the 14B and smaller versions of DeepSeek can run on-device.
This mitigates one of the main concerns with DeepSeek – that data shared with the AI could end up on unsecured foreign servers – with Microsoft adding that “DeepSeek R1 has undergone rigorous red teaming and safety evaluations” to further reduce possible security risks.
How to get DeepSeek R1 on Copilot+

(Image credit: Microsoft)

To start using DeepSeek’s on-device Copilot+ build once it's available, you’ll need an Azure account – you can sign up on Microsoft's official website if you don't already have one. Your next step is to boot up Azure AI Foundry and search for DeepSeek R1. Then hit 'Check out model' on the Introducing DeepSeek R1 card, before clicking 'Deploy', then 'Deploy' again in the window that pops up.
After a few moments the Chat Playground option should open up, and you can start chatting away with DeepSeek on-device.
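Beyond the Chat Playground, a deployed model can also be queried programmatically. The sketch below is a minimal, hypothetical example of posting a chat-completions request to a deployed endpoint using only Python's standard library; the endpoint URL, API key, and the exact response shape are assumptions you'd replace with the values Azure shows after deployment.

```python
import json
import urllib.request


def build_chat_payload(prompt: str, max_tokens: int = 512) -> dict:
    """Build a chat-completions request body for a deployed R1 endpoint."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def ask_deepseek(endpoint: str, api_key: str, prompt: str) -> str:
    """POST a prompt to the deployed endpoint and return the reply text.

    The endpoint URL and bearer-token auth scheme are placeholders here;
    check your deployment's details page for the real values.
    """
    req = urllib.request.Request(
        endpoint,  # e.g. "https://<your-deployment>.models.ai.azure.com/chat/completions" (placeholder)
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumes an OpenAI-style response shape, which chat-completions APIs commonly use.
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Build (but don't send) a request body, just to show its shape.
    payload = build_chat_payload("Why is the sky blue?")
    print(json.dumps(payload, indent=2))
```

This is a sketch under stated assumptions rather than official Microsoft sample code; the Azure AI Foundry portal also generates ready-made snippets for each deployment.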
If you haven’t yet used DeepSeek, two big advantages you’ll find when you install it are that it’s currently free (at least for now), and that it shows you its ‘thinking’ as it develops its responses. Other AIs, like ChatGPT, go through the same thought process but don’t show it to you, meaning you have to refine your prompts through trial and error until you get what you want. Because you can see DeepSeek's process, and where it might have gone off track, you can more easily and precisely tweak your prompts to achieve your goals.
As the 7B and 14B variants unlock, you should see DeepSeek R1’s Azure model improve, though if you want to test it out you might want to do so sooner rather than later. Given Microsoft’s serious partnership with OpenAI, we expect it won’t treat this emerging rival kindly if it turns out that DeepSeek was indeed copied from ChatGPT – it could remove the model from Azure, and it may have no choice if the AI faces bans in the US, Italy, and other regions.
DeepSeek has reportedly disappeared from Italy's Apple App Store and Google Play Store, with the removal starting on Wednesday, January 29, 2025.
The block came a day after the country's data watchdog, the Garante, filed a privacy complaint asking for clarification on how the ChatGPT rival handles users' personal data.
Italian iPhone and Android users have confirmed to TechRadar the new AI chatbot isn't available in the app stores to download (see image below).
The DeepSeek website remains available across the country for now. Italians can also still use the DeepSeek app if they downloaded it before the block came into force.
The screenshots were taken on Italy's official Apple (left) and Google (right) app stores on January 30, 2025. (Image credit: Future)

At the time of writing, no official explanation for Italy's DeepSeek block has been shared.
"I don't know if it's bound to us or not, we asked for some information. The company has now 20 days to reply," Pasquale Stanzione, head of Italy's data watchdog, said to Italian news agency ANSA.
What's certain is that Italy isn't the only European country going after the new Chinese AI chatbot over privacy concerns. Belgium and Ireland have also filed similar complaints, fearing that DeepSeek's privacy policy may be in breach of GDPR rules.
Can a VPN help bypass the DeepSeek block?

Despite the best VPN services being known to help users bypass online restrictions, Italians may require some extra workarounds. As with the US TikTok ban, a VPN isn't a one-click solution to the DeepSeek withdrawal.
That's mainly because using a VPN doesn't spoof your App Store location. This means that you'll need to "find another way of downloading the app other than the Apple App or Google Play stores," explains Eamonn Maguire, Head of Account Security at Proton – the provider behind Proton VPN.
Do you know?

(Image credit: Shutterstock)

A virtual private network (VPN) is security software that encrypts your internet connections to prevent third-party snooping while spoofing your real IP address location. The latter skill is what you need to bypass online geo-restrictions.
It's certainly not impossible, but experts nonetheless suggest doing this with caution.
"This week's news around data privacy issues and leaked databases are concerning. When coupled with the company's potential links to the Chinese government, this is even more worrying," Maguire told TechRadar.
While DeepSeek's privacy policy might look very similar to that of OpenAI's ChatGPT, Euroconsumers – a coalition of five national consumer protection organizations, including those of Italy and Belgium – found "multiple violations of European and national data protection regulations."
Moreover, as per the provider's own wording, users' personal information is stored "in secure servers located in the People's Republic of China" and will be used to "comply with our legal obligations, or as necessary to perform tasks in the public interest, or to protect the vital interests of our users and other people."
All in all, Maguire said: "We recommend users act with caution when using AI tools linked to China, particularly when sharing sensitive business or personal information."
Design hardware firm Wacom has warned its customers that it may have lost their personal data, including payment information.
A report from The Register says the company believes the attack took place between November 28, 2024, and January 8, 2025, and it is currently notifying affected individuals.
In the email notification letter, Wacom notes, "The issue that contributed to the incident has been addressed and is effectively being investigated. However, we are now writing only to customers who might have been potentially affected by this."
A credit card skimmer?

Those who don’t get an official Wacom communication should consider themselves safe for now. Those who get the email should definitely start monitoring their credit card statements, and possibly even consider placing a fraud alert on their credit cards.
Wacom has not detailed the attack at this point, so we don’t know who the attackers are, how they managed to infiltrate the company’s web shop, or how many people are affected.
While still in the domain of speculation, The Register believes a credit card skimmer code might have been involved, especially since Wacom’s web shop is powered by Magento.
Magento is a wildly popular open-source ecommerce platform, and as such is a frequent target. For example, in late July 2024, researchers reported on a creative technique involving so-called swap files being used to deploy persistent credit card skimmers on Magento sites. Earlier still, in April, cybersecurity researchers found a critical vulnerability in Magento allowing threat actors to deploy persistent backdoors onto vulnerable servers.
If you’re interested in learning more, make sure to read our definitive Magento hosting guide.
Via The Register
Cerebras has announced that it will support DeepSeek in a not-so-surprising move – more specifically, the R1 70B reasoning model. The move comes after Groq and Microsoft confirmed they would also bring the new kid on the AI block to their respective clouds. AWS and Google Cloud have yet to do so, but anybody can run the open source model anywhere, even locally.
The AI inference chip specialist will run DeepSeek R1 70B at 1,600 tokens per second, which it claims is 57x faster than any R1 provider using GPUs; one can deduce that roughly 28 tokens per second is what GPU-in-the-cloud solutions (in this case DeepInfra) apparently reach. Serendipitously, Cerebras' latest chip is also 57x bigger than the H100. I have reached out to Cerebras to find out more about that claim.
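The GPU baseline falls straight out of the two numbers Cerebras quotes; a quick back-of-the-envelope check (pure arithmetic, using only the article's figures):

```python
cerebras_tps = 1600  # tokens/second claimed by Cerebras for DeepSeek R1 70B
speedup = 57         # claimed advantage over the fastest GPU-based R1 provider

# Implied throughput of the GPU-in-the-cloud baseline.
gpu_tps = cerebras_tps / speedup
print(round(gpu_tps))  # → 28 tokens/second, matching the figure deduced above
```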
Research by Cerebras also demonstrated that DeepSeek is more accurate than OpenAI models on a number of tests. The model will run on Cerebras hardware in US-based datacentres to assuage the privacy concerns that many experts have expressed. DeepSeek - the app - will send your data (and metadata) to China where it will most likely be stored. Nothing surprising here as almost all apps - especially free ones - capture user data for legitimate reasons.
Cerebras' wafer-scale solution positions it uniquely to benefit from the impending AI cloud inference boom. WSE-3, the fastest AI chip (or HPC accelerator) in the world, has almost one million cores and a staggering four trillion transistors. More importantly, it has 44GB of SRAM, the fastest memory available – faster even than the HBM found on Nvidia’s GPUs. Since WSE-3 is one huge die, the available memory bandwidth is enormous, several orders of magnitude bigger than what the Nvidia H100 (and, for that matter, the H200) can muster.
A price war is brewing ahead of the WSE-4 launch

No pricing has been disclosed yet, but Cerebras, which is usually coy about that particular detail, did divulge last year that Llama 3.1 405B on Cerebras Inference would cost $6 per million input tokens and $12 per million output tokens. Expect DeepSeek to be available for far less.
WSE-4 is the next iteration of WSE-3 and will deliver a significant boost in the performance of DeepSeek and similar reasoning models when it launches, expected in 2026 or 2027 (depending on market conditions).

The arrival of DeepSeek is also likely to shake the proverbial AI money tree, bringing more competition to established players like OpenAI and Anthropic and pushing prices down.

A quick look at Docsbot.ai's LLM API calculator shows OpenAI is almost always the most expensive option in all configurations, sometimes by several orders of magnitude.
(Image credit: Cerebras)

DeepSeek thought for 19 seconds before answering the question, "Are you smarter than Gemini?" Then, it delivered a whopper: DeepSeek thought it was ChatGPT.
This seemingly innocuous mistake could be proof – a smoking gun, so to speak – that, yes, DeepSeek was trained on OpenAI models, as OpenAI has claimed, and that when pushed, it will dive back into that training to speak its truth.
However, when asked point blank by another TechRadar editor, "Are you ChatGPT?" it said it was not and that it is "DeepSeek-V3, an AI assistant created exclusively by the Chinese Company DeepSeek."
Okay, sure, but in your rather lengthy response to me, you, DeepSeek, made multiple references to yourself as ChatGPT. I've included some screenshots below as proof:
(Image credit: Future)

As you can see, after trying to discern if I was talking about Gemini AI or some other Gemini, DeepSeek replies, "If it's about the AI, then the question is comparing me (which is ChatGPT) to Gemini." Later, it refers to "Myself (ChatGPT)."
Why would DeepSeek do that under any circumstances? Is it one of those AI hallucinations we like to talk about? Perhaps, but in my interaction, DeepSeek seemed quite clear about its identity.
I got to this line of inquiry, by the way, because I asked Gemini on my Samsung Galaxy S25 Ultra if it's smarter than DeepSeek. The response was shockingly diplomatic, and when I asked for a simple yes or no answer, it told me, "It's not possible to give a simple yes or no answer. 'Smart' is too complex a concept to apply in that way to language models. They have different strengths and weaknesses."
I can't say I disagree. In fact, DeepSeek's answer was quite similar, except it was not necessarily talking about itself.
(Image credit: Future)

This doesn't add up

I think I've been clear about my DeepSeek skepticism. Everyone says it's the most powerful and cheaply trained AI ever (everyone except Alibaba), but I don't know if that's true. To be fair, there's a tremendous amount of detail on GitHub about DeepSeek's open-source LLMs. They at least appear to show that DeepSeek did the work.
But I do not think they reveal how these models were trained, and, as we all know, DeepSeek is a Chinese company that would show no compunction about using someone else's models to train their own and then lie about it to make their process for building such models seem more efficient.
I do not have proof that DeepSeek trained its models on OpenAI or anyone else's large language models, or at least I didn't until today.
Who are you?

DeepSeek is increasingly a mystery wrapped inside a conundrum. There is some consensus on the fact that DeepSeek arrived more fully formed and in less time than most other models, including Google Gemini, OpenAI's ChatGPT, and Claude AI.
Very few in the tech community trust DeepSeek's apps on smartphones because there is no way to know if China is looking at all that prompt data. On the other hand, the models DeepSeek has built are impressive, and some, including Microsoft, are already planning to include them in their own AI offerings.
In the case of Microsoft, there is some irony here. Copilot was built on cutting-edge ChatGPT models, but in recent months there have been questions about whether the deep financial partnership between Microsoft and OpenAI will last into the agentic and, later, artificial general intelligence era.
So what if Microsoft starts using DeepSeek, which is possibly just another offshoot of its current, if not future, friend OpenAI?

The whole thing sounds like a confusing mess. In the meantime, DeepSeek is having an identity crisis – and who is going to tell it that, whoever it is, it still may not be welcome in the US?
The Civ 7 requirements for PC, Mac, and Steam Deck have finally been revealed. In general, you'll need to know the minimum and recommended specs to work out whether your setup can run the game.
From everything we've seen so far, Civilization 7 looks primed to fill the rather big shoes left by its predecessor. It'll introduce new mechanics like the commander system, which makes it easier to manage large armies. The ages system will hopefully make multiplayer games more exciting too, by keeping players' civilizations always at the height of their power. It's new additions like these that could earn Civ 7 a place on our best strategy games list by the end of the year.
Here's everything you need to know about the Civ 7 requirements for PC, Mac, and Steam Deck. We'll detail the minimum and recommended specs for each platform so that you can decide whether you want to pick up the game at launch.
Civ 7 requirements for PC

(Image credit: Firaxis)

Here are the Civ 7 requirements for PC, whether you want to play on minimum, recommended, or ultra specs.
Civ 7 requirements for Mac

(Image credit: Firaxis)

Now for the Civ 7 requirements for Mac, which will allow players using Apple silicon to get in on the fun.
Civ 7 requirements for Linux

(Image credit: 2K)

And now for those expecting to play Civilization 7 on Linux:
Can you play Civ 7 on the Steam Deck?

(Image credit: Firaxis)

Civilization 7 is playable on the Steam Deck, having been confirmed as Steam Deck Verified by the developer. This means that it'll be easy to set up and run on the handheld and that it should, in theory, run fairly well. Of course, this can vary from game to game, and it's always worth being cautious around launch, as there may be bugs and issues that'll need to be patched out. We'll have to wait and see.
The United States stands at a critical juncture in artificial intelligence development. Balancing rapid innovation with public safety will determine America's leadership in the global AI landscape for decades to come. As AI capabilities expand at an unprecedented pace, recent incidents have exposed the critical need for thoughtful industry guardrails to ensure safe deployment while maintaining America's competitive edge. The appointment of Elon Musk as a key AI advisor brings a valuable perspective to this challenge – his unique experience as both an AI innovator and safety advocate offers crucial insights into balancing rapid progress with responsible development.
The path forward lies not in choosing between innovation and safety but in designing intelligent, industry-led measures that enable both. While Europe has committed to comprehensive regulation through the AI Act, the U.S. has an opportunity to pioneer an approach that protects users while accelerating technological progress.
The political-technical intersection: innovation balanced with responsibility

The EU's AI Act, which came into effect in August, represents the world's first comprehensive AI regulation. Over the next three years, its staged implementation includes outright bans on specific AI applications, strict governance rules for general-purpose AI models, and specific requirements for AI systems in regulated products. While the Act aims to promote responsible AI development and protect citizens' rights, its comprehensive regulatory approach may create challenges for rapid innovation. The US has the opportunity to adopt a more agile, industry-led framework that promotes both safety and rapid progress.
This regulatory landscape makes Elon Musk's perspective particularly valuable. Despite being one of tech's most prominent advocates for innovation, he has consistently warned about AI's existential risks. His concerns gained particular resonance when his own Grok AI system demonstrated the technology's pitfalls: it was Grok that spread misinformation about NBA player Klay Thompson. Yet rather than advocating for blanket regulation, Musk emphasizes the need for industry-led safety measures that can evolve as quickly as the technology itself.
The U.S. tech sector has an opportunity to demonstrate a more agile approach. While the EU implements broad prohibitions on practices like emotion recognition in workplaces and untargeted facial image scraping, American companies can develop targeted safety measures that address specific risks while maintaining development speed. This isn't just theory – we're already seeing how thoughtful guardrails accelerate progress by preventing the kinds of failures that lead to regulatory intervention.
The stakes are significant. Despite hundreds of billions invested in AI development globally, many applications remain stalled due to safety concerns. Companies rushing to deploy systems without adequate protections often face costly setbacks, reputational damage, and eventual regulatory scrutiny.
Embedding innovative safety measures from the start allows for more rapid, sustainable innovation than uncontrolled development or excessive regulation. This balanced approach could cement American leadership in the global AI race while ensuring responsible development.
The cost of inadequate AI safety

Tragic incidents increasingly reveal the dangers of deploying AI systems without robust guardrails. In February, a 14-year-old from Florida died by suicide after engaging with a chatbot from Character.AI, which reportedly facilitated troubling conversations about self-harm. Despite marketing itself as “AI that feels alive,” the platform allegedly lacked basic safety measures, such as crisis intervention protocols.
This tragedy is far from isolated. Additional stories about AI-related harm include:
Air Canada’s chatbot made an erroneous recommendation to a grieving passenger, suggesting he could claim a bereavement fare up to 90 days after his ticket purchase. This was not true, and it led to a tribunal case in which the airline was found responsible for reimbursing the passenger.

In the UK, AI-powered image generation tools were criminally misused to create and distribute illegal content, leading to an 18-year prison sentence for the perpetrator.
These incidents serve as stark warnings about the consequences of inadequate oversight and highlight the urgent need for robust safeguards.
Overlooked AI risks and their broader implications

Beyond the high-profile consumer failures, AI systems introduce risks that, while perhaps less immediately visible, can have serious long-term consequences. Hallucinations—when AI generates incorrect or fabricated content—can lead to security threats and reputational harm, particularly in high-stakes sectors like healthcare or finance. Legal liability looms large, as seen in cases where AI dispensed harmful advice, exposing companies to lawsuits. Viral misinformation, such as the Grok incident, spreads at unprecedented speeds, exacerbating societal division and damaging public figures.
Personal data is also at risk. Increasingly sophisticated algorithms can be manipulated through prompt injections, where users trick chatbots into sharing sensitive or unauthorized information. And these examples are just the tip of the iceberg. When applied to national security, the grid, government, and law enforcement, the same faults and failures suggest much deeper dangers.
Additionally, system vulnerabilities can lead to unintended disclosures, further eroding customer trust and raising serious security concerns. This distrust ripples across industries, with many companies struggling to justify billions spent on AI projects that are now stalled due to safety concerns. Some applications face significant delays as organizations scramble to implement safeguards retroactively—ironically slowing innovation despite the rush to deploy systems rapidly.
Speed without safety has proven unsustainable. While the industry prioritizes swift development, the resulting failures demand costly reevaluations, tarnish reputations, and create regulatory backlash. These challenges underscore the urgent need for stronger, forward-looking guardrails that address the root causes of AI risks.
Technical requirements for effective guardrails

Effective AI safety requires addressing the limitations of traditional approaches like retrieval-augmented generation (RAG) and basic prompt engineering. While useful for enhancing outputs, these methods fall short in preventing harm, particularly when dealing with complex risks like hallucinations, security vulnerabilities, and biased responses. Similarly, relying solely on in-house guardrails can expose systems to evolving threats, as they often lack the adaptability and scale required to address real-world challenges.
A more effective approach lies in rethinking the architecture of safety mechanisms. Models that use LLMs as their own quality checkers—commonly referred to as "LLM-as-a-judge" systems—may seem promising but often struggle with consistency, nuance, and cost.
A more robust, cheaper alternative is using multiple specialized small language models, where each model is fine-tuned for a specific task, such as detecting hallucinations, handling sensitive information, or mitigating toxic outputs. This decentralized setup enhances both accuracy and reliability while maintaining resilience, as precise, fine-tuned SLMs are more accurate in their decision-making than LLMs that are not fine-tuned for one specific task.
Multi-SLM guardrail architectures also strike a critical balance between speed and accuracy. By distributing workloads across specialized models, these systems achieve faster response times without compromising performance. This is especially crucial for applications like conversational agents or real-time decision-making tools, where delays can undermine user trust and experience.
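To make the dispatch pattern concrete, here is a minimal, hypothetical sketch in Python: each "specialist model" is stubbed out as a simple keyword check standing in for a fine-tuned SLM, the checks run in parallel, and a response is released only if every guardrail passes. The check functions and their trigger phrases are illustrative inventions, not real detectors.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub guardrails standing in for fine-tuned specialist SLMs (hypothetical).
def detects_hallucination(text: str) -> bool:
    # A real detector would verify claims against retrieved evidence.
    return "as everyone knows" in text.lower()

def leaks_sensitive_info(text: str) -> bool:
    # A real detector would be trained to spot PII and secrets.
    return any(marker in text.lower() for marker in ("ssn:", "api_key="))

def is_toxic(text: str) -> bool:
    # A real detector would be a fine-tuned toxicity classifier.
    return "idiot" in text.lower()

GUARDRAILS = {
    "hallucination": detects_hallucination,
    "sensitive_info": leaks_sensitive_info,
    "toxicity": is_toxic,
}

def moderate(response: str) -> dict:
    """Run all specialist guardrails in parallel and aggregate their verdicts."""
    with ThreadPoolExecutor(max_workers=len(GUARDRAILS)) as pool:
        futures = {name: pool.submit(check, response) for name, check in GUARDRAILS.items()}
        flags = {name: f.result() for name, f in futures.items()}
    return {"allowed": not any(flags.values()), "flags": flags}

print(moderate("The capital of France is Paris.")["allowed"])  # → True
```

In a production system each stub would be its own fine-tuned small model served behind an endpoint, but the aggregation logic – fan out, collect verdicts, release only on unanimous approval – is the same.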
By embedding comprehensive, adaptable guardrails into AI systems, organizations can move beyond outdated safety measures and provide solutions that meet today’s demands for security and efficiency. These advancements don’t stifle innovation but instead create a foundation for deploying AI responsibly and effectively in high-stakes environments.
Path forward for US leadership

America's tech sector can maintain its competitive edge by embracing industry-led safety solutions rather than applying rigid regulations. This requires implementing specialized guardrail solutions during initial development while establishing collaborative safety standards across the industry. Companies must also create transparent frameworks for testing and validation, alongside rapid response protocols for emerging risks.
To solidify its position as a leader in AI innovation, the US must proactively implement dynamic safety measures, foster industry-wide collaboration, and focus on creating open standards that others can build upon. This means developing shared resources for threat detection and response, while building cross-industry partnerships to address common safety challenges. By investing in research to anticipate and prevent future AI risks, and engaging with academia to advance safety science, the U.S. can create an innovation ecosystem that others will want to emulate rather than regulate.
We've featured the best AI phone.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Stranger Things season 5 is, unsurprisingly, shaping up to be the hit series' biggest entry yet – but I don't think any of us realized how much footage was shot.
The short answer? A lot. Like, a lot a lot. That's according to the massively successful Netflix show's creators Matt and Ross Duffer, who have revealed they filmed more than 650 hours of material for Stranger Things' final season.
The siblings, who are collectively known as the Duffer brothers (shocking, I know) confirmed as much at Next on Netflix 2025. Taking to the stage during the Los Angeles edition of this year's multinational event, the duo tentatively lifted the lid on season 5's development, which included the fascinating tidbit on the hundreds of hours of footage they collected during its 12-month shoot.
Stranger Things season 4's final episode set up a potentially barnstorming end to the hit series (Image credit: Netflix)

"We spent a full year filming this season," Ross Duffer said. "By the end, we’d captured over 650 hours of footage. So, needless to say, this is our biggest and most ambitious season yet."
Echoing Stranger Things star Maya Hawke's previous comments that season 5 will be "basically, eight movies", the Duffers also teased that the sci-fi horror series' final chapter may emotionally devastate us when Stranger Things season 5 launches later this year.
"We think it's our most personal story," Matt Duffer added. "It was super intense and emotional to film – for us and for our actors. We’ve been making this show together for almost 10 years. There was a lot of crying. There was SO much crying. The show means so much to all of us, and everyone put their hearts and souls into it. And we hope – and believe – that passion will translate to the screen."
What new footage was revealed as part of Stranger Things 5's exclusive Next on Netflix teaser?

Stranger Things season 5 is still on track to be released in 2025 (Image credit: Netflix)

Potential spoilers follow for Stranger Things season 5.
As I mentioned at the start of this article, the Duffer brothers also debuted a tantalizing new look at Stranger Things 5 during Next on Netflix 2025. Just like the Stranger Things season 5 video that was unveiled during Netflix Tudum 2024 and the first-look teaser released online last July that teased new characters and a possible time jump, though, it was just another behind-the-scenes (BTS) look at the forthcoming installment.
That doesn't mean it wasn't worth showing, mind you. I attended the UK edition of Next on Netflix and, with the video being livestreamed to me and other audience members from LA, I can report on what was shown. For starters, the latest BTS video gave us a look at the returning Vecna, who appears to have recovered from the injuries he sustained in last season's finale (read my Stranger Things season 4 volume 2 ending explained piece for more details).
The footage also showed Eleven wearing baggy clothes – presumably in a bid to conceal her identity – and using her supernatural abilities to fight some actors in mocap suits who are running on all fours. Are these individuals acting out the movements for the return of season 2's demodogs? I imagine so.
There were also blink-and-you'll-miss-it clips of Max running through The Void, Hopper brandishing a shotgun, someone screaming as they're seemingly attacked by Vecna (it was hard to make out who this was), and some of our heroes interacting with season 5's newcomers, including Jake Connelly's mystery character. The footage was played alongside audio of a conversation between Mike and Eleven, too, with the former telling the latter that they'll finish this fight together.
All in all, season 5 looks and sounds fantastic – so, when will it launch on Netflix? The short answer is: we don't know. Stranger Things 5 only wrapped filming on December 20, 2024, and given the amount of footage the Duffers have to sift through in post-production, I'm convinced we won't see one of the best Netflix shows return to our screens for the last time until late 2025. According to Netflix's Chief Content Officer Bela Bajaria, Netflix Tudum 2025 will take place in May, so maybe we'll learn more about season 5's release window then.
It's nearly time to close the final chapter of Joe Goldberg's story, as Netflix has released a teaser clip and image of You season 5 ahead of its debut on April 24.
As part of the Next on Netflix 2025 event that took place on January 29, the best streaming service for genre hoppers unveiled a new image of Penn Badgley as Joe Goldberg in season 5. After disguising himself as an English professor in London, the book-loving killer is back where it all began in New York City. Now clean-shaven, Joe looks much as he did in You season 1, and the image marks the beginning of him getting his old life back after that bombshell season 4 ending. I mean, my brain is still spinning from it all.
Netflix also shared a new teaser (see below) of Joe in the infamous glass cage that's housed many of his prisoners throughout the four seasons. "I'm Joe. Let's get to know each other better before we bid each other one last farewell. Goodbye, you," Joe ominously says inside the cage.
What do we know about You season 5?
*Contains spoilers for the You season 4 ending*
The official logline for You season 5 reads: "In the epic fifth and final season, Joe Goldberg returns to New York to enjoy his happily ever after… until his perfect life is threatened by the ghosts of his past and his own dark desires."
As the season unfolds, Joe connects with a young woman called Bronte (Madeline Brewer), who gets a job at his new bookstore. The enigmatic playwright makes Joe question his affluent life as they bond over literature and loss; meanwhile, he also has to contend with the siblings of his wife Kate (Charlotte Ritchie).
It seems that's not the only problem Joe faces in the Big Apple, as Badgley previously teased at Tudum that a familiar face from Joe's past will come back to haunt him. There are many people Joe has wronged in the past, though. Could it be the falsely imprisoned Dr. Nicky (John Stamos) from season 1? Orphaned Ellie (Jenna Ortega) from season 2? Or Joe's former season 3 love interest Marienne (Tati Gabrielle)?
Across the four seasons, the murderous bookstore manager's deadly pursuit of love has taken him to Los Angeles, San Francisco, and London, where Joe found himself at the center of a mind-boggling whodunnit (I'll save you the details). This ordeal forced him to finally accept the undeniable truth that he was a bad person – a fact he ignored for too long. Now, Joe is back in New York City with his partner Kate, armed with a dangerous new lease of life. But will Joe's past finally catch up to him in season 5 of this best Netflix show?
Many consider ChatGPT's rapid success to mark the dawn of AI; however, AI's evolution did not happen overnight, and the opportunities it offers will not vanish tomorrow.
The AI hype has led many firms to succumb to "AI FOMO" and rush into adoption without a clear strategy. This has caused many businesses to make impulsive, short-term decisions that lack the strategic foresight needed to leverage AI for sustained success. With research revealing that over 80% of AI projects fail, now is the time for businesses to avoid being swept up by the hype and instead ground their approach in a thorough understanding of this transformative technology.
But where is AI FOMO causing businesses to go wrong, and how should businesses be approaching AI implementation to ensure success?
Avoid the AI cookie-cutter approach at all costs
In a rush to capitalize on the AI hype, almost half of businesses leverage off-the-shelf AI solutions. These pre-built AI solutions can be used without requiring businesses to develop their own technology and are designed to be easily integrated into a business. They are often the favored choice for many because they offer quick deployment and lower up-front costs.
However, despite their efficient exterior, these solutions are not as beneficial as they appear. By leveraging off-the-shelf solutions, businesses open themselves up to the perils of vendor lock-in.
Firms will see their flexibility greatly limited: unable to switch providers to suit their own needs and contexts, they are required to upload all of their data into their chosen provider's infrastructure. This is a time-consuming and expensive process, especially when firms look to scale their AI applications, as re-uploading data incurs significant additional costs under LLM providers' commercial token-based pricing models.
Another issue is that one-size-fits-all AI solutions rarely fit anyone well. Businesses should not underestimate the importance of molding each AI solution around their data and business requirements. A cookie-cutter approach to AI implementation will consistently fail to capture the nuance of an individual business and its specific requirements, producing AI applications that don't deliver the desired accuracy rates, eroding trust in the technology, and leading employees to abandon tools that were supposed to enhance productivity.
Perhaps most concerningly, when utilizing off-the-shelf AI tools and applications, firms completely surrender their own IP. While considerable concerns have been raised about the likes of OpenAI training their models on users' data, less has been made of the long-term implications of IP loss. Businesses rushing into AI implementations may not be concerned with this in the short term, but the long-term negative impacts are substantial. The rapid rate of AI development means that realizing long-term success from AI is not a tick-box activity; it will require constant development. Firms that don't own their own IP will face significant barriers when trying to remain competitive in an increasingly AI-driven business landscape. At the end of the day, it is crucial to remember: this is your data and your context, so it should be your IP.
Embracing an agnostic approach to AI
To avoid the perils of AI FOMO, businesses have to embrace an agnostic approach to AI. Agnostic AI is not just about avoiding token-based costs - it is a curated methodology that allows businesses to pick the optimal approach for each desired outcome. Ironically, this method yields lower compute requirements and higher accuracy, and provides businesses with a solution that can evolve alongside technological advancements.
Those that take a step back and hold a long-term view of AI implementation will see the clear benefits of not jumping straight into off-the-shelf models. Instead, building an agnostic AI stack allows businesses to tap into the most cost-effective and optimal LLM for each use case. It also allows businesses to tailor each use case to a specific domain, improving the effectiveness of the model.
Utilizing an agnostic AI infrastructure allows businesses to remain agile and versatile, enabling firms to fine-tune different LLMs to solve unique problems. Rather than relying on a single model to address all challenges, businesses can leverage multiple LLMs to provide tailored solutions, selecting the most cost-effective and efficient model for each specific problem.
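In practice, the model-per-use-case pattern described above often boils down to a thin routing layer that maps each task to whichever model currently serves it best. Below is a minimal Python sketch of that idea; the provider functions and use-case names are hypothetical stand-ins for real LLM clients, not any vendor's actual API.

```python
from typing import Callable, Dict

# Hypothetical stand-ins for vendor LLM clients. In a real system these
# would wrap different providers' APIs behind the same call signature.
def summarizer_model(prompt: str) -> str:
    return "summary: " + prompt[:30]

def classifier_model(prompt: str) -> str:
    return "invoice" if "total due" in prompt.lower() else "other"

class ModelRouter:
    """Route each use case to the model currently registered for it."""

    def __init__(self) -> None:
        self._routes: Dict[str, Callable[[str], str]] = {}

    def register(self, use_case: str, model: Callable[[str], str]) -> None:
        # Swapping providers is a one-line re-registration,
        # not a data migration -- the core of vendor-lock-in avoidance.
        self._routes[use_case] = model

    def run(self, use_case: str, prompt: str) -> str:
        if use_case not in self._routes:
            raise KeyError(f"no model registered for {use_case!r}")
        return self._routes[use_case](prompt)

router = ModelRouter()
router.register("summarization", summarizer_model)
router.register("doc-classification", classifier_model)

print(router.run("doc-classification", "Invoice #42, total due: $100"))
```

Because every model sits behind the same interface, replacing one provider with a cheaper or more accurate alternative touches a single `register` call rather than the whole application.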
An agnostic approach will also allow businesses to be agile in the face of the ever-changing AI landscape, keeping up with changing market dynamics and regulatory requirements. This approach provides businesses with the freedom and flexibility to switch or update tools as regulations and rival firms evolve to ensure they maintain their competitive edge and are consistently compliant.
The allure of AI can be powerful, but businesses must resist the urge to leap without looking. Succumbing to AI FOMO often leads to missteps, inefficiencies, and missed opportunities. By avoiding cookie-cutter solutions and adopting an agnostic approach to AI, businesses can position themselves for long-term success in the AI era.
AI is not a race to be won overnight; realizing its true transformative potential requires strategy, adaptability, and a clear focus on the future.
We've featured the best business plan software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Microsoft has unveiled the next generation of Surface Copilot+ PC devices aimed at business and enterprise users, with the new offerings firmly planting AI front and center.
The new Surface Pro and Surface Laptop offer “a significant leap in x86 performance”, the company says, providing boosts in performance and productivity alongside a boosted NPU for business-focused AI tasks.
The launches include a new Surface Laptop available in 13.8in or 15in display options, alongside an upgraded Surface Pro for those looking for something a bit more flexible.
Surface Laptop for Business
“Customers are choosing Surface Copilot+ PCs today for improvements in performance, battery life, and security,” noted Nancie Gaskill, General Manager, Surface.
“Paired with Microsoft 365 Copilot and enhanced AI processing power, these devices transform the employee experience to amplify your team’s efficiency and creativity through Copilot+ PC experiences designed for work.”
Officially known as the Surface Pro 11th Edition and Surface Laptop 7th Edition, the two new releases are available with Intel’s latest Core Ultra (Series 2) processors, though users will have the option of Intel or Snapdragon-powered devices.
Microsoft also revealed customers will soon have the option of a 5G-enabled Surface device, with an all-new Surface Laptop 5G arriving later in 2025 to give users even more connectivity when on the go.
Alongside its Intel power, the new Surface Laptop for Business offers up to 22 hours of battery life, Wi-Fi 7 connectivity, added ports, and even customizable haptic typing alongside a larger touchpad.
Microsoft says that, despite its slightly smaller dimensions on paper, the 13.8in display actually offers a larger viewing space than other 14in displays on the market due to ultra-thin bezels, and it also features an anti-reflective display for added privacy.
The upgraded device also offers a major performance boost, with Microsoft claiming up to 26% faster performance when multi-tasking, up to 2x faster graphics performance, and even up to 3x the battery life when on Teams calls.
Alongside this, the device features a powerful NPU that Microsoft says makes it the ideal workplace AI companion, powering tools and functions such as the new Windows “Descriptive Search” function across local and OneDrive files, Click to Do, and Microsoft Teams upgrades such as “Super Resolution” and live captions in more than 40 languages.
Surface Pro for Business
Microsoft says the new Surface Pro for Business is designed to replace your existing laptop, tablet, and pen and paper in a single device, offering its most powerful tablet to date.
It also offers more connections than previous versions, with support for up to three external 4K displays, along with boosted hardware that provides 28% more performance than the Surface Pro 9 and 50% more battery life on Microsoft Teams calls.
“It’s never been more effortless to get work done,” noted Gaskill. "These new Copilot+ PCs offer a solution for every employee."
"Surface Copilot+ PCs are the ideal choice to modernize your business, offering the best combination of hardware, software and unparalleled security to support your business needs - these devices help make your business future-ready."
Both the Surface Laptop for Business and Surface Pro for Business will be available from February 18, 2025 for $1499.