Christmas just came early for fans of The White Lotus, because HBO Max has dropped the season 3 trailer and confirmed that the show will be available to stream from February 16, 2025 – we'd guessed January or February, so we're feeling pretty smug right now.
In a big surprise, it looks like the luxury hotel brand will once again be plagued by murder. We know. We can't believe it either. And according to showrunner Mike White, season 3 is going to be "longer, bigger, crazier".
We don't yet know who the victim is or how they met their end. But the trailer does feature a body in a bag, so it's pretty clear that things are going to end badly for at least one visitor.
What we know so far about The White Lotus Season 3
We already knew a few things thanks to the short teaser trailer that Max posted back in August with the caption "new luxuries await you in Thailand". That trailer also gave us glimpses of the new cast, including Jason Isaacs, Parker Posey, and Patrick Schwarzenegger. The teaser ended with Posey by the pool alongside Leslie Bibb and Michelle Monaghan before the tagline "what happens in Thailand stays in Thailand" appeared.
One thing the new trailer has revealed is how Natasha Rothwell's character Belinda ended up there: she's doing a transfer between hotels so she can get a change of scene. And the new guests all seem pretty horrible, with Walton Goggins a particular standout. It's nice to see him with a nose again after watching him in Fallout.
Here's the confirmed cast for season 3 of one of the best Max shows so far:
Seasons 1 and 2 of The White Lotus are streaming now on Max.
You might also like
Former Syrian President Bashar al-Assad is believed to have issued his first remarks since opposition forces took over the capital more than a week ago, in a Telegram post from Moscow.
(Image credit: Claire Harbage)
German Chancellor Olaf Scholz had hoped for this outcome when he called for the confidence vote, analysts say. His aim: to win fresh elections in February and come back with a stronger mandate.
(Image credit: Markus Schreiber)
A new survey from the company Sellcell has found that most iPhone and Samsung users don't actually think AI improves their daily lives.
The survey asked iPhone users with Apple Intelligence and Samsung users with access to Galaxy AI whether the AI features on their smartphones were actually useful – and most don't seem to think so. According to Sellcell, 73% of iPhone users and 87% of Samsung users say AI features add little to no value, suggesting that AI has yet to prove its raison d'être on the best smartphones.
The survey also found that 1 in 6 iPhone users would make the jump to Android if an enticing enough AI feature made the move worthwhile. Interestingly, nearly 50% of iPhone users said AI was a major factor when deciding on their next smartphone purchase; for Samsung users that figure was 23.7%.
The survey in itself doesn't highlight any surprising results; after all, we're still at the very beginning of AI development in consumer products, with Apple only launching Apple Intelligence in September and features still rolling out. According to the 1,000+ iPhone users asked, Writing Tools was deemed the most popular AI feature, with 72% of all respondents finding it interesting. ‘Notification summaries’ (54%), ‘Priority Messages’ (44.5%), ‘Clean Up in Photos’ (29.1%), and ‘Smart Reply in Mail and Messages’ (20.9%) rounded out the list. The survey would have been conducted before the launch of iOS 18.2 and the arrival of Genmoji, ChatGPT integration in Siri, and Image Playground, which all launched last week.
On the Samsung side, users found ‘Circle to Search’ (82.1%), ‘Photo Assist’ (55.5%), ‘Chat Assist’ (28.8%), ‘Note Assist’ (17.4%), and ‘Browsing Assist’ (11.6%) as the most interesting AI features.
Is AI just a gimmick? (Image credit: Future / Apple)
Surveys like this one from Sellcell shine a light on the wider public perception of AI tools in smartphones. That said, it's still early days and the best is yet to come. As an iPhone user myself, I'm slowly noticing Apple Intelligence features creep into my workflow more and more, and I expect the same is true for users on Samsung and other Android devices with Galaxy AI and Gemini.
AI features on smartphones need a few more years to fully cement themselves into mobile operating systems before we can judge whether they are indeed a gimmick or tools that can genuinely improve our lives.
I, for one, remain hopeful, especially because Genmoji alone has made my day-to-day much better. Who doesn't want to generate a frog emoji for every conversation topic they have during the day? My life now has more frogs in it, and for that, I need to thank AI.
You might also like...
When you're a hammer, everything looks like a nail; when you're a drone spotter, everything looks like a drone.
As someone who's flown more than a few drones in his lifetime and sometimes carries a foldable one in his pocket during weekend hikes so he can randomly pull it out and fly it over hills, lakes, homes, and trees, I'm a bit of a drone watcher – not to be confused with a bird watcher (I'm that, too), who keeps his eyes and ears open for the avian kind. Come to think of it, drone spotting is a bit like that, too. Usually, I hear the high-pitched buzz, and then I cast about, scanning the skies for the tell-tale movement (hover, sprint, hover, zig-zag) and spinning rotors of a consumer-grade drone.
Like a birder, I'll call out, "Oh, look, someone's flying a drone over us." Over the years, I have seen consumer-class drones (usually under 250 grams) fly over my home. I typically try to identify which DJI drone it is: maybe a DJI Mini, perhaps a Mavic or DJI Mavic Pro. Usually, it's not one of the larger Phantom Pro drones, since most people are neither qualified nor allowed by the FAA to fly them over residential areas.
What's up with that?
Naturally, I've been intrigued by the explosion of east coast US drone sightings in recent weeks. In the videos (mostly unverified) I've seen on TikTok, they tend to be much larger than anything I've flown. In fact, they appear to be huge (think five or six feet across) enterprise-grade drones used by businesses for surveillance, package delivery, and feature film operations.
Where I live – in New York – and surrounding cities along the east coast of the US, drone spotting is now something of an obsession, though I haven't quite caught the bug.
During a recent crystal-clear night that featured a brilliantly bright full Moon with Jupiter nearby, my offspring and I dragged our Celestron telescope outside to stargaze. As we stood shivering in the night, trying to get Jupiter's moons lined up in our scope, I started pointing out a few low-orbit satellites silently dashing by; they're usually easily identified by their speed, straight-as-an-arrow trajectory and lights that blink at regular intervals. It never occurred to me to suspect them of being drones. Perhaps I know better, or maybe, unlike others, I'm not looking for drones in the night sky.
Look, I'm not saying there aren't drones flying over the East Coast of the US. There may well be, but I don't think it's an invasion. Here are some ideas about what's going on:
Now, I tend to agree that the government (local and federal) has moved too slowly to address the "drone invasion" (they've finally agreed to send in special drone-detecting technology), but I also think the FAA has been too lax about drone registration and tracking. Essentially, anything that takes off in public airspace should instantly become a tracked dot on local flight tracking systems. FWIW, if you ever found my lost drone, you'd open up the battery compartment to find my drone pilot registration number.
All this aside, I'm almost certain that the majority of drones people think they're seeing are not drones at all. They're still planes, helicopters, and satellites. However, until the US government responds effectively to consumer concerns, the drone hysteria will grow, people will start shooting at these drones, and then someone will probably really get hurt.
Don't be a hammer looking for a nail.
You might also like
We're in the second week of "12 Days of OpenAI", which is OpenAI's Christmas gift to all of us. So far, OpenAI has released a new product every day – highlights of the first week have been Sora, OpenAI's long-awaited AI video generator, ChatGPT's new o1 LLM (which is better at reasoning than before), and ChatGPT's new Canvas and Projects features.
When he launched the 12-day project, OpenAI CEO Sam Altman said that there would be "some big ones and some stocking stuffers", and so far the company has alternated between big and small announcements on subsequent days.
However, the big ones really have stood out; in fact, the first week of 12 Days of OpenAI produced so many amazing new products that we're still getting our heads around it all. And now we've got another whole week of new announcements to come. Hurrah! So what can we expect?
Well, keep checking back regularly, because we're going to cover everything that OpenAI announces and all the rumors that go with it. So grab a hot drink, sit back and relax and get ready for a new bunch of releases from OpenAI.
12 days of OpenAI – everything announced so far
Welcome to day eight of '12 Days of OpenAI'! We've had the weekend to think about all the good stuff OpenAI released last week (Sora, ChatGPT Canvas and Projects, plus ChatGPT o1), and that's got us wondering what we can expect from the AI giant this week.
There's still been nothing announced on the AI image generation front, so could we see a new DALL-E release today?
We're here from today until Friday this week (the 20th) when we'll get our final day of OpenAI releases. Today's announcement will kick off at 10am PT, so don't miss it.
Today is a good day to reflect on the goodies that OpenAI announced last week, and while Sora was, without a doubt, the highlight, it was another of its announcements that I found the most useful...
I'm talking about the Canvas feature. As I wrote at the time, it has completely changed the way I use ChatGPT. The new writing tools are really useful, and I love the way you can keep refining the same piece of text over and over, without having to keep generating reams of text each time you want to change just one element of it.
If you haven't had a play with Canvas yet I'd recommend you give it a go. It's free!
Could we see a major update to DALL-E in today's announcement? I highly doubt it, but you never know.
Today, TechRadar's Senior AI Writer, John-Anthony Disotto, has been testing Grok, a competitor to DALL-E from xAI, Elon Musk's AI company. Grok 2 is now free to all users on X (formerly Twitter), and it's capable of some crazy unrestricted image generation results.
In his piece, titled "I used Grok’s new free tier on X but I can’t show you the results because it could infringe Nintendo's copyright", he talks about how OpenAI's AI image generator won't create images of copyrighted characters or public figures, while Grok will do whatever you ask it to. Despite those restrictions, DALL-E 3 as part of ChatGPT remains one of our picks for best AI image generator – but could it get a whole lot better in today's announcement? Time will tell.
Time for an AI podcast generator? (Image credit: Shutterstock / Stock-Asso)
Something we haven't seen from OpenAI so far is an AI podcast generator. Google has been having a lot of success with NotebookLM, its research tool that will generate a fantastically real-sounding podcast between two AI hosts from whatever text, video or PDF sources you feed it, but OpenAI has yet to offer anything on this front.
Google is rolling out a new feature to NotebookLM that lets you join in the conversation with the AI hosts too.
NotebookLM has been so popular that we're starting to wonder if this week we'll see OpenAI step into the AI podcast game with one of its '12 Days of OpenAI' releases. Time will tell.
We asked ChatGPT what announcement we'll get today (Image credit: Shutterstock/Daniel Chetroni)
We asked ChatGPT what it thought OpenAI would be releasing today because, well, if anybody should know, it should be ChatGPT, right? It came back with:
"Given that image generation updates have been notably absent so far, many speculate that a DALL-E update could be coming today. The announcement is scheduled for 10 a.m. PT, so keep an eye out for news regarding potential advancements in OpenAI's creative tools and accessibility features."
To be honest, we think it's right, but it also sounds a lot like ChatGPT has been reading our own blog post on the subject of today's release (see down below), so, er, thanks for nothing ChatGPT...
And away we go! OpenAI has kicked off day 8 of its 12 days of announcements. Kevin Weil, OpenAI's chief product officer, got things underway and quickly shared that the focus for today is ChatGPT Search.
First, ChatGPT Search is arriving for everyone globally, on every platform where ChatGPT is available, beginning today. OpenAI also says more broadly that it has made ChatGPT Search better, and it's rolling out the ability to search while you're talking with ChatGPT in Advanced Voice Mode.
(Image credit: Future)
Beyond rolling out ChatGPT Search to even more users, OpenAI is also integrating the feature more seamlessly into its Android and iOS mobile apps. When you ask a question – say, for a restaurant in a specific area, as the OpenAI team demoed during the reveal – it will list the results inline. Further, you can have a more natural conversation about the results to find what you're truly after. It's pretty neat.
Once you find a restaurant in the ChatGPT app for iOS, you can also get directions via Apple Maps, thanks to built-in integration.
Also, within the mobile app, you can talk with ChatGPT using voice mode, and it will weave search results and broader web information into its response. For instance, if you're asking about a Christmas market, you can follow up with more specific questions about its opening hours and days.
(Image credit: Future)
Just as quickly as day 8 of 12 Days of OpenAI began, it's already come to a close. ChatGPT Search was the focus, with some significant enhancements and a much larger rollout to logged-in free users globally, wherever ChatGPT is available. Much like Canvas, you'll need to be signed in with a free account to use ChatGPT Search and get the higher rate limits.
Kevin also teased that tomorrow, day 9, will be a mini developer day, so expect the focus to be less on consumer features and more on developer tools.
New research from JetBrains based on survey results from more than 23,000 developers has revealed which AI coding assistants developers are actually using, but it just confirms what we already know.
OpenAI’s ChatGPT stood out as the leader, with two in three (66.4%) who tried the platform continuing to use it. GitHub Copilot follows closely in second place, with an adoption rate of 64.5%.
However, this is just half the picture, because while 52.4% of those who tried Anthropic Claude continued to use the platform, demonstrating a high level of satisfaction, only 2.7% of developers use it in the first place.
Developers are actually using AI coders
Satisfaction aside, JetBrains found nearly half (49%) of the developers surveyed regularly use ChatGPT. Microsoft-owned GitHub Copilot, which is powered by OpenAI’s models, is used by 26% of developers, with Google Gemini favored by just 7% and JetBrains’ own AI Assistant by a mere 5%.
Eight in 10 (80%) companies either allow the use of third-party AI tools to varying extents or don’t have a policy on their use, with only one in 10 (11%) totally banning third-party cloud-based AI tools.
More than half of the respondents acknowledged that AI can increase their productivity (57%), enable them to complete repetitive tasks more quickly (57%), or code and develop more quickly overall (58%); however, the key driver for many (67%) was a reduction in time spent searching for information. Only one in four (23%) said that AI created better-quality code.
Additionally, three-fifths (59%) say they save between one and four hours per week when using AI, with only 4% stating that they don’t save any time at all.
Full details of the Developer Ecosystem 2024 report can be found online, including more information about the popularity of coding languages and the rise of virtual reality.
You might also like
Jackson made a cameo in the romantic comedy musical & Juliet on Saturday night. She told NPR: "I got a call, and someone said, 'We heard that this was your lifelong dream.' And it is."
(Image credit: Paul Morigi)
Reebok has just unveiled the brand-new iteration of its top training shoe: the Nano X5.
While the Nike Free Metcon 5 currently sits atop our best gym shoe guide, the Nano X line has been a mainstay as a brilliant CrossFit and training shoe designed for stability and heavier lifting.
New for 2025 (with a January 24 release date), the Nano X5 comes replete with a new Decoupled Metasplit Outsole and a new DUALRESPONSE Midsole. Here's everything you need to know about the newest version of the Official Shoe of Fitness.
Reebok's Nano X5 Training Shoes (Image credit: Reebok)
The new Nano features a premium Flexweave upper that should provide breathability, durability, and comfort.
Underfoot, the new midsole and outsole should add increased flexibility for running and circuit training, possibly gearing the X5 to a more all-rounder status. Specifically, the DUALRESPONSE Midsole provides more cushioning and response upfront for running, but more stability in the rear. Reebok says the Performance Comfort Collar will also provide a 360-degree locked-in fit.
The Reebok Nano X5 will be available to buy from Reebok on January 24, starting at $140. We'll be going hands-on with the Nano X5 very soon, and we'll give you a full rundown and review in time for 2025.
The X5 will be available in Women's Sport and Unisex Sport styles, with six colors for each.
You may also like
Our blog published hundreds of stories in 2024. These are the 11 that captured the most pageviews. If you look at the headlines, you'll understand why.
(Image credit: Clockwise from top left: Claire Harbage/NPR; Danielle Villasana for NPR; Muntaka Chasant for Fondation Carmignac; Garry DeLong/ Science Source)
Cl0p, the ransomware group responsible for the infamous MOVEit data leak fiasco, has now claimed it was also behind the recent Cleo attacks.
Security researchers from Huntress recently revealed three managed file transfer (MFT) products from Cleo were carrying an unrestricted file upload and download vulnerability that could lead to remote code execution (RCE).
The bug is tracked as CVE-2024-50623, and was found in LexiCom, VLTransfer, and Harmony. Cleo released a patch for it in October 2024, but apparently it wasn’t effective.
The attack "project"Huntress also said that it spotted at least two dozen compromised organizations, since the flaw was actively exploited in the wild:
“Victim organizations so far have included various consumer product companies, logistics and shipping organizations, and food suppliers,” Huntress said in its writeup, adding that countless other companies are at risk.
Soon after Huntress’ announcement, the US Cybersecurity and Infrastructure Security Agency (CISA) added the bug to its Known Exploited Vulnerabilities (KEV) catalog, confirming the findings and giving federal agencies three weeks to patch up or stop using the tools entirely.
At first, the attack was not attributed to any particular group, since the evidence was inconclusive. However, over the weekend, BleepingComputer contacted Cl0p, who confirmed being behind the attacks:
“As for CLEO, it was our project (including the previous cleo) - which was successfully completed,” the group told the publication. “All the information that we store, when working with it, we observe all security measures. If the data is government services, institutions, medicine, then we will immediately delete this data without hesitation (let me remind you about the last time when it was with moveit - all government data, medicine, clinics, data of scientific research at the state level were deleted), we comply with our regulations.”
Clearly, Cl0p does not want to meddle with government or healthcare data, since that incurs the wrath of law enforcement – most ransomware actors that went after government or healthcare data ended up dismantled, or at least seriously disrupted.
You might also likeWe reckon that Silo is the best dystopian series since Fallout, and we're currently enjoying Silo season 2. But while the showrunner Graham Yost has said that there's a "big mystery" around this season, that mystery isn't "will it be renewed for another season?".
How do we know? Because Tim Cook said so. Posting on X, the Apple boss said that he's "excited to share that Silo will return for a third AND fourth season".
Here's hoping that Apple gives the production company a little bit of extra money to light the Silo scenes in seasons 3 and 4. The show's interior scenes are so dark that my brother ended up buying a new TV in order to see what was going on. Sometimes I wonder if it's all a plot to sell more mini-LED TVs.
Excited to share that “Silo” will return for a third AND fourth season! We’re thrilled to support the imagination and inspiration out of the UK as they continue to create world-class films and series. pic.twitter.com/hmtszs7hf5
December 16, 2024
What to expect from Silo season 3 and 4
Tim Cook isn't telling: he just wants you to know that "we're thrilled to support the imagination and inspiration out of the UK as they continue to create world-class films and series".
Without any spoilers, the current season – season 2 – raises more questions and myriad mysteries about exactly what happened to the Earth and who built the vast network of doomsday bunkers. It begins with The Engineer, a bloody flashback to an earlier era of the silos, and the season shows Bernard starting to pull on the threads of things he doesn't know while a growing rebellion begins to fester.
If you want to know more, of course, you can turn to the books: Silo is based on Hugh Howey's Wool trilogy of Wool, Shift and Dust. The first half of book one maps closely to Silo season one; the second half and some of book two, from what we've watched so far, map to the second season. And that means the next seasons will be drawing from Dust, which starts with the aftermath of something very big in a "war [that's] just beginning".
That's good news for fans of the show, but it's bittersweet too: Dust is the last book, and that means season 4 will be the last time we'll see the Silo too. As showrunner Yost told Variety, "we are thrilled to have the opportunity to bring this complete story to the screen over the course of four seasons. With the final two chapters of Silo we can’t wait to give fans of the show an incredibly satisfying conclusion to the many mysteries and unanswered questions contained within the walls of these silos.”
Seasons 1 and 2 of Silo are streaming now on Apple TV Plus, with episode 6 set to be released next Friday (December 20) before the season finale debuts on Friday (January 17).
You might also likeThe US government has announced a strict set of requirements which could effectively block Chinese access to AI chips.
According to Reuters, these requirements will ‘empower companies like Google and Microsoft to act as gatekeepers worldwide’, and include reporting information to the US government, which would close the export loopholes that currently allow Chinese companies to bypass the restrictions.
A small number of US tech firms will be offered ‘gatekeeper status’, allowing them to offer AI capabilities within the cloud in foreign countries without a license, leaving foreign actors to fight for a very limited number of licenses per country in order to import powerful AMD and Nvidia chips.
A war of attrition
There will be exemptions for 19 allied states, the report confirms, which would mean unlimited access to AI chips and capabilities.
There has been a significant buzz around AI in recent months, but the real value (or concern, depending on your perspective) is in the military applications. This is sparking national security concerns on both sides, with China banning key mineral exports to the US, and the US in turn imposing trade sanctions.
The Chinese government recently retaliated against continued US sanctions by labeling chips made in the country as ‘no longer safe’ for use for domestic organizations, and has previously banned the export of gallium, antimony, and germanium to the US.
The US and China have been trading blows this year as they both battle to control the semiconductor market, swapping sanctions and offering domestic incentives.
China’s mineral wealth is crucial to the development of the chips, but the country does not yet have the capabilities to develop the high-powered chips domestically, so the battle between the two nations is likely to continue for the foreseeable future.
You might also likeWorldwide, Generative Artificial Intelligence (GenAI) is transforming industries, from the way we work to how organizations respond to challenges. While executives are exploring opportunities to use generative AI, many organizations continue to struggle to identify the return on investment (ROI) for GenAI solutions.
In fact, Unisys’ recent survey of 250 business executives found that 71% of organizations do not effectively measure the ROI for GenAI. With this realization, organizations have an opportunity to implement practices to better evaluate and understand the costs associated with GenAI, capitalize on its workplace capabilities and identify areas where cost savings are possible to better fuel future success.
Understanding the upfront costs
The first step to realizing GenAI's true ROI is remembering that while this technology holds great potential to enhance business outcomes, it is not a magical solution that can manufacture instant results. Rather, organizations should treat GenAI like a sophisticated tool that can drive substantial growth to a business’s bottom line—and one that should come with clear cost benefits.
However, these cost benefits are only as good as the effort and intention behind them. Executives need to work with department heads to pinpoint clear, concrete use cases where GenAI could have the most impact. Once the best business use cases have been agreed on, several crucial steps need to be taken before GenAI is even deployed.
The first step, and possibly one of the most important, is effectively preparing data. This involves collecting, cleaning and structuring data to optimize it for AI algorithms. By taking inventory of data sources and documenting formats, structures and storage locations, companies can remove outdated, inaccurate data that could impede AI outputs.
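To make that concrete, here is a minimal Python sketch of the kind of preparation step described above, assuming tabular records in a hypothetical customer_records.csv file; the file name, column names and two-year freshness cutoff are illustrative assumptions, not part of any prescribed pipeline.

```python
import pandas as pd

# Load a hypothetical export of customer records (illustrative file and column names).
records = pd.read_csv("customer_records.csv", parse_dates=["updated_at"])

# Document what we actually received: columns, types, and row count.
print(records.dtypes)
print(f"{len(records)} rows loaded")

# Remove exact duplicates and rows missing fields the model will rely on.
records = records.drop_duplicates()
records = records.dropna(subset=["customer_id", "email"])

# Treat anything not updated in the last two years as outdated for this use case.
cutoff = pd.Timestamp.now() - pd.DateOffset(years=2)
records = records[records["updated_at"] >= cutoff]

# Normalize free-text fields so the data fed to the model is consistent.
records["email"] = records["email"].str.strip().str.lower()

records.to_csv("customer_records_clean.csv", index=False)
```

The exact rules will differ for every organization; the point is that each cleaning decision is explicit, repeatable and documented before any model sees the data.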
Next, it is essential to establish robust data governance policies to maintain data quality. This can be achieved by setting up validation rules, data archiving protocols and ongoing monitoring. This also includes allocating additional resources to IT infrastructures to help ensure a smooth integration of AI solutions.
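As an illustration of what validation rules and ongoing monitoring can look like in practice, the sketch below checks each incoming batch against a few assumed rules; the thresholds and column names are placeholders, and a production deployment would more likely rely on a dedicated data-quality or observability tool.

```python
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues found in an incoming batch."""
    issues = []

    # Rule 1: required columns must be present.
    required = {"customer_id", "email", "updated_at"}
    missing = required - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues  # later rules depend on these columns existing

    # Rule 2: no duplicate primary keys.
    dupes = int(df["customer_id"].duplicated().sum())
    if dupes:
        issues.append(f"{dupes} duplicate customer_id values")

    # Rule 3: completeness threshold on a key field (95% assumed for illustration).
    filled = df["email"].notna().mean()
    if filled < 0.95:
        issues.append(f"email completeness {filled:.0%} is below the 95% threshold")

    return issues

# Ongoing monitoring: run on every new batch and alert (here, just print) on failures.
batch = pd.read_csv("daily_extract.csv")
for issue in validate_batch(batch):
    print("DATA QUALITY ALERT:", issue)
```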
Finally, organizational leaders need to direct resources to train staff and help employees use AI effectively and responsibly. It is not enough to make these tools available and expect employees to understand how to use them. It takes time and effort to use GenAI effectively and to get the best outputs possible.
Enhancing capabilities
These initial steps may seem daunting, but once addressed, GenAI can expand your organization’s capabilities beyond what was once considered possible.
For example, GenAI can empower decision making, stimulate innovation and make deep inroads into marketing, sales and R&D initiatives. It can also help organizations anticipate market shifts, adapt to changing customer preferences and optimize supply chains.
The effects of GenAI can be seen across industries. In finance, it is reshaping customer-centric services, simplifying tax preparation and providing effective digital “assistants” that serve as financial advisors. In healthcare, GenAI can enhance personalized care and accelerate drug discovery, helping to save more lives and improve everyone’s quality of life. In the entertainment industry, GenAI can create interactive storytelling experiences that captivate audiences.
We have yet to see how this technology will truly transform businesses, but one thing is for sure – organizations that capitalize on these innovations will set themselves apart as clear market leaders.
Identifying cost-saving opportunities
With the power of AI, routine tasks and processes can be made easier, helping organizations cut costs. However, to do so effectively, leaders must pay close attention to which areas this technology is applied to.
For example, GenAI can learn to perform tasks to free up employees’ time so they can focus on more strategic initiatives. Additionally, GenAI can be leveraged across various channels – such as web, mobile, voice and social platforms – to quickly understand and respond to customer queries and requests, allowing employees to focus on more complex tasks.
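As a rough sketch of that pattern, the snippet below drafts replies to routine queries and flags anything sensitive for a human agent. It uses OpenAI's Python client purely as an example backend; the model name, system prompt and escalation keyword are illustrative assumptions rather than a recommendation of any particular vendor or workflow.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def handle_customer_query(query: str, channel: str) -> str:
    """Draft a reply to a routine query, or flag it for a human agent."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a customer support assistant. If the request involves "
                    "billing disputes, legal issues or anything you are unsure about, "
                    "reply only with the word ESCALATE."
                ),
            },
            {"role": "user", "content": f"[{channel}] {query}"},
        ],
    )
    answer = response.choices[0].message.content
    if "ESCALATE" in answer:
        return "Routing this request to a human agent."  # keep staff for the complex cases
    return answer

print(handle_customer_query("How do I reset my password?", channel="web"))
```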
AI can also be invaluable for identifying internal inefficiencies and helping to develop process improvements, leading to smoother operations, as well as a competitive edge in service delivery. Through intelligent data processing, AI can take internal data insights a step further, pulling information to analyze team performance and opportunities for improvement and providing actionable insights.
Justifying AI investments demands a thorough ROI analysis that considers upfront costs, client benefits, and internal efficiencies. To maximize its value, organizations must set a clear ROI framework that supports overall business objectives. When taking this approach, leaders will not only justify the initial investment but also position their business for long-term success in an increasingly competitive market.
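The arithmetic behind such a framework can be as simple as the sketch below; every figure is a placeholder chosen to show the shape of the calculation, not a benchmark or a forecast.

```python
# Illustrative annual figures (placeholders, not benchmarks).
upfront_costs = 250_000       # data preparation, infrastructure, training
running_costs = 60_000        # licenses, inference and maintenance per year
hours_saved_per_week = 2      # assumed time saved per employee using GenAI
loaded_hourly_rate = 75       # fully loaded cost of an hour of staff time
staff_using_genai = 120
working_weeks = 48

annual_savings = hours_saved_per_week * working_weeks * loaded_hourly_rate * staff_using_genai
total_first_year_cost = upfront_costs + running_costs
first_year_roi = (annual_savings - total_first_year_cost) / total_first_year_cost

print(f"Annual savings:  ${annual_savings:,.0f}")
print(f"First-year cost: ${total_first_year_cost:,.0f}")
print(f"First-year ROI:  {first_year_roi:.0%}")
```

A real framework would layer client-facing benefits and quality effects on top of raw time savings, but even this simple view forces the upfront and running costs into the same conversation as the expected gains.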
We've featured the best productivity tool.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro