New data from the UBS Global Entrepreneur Report has revealed the technology startup and small business world is in a healthy place, with three in five (61%) feeling optimistic about the prospects for their industries this year.
The news comes as more entrepreneurs look to invest in expansion, with investments in people and acquisitions on the cards for many.
Technology is also a key focus area for entrepreneurs, with the report finding nearly half (45%) expect to invest in AI infrastructure, applications and/or models, outpacing personnel investments.
Entrepreneurs are growing their businesses and investing more
Three in five (62%) now see AI as offering the greatest commercial opportunity for their industries, with two-thirds (67%) predicting AI will lead to improved productivity for a typical company in their industry by the end of the decade. Nearly half (47%) plan to increase IT and digital transformation spending over the next 12 months.
However, entrepreneurs are continuing to exercise a cautious approach, with many citing political uncertainty and/or instability (53%), higher taxes (42%) and major geopolitical conflicts (42%) as concerns in the next year alone.
Moreover, UBS revealed some key differences across the globe: European entrepreneurs plan to increase AI and IT spending, with digital transformation emerging as a key focus area, while respondents in the Americas will focus on personnel investments. Those in Asia-Pacific and Switzerland will focus on both strategic acquisitions and IT/AI growth.
The report also uncovered higher industry outlook optimism in the Americas (71%) compared with Europe (52%), Switzerland (50%) and Asia-Pacific (59%).
By analyzing how entrepreneurs perceive their industries, UBS hopes to understand the effects of the global pandemic and geopolitical conflicts on economic growth.
Head of Strategic Clients Benjamin Cavalli concluded: “This report captures the collective knowledge of some of the brightest business innovators we have the privilege to work with and highlights their insights on topics from their industry outlook to their business plans over a short-term horizon, and to the end of the decade.”
While the OnePlus 13 and the OnePlus 13R have already broken cover, the expectation is that we might see one or two more models in this range eventually – and a new leak teases the camera setup heading to the rumored OnePlus 13 Mini.
This comes from well-established tipster Digital Chat Station (via GSMArena), who says that on the back of the OnePlus 13 Mini we're going to get a 50MP primary camera, together with a 50MP telephoto camera offering 2x optical zoom.
That contradicts an earlier report from the same source that pointed to a triple-lens rear camera setup for the OnePlus 13 Mini: a 50MP primary camera, a 50MP periscope telephoto camera with 3x optical zoom, and an 8MP ultrawide camera.
Given the mixed messages, we're not totally convinced that the camera specs listed here will be those actually offered by the OnePlus 13 Mini – but the latter leak is presumably based on more recent supply chain reports.
A big year for OnePlus
The OnePlus 13 (Image credit: OnePlus)
We can't really talk about the OnePlus 13 Mini in terms of upgrades, because there's no direct predecessor to compare it to. The OnePlus launch schedule and product range can be difficult to predict, outside of the main flagship models.
That's even more the case now that Oppo and OnePlus are part of the same company. Some Oppo handsets are rebadged as OnePlus phones for international markets, while others aren't, so we're playing a guessing game at the moment with the OnePlus 13 Mini.
According to previous rumors, the OnePlus 13 Mini could be based on the Oppo Find X8 Mini, which has yet to be unveiled. There has been talk of a 6.31-inch display, so this handset wouldn't actually be all that mini after all.
What we do know for sure is that we've been impressed with the hardware offerings from OnePlus so far this year: check out our OnePlus 13 review and our OnePlus 13R review to find out what makes these handsets two of the best Android phones around.
In a bid to cut down on electronic waste, Microsoft has formed a partnership with stores in the US and the UK to establish an official console repair service.
In the US, uBreakiFix by Asurion has become an Xbox Authorized Service Provider, covering in-warranty repairs for Xbox Series X and Xbox Series S consoles. That also includes recent revisions such as the Xbox Series X Digital Edition as well as the 2TB Galaxy Black model.
For repairs, you can either schedule an appointment online using uBreakiFix's store locator, or head into a local store to drop off your console directly. The store's website also notes that in-store repairs "can be completed in a few days," but do keep in mind that Xbox One console repairs will still need to go through Microsoft instead.
Over in the UK, it's hardware retailer Currys that has become an Xbox Authorized Service Provider. The official Currys announcement states: "Consumers will benefit from Currys’ tech expertise, regardless of where their Xbox console was purchased. Whether it’s through visiting one of Currys' stores, filled with expert colleagues, or getting their console booked in to be repaired at Europe’s largest tech repair center in Newark – this includes consoles both in warranty and out of warranty."
It adds that the company is committed to combating the rise of electronic waste, and repair services such as this should ensure players can keep gaming on their current consoles instead of replacing them outright.
Recent leaks have suggested that the Samsung Galaxy Z Fold 7 might have a lot in common with last year’s Samsung Galaxy Z Fold Special Edition, including the same-size screens and the same 200MP main camera – but we’re now hearing about one area where the Z Fold 7 might have that previous foldable beat.
According to leaker @PandaFlashPro (via NotebookCheck), Samsung is working on an S Pen that will function without a digitizer in the display. A digitizer is a layer on the screen that registers stylus inputs, and removing it could allow the phone to be slimmer.
Indeed, removing the digitizer is exactly what Samsung did with the Galaxy Z Fold Special Edition, which at just 10.6mm thick when folded and 4.9mm thick when unfolded is a fair bit slimmer than the Samsung Galaxy Z Fold 6. However, removing the digitizer from that phone also meant that Samsung had to remove S Pen support.
"Designed To Work Without the Digitizer Display." https://t.co/TpggtPczCN (February 5, 2025)
The ideal combination
If the company is working on a new kind of S Pen that doesn’t need a digitizer, we could have the best of both worlds with the Samsung Galaxy Z Fold 7: a slim design (though no slimmer than the Z Fold Special Edition, according to this source) and support for the S Pen stylus.
We would take this claim with a pinch of salt, both because the source doesn’t mention the Z Fold 7 by name here – meaning that this new S Pen design could be intended for another phone – and because they don’t have much of a track record yet. But this isn’t the first time we’ve heard that the Samsung Galaxy Z Fold 7 might have a new S Pen design and no digitizer, so it could well be accurate.
We certainly hope so, because if everything we’ve heard about the Samsung Galaxy Z Fold 7 so far does pan out then it could be quite an upgrade on the Z Fold 6 – a thinner phone with bigger screens, a better camera, and a more powerful chipset, yet perhaps with the same price tag.
We’ll likely find out in July, as that’s when we expect the Samsung Galaxy Z Fold 7 and Samsung Galaxy Z Flip 7 to be unveiled.
New research by Lenovo has revealed AI spending could account for as much as a fifth (20%) of IT budgets this year, up from 13% in 2024.
The company's research claims IT decision-makers across EMEA reported higher satisfaction rates with their AI projects over the last 12 months, leading them to expect to increase their investments by as much as 104% this year.
Nearly all (94%) AI projects at least met expectations in the region, with nearly one in three (31%) exceeding them, putting EMEA ahead of North America, where only 16% of projects exceeded expectations.
Businesses are starting to see rewards from AI
Until now, optimism hasn’t always been met with the same level of tangible success, but the fact that businesses are now seeing results suggests AI wasn’t just a fad after all.
“The EMEA markets present a diverse landscape of AI adoption and it is clear that most organisations have moved past the hype phase of AI and have shifted focus from experimentation to full implementation," noted Lenovo’s President of EMEA for Infrastructure Solutions Group, Giovanni Di Filippo.
Looking ahead, companies expect to increase their generative AI focus from 12% to 44% over the next 12 months – a stark rise. One key area for increased spending is developing and managing AI models (32%, up from 22% last year), suggesting enterprises and many other businesses are seeking to regain control over how they use AI at work.
However, time and time again we’re faced with the same challenges, and it’s clear that businesses haven’t overcome them yet.
Lenovo highlighted how poorly prepared data poses an obstacle to AI implementation, with 29% acknowledging they have data quality issues. Others state that the enforcement of their AI governance, risk and compliance (GRC) policies is limited (26%), while an alarming 22% of the surveyed businesses have no plans to establish AI GRC policies.
To address at least some of the concerns, Lenovo believes, “hybrid infrastructure is key.” Two-thirds (65%) of organizations across EMEA are already using on-prem or hybrid as their primary architecture, with 18% preferring public cloud.
AI PCs are also set to play a role in localizing AI capabilities, with 65% planning on using the devices soon. Separately, Canalys Principal Analyst Ishan Dutt predicts that AI PCs could account for 35% of the PC sector in 2025; previous predictions for the final quarter of 2024 had this at 20%, indicative of significant growth in the year to come.
Botnet activity on connected devices is up 500% thanks to default passwords, outdated software, and inadequate security protections creating backdoors into enterprise networks. Now, even entry-level hackers with off-the-shelf tools are getting in on the act.
In November, researchers discovered a new and dangerous botnet, Matrix, built from open source and readily available tools rather than custom code. While not highly sophisticated, this operation shows how bad actors with basic technical knowledge can build and sell botnets with the potential for widescale damage.
This is an escalating issue and something’s got to give. Stricter device regulations are on the way in 2025 but, until they’re enforced, it’s up to admins to step up. This demands immediate action on software patching, strong authentication, and unified device management.
Growing devices, growing botnets
It’s no coincidence that connected devices and botnets are growing at similar rates. In the past five years, consumers and enterprises have embraced devices in the smart home and office, resulting in a doubling of devices in the Internet of Things (IoT). This number is expected to double again in the next decade to more than 40 billion worldwide.
This is a problem since not all devices are created equal. By scanning the internet for known software flaws or easy-to-break passwords – two common vulnerabilities in cheaper products – hackers can bend these machines to their will. With more devices, there are more botnet targets.
Once compromised, devices become unwitting recruits in massive botnet armies, allowing attackers to spread malware, launch devastating DDoS attacks, and infiltrate critical enterprise systems. Nokia recently reported IoT devices engaged in botnet-driven DDoS attacks are up 500% over the past 18 months and account for 40% of all DDoS traffic.
Matrix only ups the degree of difficulty. This latest arrival demonstrates how making a botnet isn’t as hard as one might think, opening up new avenues for individuals to execute broad, multi-faceted attacks on numerous endpoint vulnerabilities and misconfigurations. Even more concerning? The solution is for sale as a commercial botnet-as-a-service offering, turning basic tech know-how into automated hacking weaponry. And with enterprise ecosystems now counting more endpoints than ever before, it’s clear that admins must redouble their cybersecurity efforts in the face of this escalating threat.
Three ways admins can fight back against botnets
First, and it should go without saying, change any default passwords. Generic credentials are often shared across entire fleets of the same device – meaning hackers might already have your login if it’s left unchanged. Regardless of whether you’re securing a camera, sensor, or industrial control, don’t do default. Strong, randomized passwords are non-negotiable; go a step further with two-factor authentication for added protection.
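To make the randomized-credentials advice concrete, here’s a minimal Python sketch using the standard library’s secrets module. The device names are placeholders, and in practice you’d push the output into a password manager or provisioning system rather than print it.

```python
# Minimal sketch: mint a unique, cryptographically secure credential per
# device instead of relying on a fleet-wide default. Standard library only.
import secrets
import string

def generate_device_password(length: int = 24) -> str:
    """Return a strong random password suitable for device provisioning."""
    alphabet = string.ascii_letters + string.digits + "-_!@#$%"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Hypothetical fleet of devices that shipped with the same default login
fleet = ["camera-01", "sensor-17", "plc-03"]
credentials = {device: generate_device_password() for device in fleet}
for device, password in credentials.items():
    print(f"{device}: {password}")
```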
Next, strengthen your software. Half of last year’s enterprise vulnerabilities remain unpatched, making them perfect botnet targets. Automated patch management isn’t optional – it’s integral to security survival.
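As a sketch of what the automated side can look like at its simplest – assuming a Debian or Ubuntu host, and noting that real fleets would use a dedicated patch-management platform – a few lines of Python can flag machines with updates waiting:

```python
# Minimal sketch, assuming a Debian/Ubuntu host: count packages with
# pending updates so a scheduler or monitoring agent can flag the device.
import subprocess

def upgradable_packages() -> list[str]:
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Package lines look like "name/suite version arch [upgradable from: x]"
    return [line.split("/")[0] for line in out.splitlines() if "/" in line]

pending = upgradable_packages()
print(f"{len(pending)} packages awaiting patches: {pending[:5]}")
```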
Finally, be proactive. Hackers are counting on admin complacency and weak backend safeguards. Prove them wrong. Contain breaches by segmenting networks, consolidate endpoint management with a unified console, and deploy AI monitoring to catch suspicious behavior.
A critical step here is developing an incident response plan. Many organizations discover botnet infections too late because they lack clear protocols for detection and response. Regular tabletop exercises and automated network monitoring (more on that below) can help teams identify weak points and practice responding to potential breaches before they occur. These basics separate minor hiccups from major incidents.
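As a toy illustration of what automated monitoring can surface – every device name and traffic figure below is made up, and a real deployment would pull flow telemetry from a NetFlow collector or an EDR agent – even a simple baseline-plus-three-sigma rule can flag a device that has been conscripted into a botnet:

```python
# Toy sketch: flag devices whose outbound connection counts spike far
# above their historical baseline. All figures here are illustrative.
from statistics import mean, stdev

baseline = {  # connections per hour observed over a normal week
    "camera-01": [12, 15, 11, 14, 13, 12, 16],
    "sensor-17": [3, 4, 2, 3, 5, 3, 4],
}
current = {"camera-01": 14, "sensor-17": 412}  # sensor-17 looks compromised

for device, history in baseline.items():
    mu, sigma = mean(history), stdev(history)
    if current[device] > mu + 3 * sigma:  # simple three-sigma rule
        print(f"ALERT: {device} made {current[device]} connections "
              f"(baseline ~{mu:.0f}/hour)")
```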
Be smart and proactive
It’s worth mentioning that various regulations are coming online to help stop botnets this year.
Europe, the United States, and the United Kingdom are taking aim at basic vulnerabilities in different ways. Europe’s Cyber Resilience Act, for example, will go a long way to closing device backdoors by banning default passwords and obligating manufacturers to provide software support throughout a product’s lifespan.
Across the Atlantic, expect to see a consumer tick of approval on connected devices that meet cybersecurity minimums. Let’s hope these concerted efforts across major markets will hit botnets where it hurts – easy-to-exploit vulnerabilities – and make us all a little safer.
In the meantime, the buck stops with admins, and it’s not easy in a landscape of growing devices, experimental hackers, and stretched IT teams. To close the gap, look for extra and smarter ways to oversee your ecosystem. Make your life easier with automation, maintain a real-time ecosystem inventory, and establish clear security baselines for new endpoints. You’ll find that relatively small changes to how you manage, authenticate, and protect devices can make a big difference to your overall security posture.
This isn’t to say to do away with endpoints – far from it. Connected devices are popular in enterprises big and small for a reason. They unlock operational data, deliver business insights, and achieve newfound efficiencies. The key is to onboard them consciously and carefully, slamming shut every potential backdoor while unleashing the full promise of tomorrow’s smart office.
We've compiled a list of the best endpoint protection software.
As artificial intelligence continues to reshape industries, leaders around the world are navigating the challenge of how to create clear and consistent regulations that balance innovation with safety. In September, representatives from technology companies, institutions, and researchers issued an open letter to European policymakers, warning that fragmented and inconsistent rules risk depriving the EU of two cornerstones of AI innovation: “open” and “multimodal” models. Open models are free and available to everyone to use, modify, and build on, which spreads social and economic opportunity. The latest multimodal models operate fluidly across text, images, and speech and will enable the next wave of breakthroughs in AI.
Multimodal AI represents a significant leap forward from traditional AI systems. Conventional AI typically focuses on one modality at a time, for example, a text-based chatbot processes only text, and a voice assistant like Siri primarily processes voice inputs. Multimodal AI systems process and respond across multiple formats simultaneously — integrating text, voice, images, and gestures to deliver more intuitive user experiences that feel more natural and human.
Transforming customer experience through multiple touchpoints
Multimodal AI is revolutionizing customer experience, offering transformative possibilities for how brands and customers interact. At their core, these systems have evolved how customers can engage with brands by offering unmatched flexibility in communication methods. They also boost efficiency by leveraging how humans naturally process information, letting users input data the fastest way they can, through speech, and delivering responses in formats that best suit their preferences or needs.
A customer may, for example, begin their interaction through voice commands while driving, seamlessly switch to text upon entering a quiet environment and receive visual confirmations throughout their journey. This adaptability creates a more natural and comfortable experience while maintaining conversational context across different modes of interaction.
With voice interfaces providing much-needed alternatives for individuals with visual impairments and text and visual outputs serving those with hearing difficulties, multimodal systems are helping to remove barriers and promote inclusivity, broadening access to everyday tasks and interactions with brands.
By synthesizing various forms of input, multimodal AI systems are building a more comprehensive understanding of user intent and context, resulting in more accurate and relevant responses. This deeper level of understanding significantly reduces friction in customer interactions and leads to improved overall satisfaction. Notably, multimodal AI’s ability to process multiple types of input simultaneously also leads to enhanced contextual intelligence.
In the retail sector, for instance, multimodal AI is revolutionizing online and in-store consumer experiences. Leading retailers are using the technology to help customers search for products more easily using a combination of voice queries and images. For example, shoppers can use smartphones to photograph a piece of furniture and then verbally specify modifications such as, “show me this in blue” or “find similar items at a lower price point.”
Smart mirrors with multimodal AI are another innovative retail application. They respond to voice commands and gestures, enabling customers to “try on” clothes virtually in their reflections, request different sizes or colors, and receive product recommendations. These use cases demonstrate how powerful multimodal AI can be in blending the best of digital and physical retail.
Best practices for implementing multimodal AI
For organizations looking to implement multimodal AI solutions, several best practices should be considered:
Seamless Integration: The key to successful multimodal implementation lies in creating smooth transitions between different modes of interaction. Users should be able to switch between voice, text, and visual interfaces without disrupting their experience or losing context (see the sketch after this list).
User-Centric Design: Organizations need to understand the preferences of their specific user base to deliver the best experience. This insight should guide the choice of modalities, ensuring the technology serves real user needs rather than being implemented for its own sake.
Contextual Data Utilization: Effective multimodal systems should leverage available contextual data, including location information, interaction history, and user preferences, to deliver more personalized experiences. However, this must be balanced with strong privacy protections, informed user consent, and transparent data collection and usage policies.
Accessibility First: Rather than treating accessibility as an afterthought, organizations should place it at the core of their multimodal AI strategy. This approach not only serves users with different abilities but often leads to better solutions for all users.
Continuous Improvement: The field of multimodal AI is rapidly evolving, making it essential for organizations to update and refine their systems regularly. This includes incorporating customer feedback, adapting to new technological capabilities, and maintaining robust security measures.
Leverage Third-Party Expertise: Partnering with an expert provider can help organizations navigate the complexities of multimodal AI implementation. These providers bring specialized expertise, ensuring seamless integration, responsible innovation, and adherence to regulatory standards. These collaborations can accelerate deployment while maximizing the technology’s impact on customer experiences.
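To make the seamless-integration point concrete, here is a generic sketch – our own illustration, not any vendor’s API – of a session object that keeps a single shared history no matter which modality each turn arrives through, so switching from voice to text never drops the conversation state:

```python
# Illustrative sketch (not a vendor API): one shared conversation history
# across modalities, so a mode switch never loses context.
from dataclasses import dataclass, field

@dataclass
class Turn:
    modality: str   # "voice", "text", or "image"
    content: str    # transcript, typed message, or image description

@dataclass
class MultimodalSession:
    history: list[Turn] = field(default_factory=list)

    def add_input(self, modality: str, content: str) -> None:
        # Voice is transcribed and images are described upstream, so every
        # modality lands in the same shared history.
        self.history.append(Turn(modality, content))

    def context(self) -> str:
        return " | ".join(f"[{t.modality}] {t.content}" for t in self.history)

session = MultimodalSession()
session.add_input("voice", "Find me a blue armchair")  # spoken in the car
session.add_input("text", "Under 300 dollars")         # typed in the office
print(session.context())  # both turns survive the modality switch
```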
Looking ahead: the future of CX
As generative AI (GenAI) continues to evolve, multimodal AI is unlocking new opportunities for brands to win customers, build loyalty, and drive higher engagement. Offering seamless and personalized experiences enables brands to attract new customers while strengthening relationships with existing ones, encouraging repeat business and increased spending. This technology allows brands to create more meaningful and impactful customer interactions across the entire customer journey.
For multimodal AI to thrive, technology leaders need to have confidence in consistent rules that balance safety with innovation. Europe has the opportunity to create a regulatory framework that addresses potential risks while unlocking the full potential of this transformative technology.
We've compiled a list of the best customer database software.
OM System just doubled down on its retro appeal with a new OM-3 mirrorless camera. It's a stunning take on the original Olympus OM-1 film SLR from the 1970s, and the closest fans are getting to a reboot of 2016's popular digital Olympus Pen-F.
Costing £1,699 body-only, or £1,999 with the 12-45mm F4 Pro lens (US and Australia pricing to follow), the OM-3 sits between the flagship OM-1 II and the enthusiast-level OM-5. It delivers OM System features we already know: the same stacked 20MP micro four thirds sensor and TruePic X processor as the OM-1 II, together with class-leading in-body image stabilization rated up to 7.5EV and quad-pixel autofocus with AI subject detection, wrapped in a robust IP53-rated weatherproof body.
The OM-3 also features a creative dial à la Pen-F. Olympus was ahead of the curve with the Pen-F, since we now see the likes of the Fujifilm X-T50 and Panasonic Lumix S9 capitalizing on the interest in custom color profiles with dedicated controls providing quick access to a catalog of custom looks.
In the OM-3, we get a creative dial with four modes: Color Profile, Monochrome Profile, Art Filters, and Color Creator. Color Profile has four customizable settings that emulate film looks, as does Monochrome Profile with its four black-and-white looks.
There's much to like about the OM-3. It packs the same power as the pricier OM-1 II into a more affordable, retro body with direct access to key features. This is a camera that OM System fans in particular have been waiting for, and a compelling Fujifilm alternative.
More than a pretty face
The OM-3 is a stunner that weighs under 500g and is smaller than the OM-1 II, but its appeal isn't just surface-deep. It's a highly robust metal camera that's dustproof, splashproof and freezeproof, plus it has a well thought-out design with direct access to many of its powerful, modern features.
There's a new dial for photo, video, or slow and quick motion options, with slow motion recording up to 60fps in 4K and up to 120fps in Full HD. OM System has introduced new Cinema modes with Log color profiles for video that maximize the dynamic range of the camera for grading clips later.
There's also a new button to access computational photography features directly, with OM System's full suite of modes to hand: Live ND filters with six levels of strength from ND2 to ND64, the graduated ND filter effect introduced in the OM-1 II, live composite for light trails, together with a High Res Shot mode to increase resolution up to 80MP, plus a focus stacking mode. These are clever features that minimize the accessories you need and the time spent editing at a computer.
Price-wise, the OM-3 is pitted against the likes of the Fujifilm X-T5, despite its lower pixel count and smaller sensor size, which will always be a strike against it for some. However, I can't think of a faster camera for the money – this 120fps-shooting stunner could be the ultimate travel and outdoors camera, especially for those with an eye for design.

OM System also launched mark II versions of three of its lenses alongside the camera, all of which add weather-sealing to the original versions: the M.Zuiko Digital 17mm F1.8 II, the 25mm F1.8 II, and the ED 100-400mm F5.0-6.3 IS II. I can see the 17mm lens in particular being a lovely pairing with the OM-3 for street photography. We're currently conducting a full OM-3 review, coming soon.
TikTok parent company, ByteDance, is showing off a new AI video creator that can produce vivid videos of people talking, singing, and moving around from a single photograph. The new OmniHuman model can bring an image to life with eerily accurate body movements, facial expressions, and gestures.
OmniHuman’s breakthrough involved training on more than 18,700 hours of video. The AI can now mimic how humans move, speak, and interact in videos. Notably, this AI can create fully moving characters rather than just animating a face or upper body. That means a single picture can be turned into a video of someone giving a speech, dancing, or even playing an instrument.
The result is a very realistic video, whether the character is a human from a photograph or one from a more stylized painting. You can see examples below.
OmniHuman everywhere
If and when ByteDance does make OmniHuman available, it's easy to imagine it blowing up on TikTok. The company already offers an AI video-maker named Jimeng on the platform, and something like OmniHuman could entice many more people to play with TikTok and its other features.
Of course, ByteDance won't enter the space without competition. OpenAI's Sora has drawn accolades and is a big name in the AI video space, but there are plenty of others, such as Pika, Runway, Pollo, and Luma Labs' Dream Machine.
There are plenty of potential uses for ByteDance's model, whether recreating actors of the past for new movies or teaching students history from the simulated mouths of historical figures. Even digital avatars for social media and gaming could become more lifelike, adapting in real time based on user input.
OmniHuman is still a research project for now, but the fact that ByteDance is already showcasing its capabilities suggests that practical applications aren’t far behind. The AI character below could be the next face of a video trend on TikTok.
If you’re looking for the most immersive visual and sound experience when watching Super Bowl LIX, you might be very thankful you subscribe to Comcast – or wish you did – this Sunday, February 9, 2025. Comcast is scoring a touchdown as, for the first time, the provider will broadcast Super Bowl LIX with Dolby Atmos sound.
Best of all, it’s alongside Dolby Vision, which isn’t a first, but it’s kind of the perfect pairing and made possible by a partnership between Comcast and Dolby. You’ll need to be subscribed to Xfinity X1 to enjoy this ‘Enhanced 4K’ presentation, which Comcast describes as “an unmatched viewing experience with the best picture and audio quality delivered to the home in the fastest way possible so the action customers see in their living room is only seconds behind the game unfolding in New Orleans.”
That feed will end up on the big screen through a Comcast X1 equipment box or the Xfinity Stream app running on one of the best streaming boxes or sticks. The app is also compatible with iOS and Android, so you can watch it on your phone or tablet.
A look at the Comcast Xfinity X1 interface on a TV from one of the X1 cable boxes. (Image credit: Comcast)
Even so, the best way to experience the Super Bowl in Dolby Vision and Dolby Atmos would be on a big screen – like one of TechRadar’s picks for the best TV – and with one of the best soundbars or home theater setups. This way, you can feel the immersion as the audio from the game gets presented in a full Dolby Atmos mix.
Paired with a big, crisp 4K TV, you’ll likely get the feeling you’re inside the Superdome in New Orleans. Of course, Fox Sports will still present the game in 4K through Comcast and other providers, including for free on Tubi. You’ll also want your TV – and streaming box – to support the Dolby Vision visual format and the Dolby Atmos audio format. So yes, some specific hardware beyond being an X1 subscriber is required. You'll need either the Xi6, XiOne, or XG1v4 Comcast box, though the latter doesn't yet support Dolby Vision.
The whole idea with Comcast’s Enhanced 4K product, though, is the best possible resolution – aka 4K – delivered with super low latency in both Dolby Vision and Dolby Atmos. Previous Super Bowls streamed on X1 were broadcast with Dolby Vision, but it was the 2024 Paris Olympics that first checked off all the Dolby boxes by adding Dolby Atmos. This won’t be the first sport presented in Dolby Atmos either; Apple also regularly streams all MLS Season Pass matches on Apple TV Plus with a Dolby Atmos mix.
Still, this is an exciting test for Comcast’s Enhanced 4K product, as the Super Bowl will be one of the most-watched events of the year. And if you have the proper setup at home, you’ll be in for a treat as the Kansas City Chiefs face the Philadelphia Eagles in Super Bowl LIX.
And if you're looking to upgrade your home entertainment setup before the big game, check out the best Super Bowl TV deals here.
Gigabyte, through its subsidiary Giga Computing, is offering qualified individuals and organizations the opportunity to test one of the world’s most advanced supercomputers, the Gigabyte G383-R80 server powered by AMD MI300A APUs, for free.
There's a catch: after the seven-day trial period, the configured price of this high-performance system is $304,207.
The price isn't the only thing that might put you off; in addition to the strict time-bound trial period, distributors aren't eligible to apply, and users must have a clear project in mind to qualify.
Claiming Gigabyte's G383-R80 offer
To participate, applicants must fill out a form on the Gigabyte Launchpad website. Giga Computing will then review the application based on the commercial value or innovative potential of the proposed project.
If approved, the company will contact the applicant within three business days to confirm details. The trial period lasts for seven days, though extensions of up to two weeks can be requested through a sales representative or via email. Access to the server will be granted within two weeks, and users must initiate their project within three days of receiving the access link.
The Gigabyte G383-R80 server is designed for demanding workloads such as AI training, AI inference, and high-performance computing (HPC). It features a 3U rack-mount chassis and supports up to four AMD Instinct MI300A APUs, which combine CPU and GPU cores for accelerated computing.
For storage, it has eight 2.5-inch NVMe Gen5/SATA/SAS hot-swap drive bays and a variety of options including M.2 NVMe SSDs and U.2/U.3 NVMe SSDs with capacities ranging from 400GB to 61.44TB, plus 12 PCIe Gen5 x16 expansion slots.
Networking capabilities include onboard 10Gb/s Ethernet ports and support for PCIe expansion cards like RJ45, SFP+, QSFP28, and QSFP56.
A team of researchers at the University of Hong Kong has designed and tested an advanced aerial robot capable of navigating complex environments at high speeds of up to 20 meters per second while maintaining precise control.
Named SUPER, the quadcopter drone uses cutting-edge LiDAR technology to detect and avoid obstacles, even thin wires that have posed challenges for traditional drones.
In research published in Science Robotics (via Techxplore), the team noted that while SUPER has potential applications in search and rescue, its ability to operate autonomously in unknown environments suggests it could also be used for law enforcement and military reconnaissance.
The power of LiDAR for precision flight
Unlike conventional aerial robots that rely on cameras and sensors, SUPER uses 3D light detection and ranging (LiDAR) to map its surroundings and process spatial data in real time, allowing it to function in low-light conditions.
With a detection range of up to 70 meters, the LiDAR system feeds data to an onboard computer that continuously analyzes the environment.
This system enables SUPER to chart two distinct flight paths, one prioritizing safety and another allowing for exploratory movement.
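As a toy illustration of that dual-trajectory idea – a simplified sketch of the concept, not the researchers' actual planner – the drone can pair an exploratory path that pushes into unmapped space with a fallback that never leaves known-free space:

```python
# Toy sketch of dual-path planning: an exploratory trajectory may enter
# space the LiDAR hasn't fully mapped, while a backup trajectory always
# stays within known-free space so the drone can stop safely.
def plan(known_free_m: float, target_m: float, v_max: float = 20.0) -> dict:
    exploratory = min(target_m, known_free_m + 10.0)  # probes unknown space
    safe = min(target_m, known_free_m)                # guaranteed stoppable
    # Slow down whenever the exploratory path leaves the mapped region
    speed = v_max if exploratory <= known_free_m else 0.5 * v_max
    return {"exploratory_m": exploratory, "safe_m": safe, "speed_mps": speed}

print(plan(known_free_m=40.0, target_m=70.0))  # illustrative numbers
```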
To demonstrate its capabilities, the research team tested SUPER against a commercial drone, the DJI Mavic 3.
While the DJI model avoided larger obstacles, it struggled to detect thin wires of smaller diameters. In contrast, SUPER successfully avoided all obstacles, including 2.5-mm thin wires, thanks to its high-resolution point cloud processing.
The tests also showed the drone gliding through dense forests, tracking moving targets without colliding with trees or branches.
Google has dropped a major upgrade to the Gemini app with the release of the Gemini 2.0 Flash Thinking Experimental model, among others. This combines the speed of the original 2.0 model with improved reasoning abilities. So, it can think fast but will think things through before it speaks. For anyone who has ever wished their AI assistant could process more complex ideas without slowing its response time, this update is a promising step forward.
Gemini 2.0 Flash was originally designed as a high-efficiency workhorse for those who wanted rapid AI responses without sacrificing too much in terms of accuracy. Earlier this year, Google updated it in AI Studio to enhance its ability to reason through tougher problems, calling it the Thinking Experimental. Now, it’s being made widely available in the Gemini app for everyday users. Whether you’re brainstorming a project, tackling a math problem, or just trying to figure out what to cook with the three random ingredients left in your fridge, Flash Thinking Experimental is ready to help.
Beyond the Thinking Experimental, the Gemini app is getting additional models. The Gemini 2.0 Pro Experimental is an even more powerful, if somewhat more cumbersome, version of Gemini. It's aimed at coding and handling complex prompts, and it’s already been available in Google AI Studio and Vertex AI.
Now, you can get it in the Gemini app, too, but only if you subscribe to Gemini Advanced. With a context window of two million tokens, this model can simultaneously digest and process massive amounts of information, making it ideal for research, programming, or rather ridiculously complicated questions. The model can also utilize other Google tools like Search if necessary.
Lite speed
Google is also augmenting the app with a slimmer model called Gemini 2.0 Flash-Lite. This model is built to improve on its predecessor, 1.5 Flash. It retains the speed that made the original Flash models popular while performing better on quality benchmarks. In a real-world example, Google says it can generate relevant captions for around 40,000 unique photos for less than a dollar, making it a potentially fantastic resource for content creators on a budget.
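For developers, the captioning use case would look something like the sketch below, using Google's google-generativeai Python SDK. Treat the model id string as an assumption based on the naming above, and check Google's current model list before relying on it.

```python
# Hedged sketch of photo captioning with the google-generativeai SDK.
# The model id "gemini-2.0-flash-lite" is an assumption; verify it against
# Google's published model names before use.
import os
import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash-lite")

def caption(path: str) -> str:
    image = Image.open(path)
    response = model.generate_content(
        ["Write one short, relevant caption for this photo.", image]
    )
    return response.text.strip()

print(caption("photo_0001.jpg"))  # placeholder filename
```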
Beyond just making AI faster or more affordable, Google is pushing for broader accessibility by ensuring all these models support multimodal input. Currently, the AI only produces text-based output, but additional capabilities are expected in the coming months. That means users will eventually be able to interact with Gemini in more ways, whether through voice, images, or other formats.
What makes all of this particularly significant is how AI models like Gemini 2.0 are shaping the way people interact with technology. AI is no longer just a tool that spits out basic answers; it’s evolving into something that can reason, assist in creative processes, and handle deeply complex requests.
How people use the Gemini 2.0 Flash Thinking Experimental model and other updates could show a glimpse into the future of AI-assisted thinking. It continues Google's dream of incorporating Gemini into every aspect of your life by offering streamlined access to a relatively powerful yet lightweight AI model.
Whether that means solving complex problems, generating code, or just having an AI that doesn’t freeze up when asked something a little tricky, it’s a step toward AI that feels less like a gimmick and more like a true assistant. With additional models catering to both high-performance and cost-conscious users, Google is likely hoping to have an answer for anyone's AI requests.
iOS 18.3 is here – and contrary to rumors being spread on TikTok and elsewhere, it doesn’t install Elon Musk’s Starlink tech on your iPhone. So, there’s no reason not to get the latest iOS update on your iPhone and ensure that you’re up to date.
iOS 18.3 is a relatively minor update, which mostly impacts Apple Intelligence – enabling the AI features by default and rolling out some fixes for Notification Summaries – and fixing several bugs. It does, however, make a change for T-Mobile customers by allowing the iPhone 14, iPhone 15, and iPhone 16 lineups to potentially connect to the Starlink-powered terrestrial network of the carrier.
It is not, however, allowing that connection by default, and T-Mobile’s partnership with Starlink is still in beta for a select few customers who opt to join it and then get selected to participate. Apple doesn’t have a partnership with Starlink, but T-Mobile does, and you need to opt in at several points. Let’s unpack this.
The myth: iOS 18.3 installs Starlink on your iPhone
The concern in the now-viral TikToks is that the latest version of iOS basically adds a direct connection to Starlink to your iPhone. The main point of concern is that ‘Starlink can now work with iPhone and access it’ without any formal announcement from Apple on whether it’s a mandatory connection.
Apple initially launched its Emergency SOS via Satellite service alongside the iPhone 14 – with support for the iPhone 15 and 16 – so the smartphones could connect to a satellite. However, it’s not on by default and is only engaged when no LTE or Wi-Fi network is available.
Since then, some carriers have offered satellite networks alongside a typical phone network. T-Mobile is doing that, and it initially announced its partnership with Starlink in August 2022.
Apple also updated a support page detailing how to turn off carrier-powered satellite features. To do this, open Settings, navigate to Cellular, select your carrier, and turn off ‘Satellite.’
Simply put, iOS 18.3 does not install Starlink on the iPhone. Rather, the capability is packaged within iOS 18.3 as a carrier network settings update for T-Mobile that allows for the connection. It is not on by default, and you need to be selected to join the beta after requesting a spot.
The reality: T-Mobile has a partnership with Starlink that is currently in beta, and iOS 18.3 is safe to install
So no, iOS 18.3 does not add a direct line to the Starlink network – forced or unforced – as some viral TikToks claim. It makes network settings changes that allow a T-Mobile-connected iPhone 14, 15, or 16 to connect to T-Mobile’s 1900MHz spectrum – accessed through ‘band 25’ on the iPhone – to reach the Starlink network.
Even for that network connection to happen, you need to have an eligible T-Mobile plan, register for the beta, and be selected to participate in it. Then, you need to be in an area where that network is supported and where a typical cellular network or Wi-Fi is unavailable. You’ll know that is the case when you see “SAT” replace the standard cellular bars and “4G,” “5G,” or “5G UW” in the top right corner of your iPhone.
T-Mobile opened its Starlink network beta program in December 2024, and interested customers have been able to register for it. It was first available to Android smartphones, and then the capability for the iPhone rolled out with iOS 18.3.
The partnership, and the ability for T-Mobile devices to connect to Starlink’s direct-to-cell technology, aims to reduce dead zones and allow users to stay connected. T-Mobile is also the only cellular network in the United States to have this partnership with Starlink.
Apple’s satellite connectivity for its iPhone under the ‘Emergency SOS’ feature is not Starlink and is done through a partnership with Globalstar.
Furthermore, it’s also best practice to keep your iPhone and other devices up to date, as using older software can make them more susceptible to security and privacy issues. iOS 18.3, like most iOS updates, brings some new features but also, at times, critical bug fixes and important security patches.
So, long story short, iOS 18.3 does not add a direct connection to Starlink to your iPhone. It simply allows a T-Mobile-connected iPhone to use that network when you're outside of traditional coverage, if you’ve opted into the beta and have been selected. It’s also a partnership between T-Mobile and Starlink – Apple isn’t involved there, and it doesn't change Apple's Emergency SOS via Satellite functionality. That service has been using Globalstar satellites since its inception.
If you’re on T-Mobile and want to opt out of using Starlink, open Settings on your iPhone, tap Cellular, select your Carrier (in this case, T-Mobile), and turn off Satellite.
Chipmaking giant AMD has confirmed it recently patched a high-severity vulnerability affecting its Zen 1 to Zen 4 CPUs.
The company published a new security advisory, detailing the bug and its potential for exploitation, noting, “Researchers from Google have provided AMD with information on a potential vulnerability that, if successfully exploited, could lead to the loss of SEV-based protection of a confidential guest."
SEV is short for Secure Encrypted Virtualization - a hardware-based security feature designed to enhance the confidentiality and integrity of virtual machines (VMs) running on AMD EPYC processors. It encrypts the memory of individual VMs using unique encryption keys, ensuring that neither the hypervisor nor other VMs can access their data.
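For the curious, on a Linux/KVM host you can check whether SEV features are enabled by reading the kvm_amd module parameters. A minimal sketch follows, assuming the module is loaded; which parameters exist varies by kernel version.

```python
# Minimal sketch, assuming a Linux/KVM host with kvm_amd loaded: read the
# module parameters that report whether SEV, SEV-ES, and SEV-SNP are on.
from pathlib import Path

def kvm_amd_param(name: str) -> str:
    path = Path(f"/sys/module/kvm_amd/parameters/{name}")
    return path.read_text().strip() if path.exists() else "unavailable"

for param in ("sev", "sev_es", "sev_snp"):
    print(f"{param}: {kvm_amd_param(param)}")  # "Y" or "1" means enabled
```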
Mitigations available
The vulnerability is tracked as CVE-2024-56161, and has a severity score of 7.2/10 (high). It is described as an improper signature verification flaw in the AMD CPU ROM microcode patch loader, which could allow threat actors with local admin privileges to load malicious CPU microcode. As a result, the confidentiality and integrity of a confidential guest running under AMD SEV-SNP would be lost.
“AMD has made available a mitigation for this issue which requires updating microcode on all impacted platforms to help prevent an attacker from loading malicious microcode,” the company concluded.
“Additionally, an SEV firmware update is required for some platforms to support SEV-SNP attestation. Updating the system BIOS image and rebooting the platform will enable attestation of the mitigation. A confidential guest can verify the mitigation has been enabled on the target platform through the SEV-SNP attestation report.”
The company only publicly disclosed the flaw recently, but the patch was actually released in mid-December 2024. AMD decided to delay the announcement to give its customers enough time to mitigate the problem.
At least two workstation specialists have put supercharged PCs with Nvidia RTX 5090 GPUs on sale over the past few days. The most impressive of them all is the Bizon ZX5500, which packs up to seven (yes, seven) water-cooled 32GB RTX 5090 GPUs in a tall tower casing. This is the best GPU ever built, and buying it through system builders seems to be the only way to avoid a months-long wait.
While BizonTech's solution will probably feature in our best workstation guide, it is not as expansive as Comino’s Grando server, which has eight RTX 5090 GPUs – though the latter has yet to get a launch date (I contacted Comino for more details).
The ZX5500 doesn't come cheap at just under $102,000, with the GPUs accounting for the lion's share (more than 83%) of the total cost. That’s almost 3x the price of MIFCOM’s Big Boss, which has seven liquid-cooled RTX 4090 GPUs.
A beefier 6kW power supply unit plus the cards cost an extra $85,000 compared to the same system with a pair of RTX 5080s (16GB VRAM each). As a reminder, the suggested retail price of the RTX 5090 is ‘just’ $2,000.
An RTX 5090 on its retail packaging on a desk (Image credit: Future)
The ZX5500 can be upgraded to a 96-core ThreadRipper Pro CPU with 1TB of DDR5 RAM, almost 1PB of PCIe 4.0 SSDs (15 x 61.44TB SSDs) and seven liquid-cooled Nvidia H200 AI GPUs; such a configuration pushes the price above half a million dollars.
Where to find an RTX 5090? Ask pro system builders
BizonTech is a niche boutique vendor that specializes in servers, workstations and clusters for AI, deep learning and HPC. The RTX 5090 is sold out pretty much everywhere, and it seems that Nvidia is prioritizing business and creative outlets like BizonTech, Puget Systems and Punch Technology, with workstations seemingly ready to be shipped within days rather than weeks.
Jon Bach, president of Puget Systems, told me: “Supply for the 5090 (and the 5080) is very limited, and we expect that to be the case for at least through March. Puget Systems has a good number of cards in hand at the moment because of our OEM relationships, but we appear to be somewhat unusual in that respect. Overall, we are filling orders, but expect our lead times to be affected until supply improves."
The creative crowd will love the RTX 5090 as it obliterates absolutely everything in its path, but at a price. Puget Systems and StorageReview benchmarked it across a wide range of AI and creative tests and found that it performed significantly better than previous generations (and AMD’s finest cards), albeit with much higher power consumption.
TechRadar’s John Loeffler published a review of the RTX 5090 recently, calling it the supercar of graphics cards and asking whether it was simply too powerful, suggesting that it is an absolute glutton for wattage. He continues, “It's overkill, especially if you only want it for gaming, since monitors that can truly handle the frames this GPU can put out are likely years away.”
This, of course, is irrelevant to Nvidia’s plans to launch an even more powerful version of the RTX 5090, one with a rumored 96GB of GDDR7 memory, which will replace the RTX 6000 Ada in due time. If this card follows the same inflationary trajectory as its consumer version, then I won’t be surprised if its ticket price reaches $15,000, making it the most expensive graphics card of all time.
Apple has delivered a new patch to XProtect, its on-device malware removal tool, intended to block several variants of the macOS ‘Ferret’ family of threats.
As reported by AppleInsider, the new update will counter several issues, including Ferret variants FRIENDLYFERRET_SECD, FROSTYFERRET_UI, and MULTI_FROSTYFERRET_CMDCODES.
These malware variants are reportedly used by North Korean hackers in what has been dubbed the ‘Contagious Interview’ campaign, in which criminals create fake job openings, primarily targeting software developers or high-profile industries like defense, government departments, or aerospace. The new updates to XProtect will help block this family of malware from Mac devices. Here’s everything we know so far.
The Ferret Family
These fresh Ferret family variants have been observed by researchers to be associated with the ‘Contagious Interview’ campaign. This attack prompts targets to communicate with an interviewer through a link that shows an error message, urging victims to install or update communication software for virtual meetings.
These ‘updates’ would be disguised as Chrome or Zoom installers, like ChromeUpdate and CameraAccess persistence modules (really FROSTYFERRET_UI). These apps install a malicious persistence agent which runs in the background and steals sensitive data from the victim.
The latest XProtect update will block most known variants which are disguised as macOS system files - including com.apple.secd (FRIENDLYFERRET). However, not all FlexibleFerret variants can be detected, as the malware landscape evolves so quickly.
The campaign has been observed as far back as 2023, and has been attributed to the well-known Lazarus hacking group, which has been seen running several malicious job campaigns to trick jobseekers into downloading malware or trojanized remote access tools.
The data these attackers can access is dependent on the device they infect. Aaron Walton, Threat Intelligence Analyst at Expel points out anyone who falls victim to an attack using their work device, unwittingly puts their organization at risk.
"Though these bad actors typically target people through job offers, it’s fairly common that the individual will run the malware on a corporate device," he notes. "The attackers often know this and use it as a means to gain information from their target organization."
Malware protection
At its origin, this is a social engineering campaign, so staying safe from these attacks is much easier if you can spot the signs. Social engineering attacks like phishing are often personalized, sometimes using information from the dark web – exposed in a data breach, for example.
In this instance, the victims handed their information over as part of the ‘job application’ process, so thoroughly vetting any sites and companies you submit job applications to is really important.
Companies can't stop every phishing attack, and human error will always put organizations at risk, so to mitigate the risks every company, no matter what size, needs a robust cybersecurity strategy. Take a look at our SMB cybersecurity checklist to make sure you're covered.
"For organizations, it is important to have a strong defense-in-depth strategy—think of it as a multi-layered security fortress, where if one defense fails, another may stop the activity. That is, to defend the environment from many different angles. Employ endpoint detection, monitor networks, and empower employees to report suspicious activities", Walton comments.
As with most cyberattacks, vigilance is key. New malware threats are rising faster than ever, so being able to spot the signs can help limit the damage. If your device is suddenly much slower than normal, frequently crashes, or randomly reboots, those are all signs that it may be infected.
Another tell-tale sign is persistent pop-ups. These often-bogus ads are pretty harmless in themselves, but clicking on them might take you to a malicious site, and the ads are often a sign your device is infected. For a more detailed explanation of what to look for, check out our guide here.
For anyone who thinks this may apply to them, check out our list for the best antivirus software, which can be really helpful in locating and removing malware, as well as protecting against repeat infections.
If you do find malware on your device, make sure to remove the infected program immediately. Alongside this, it’s a good idea to disconnect from the internet to prevent the malware from spreading.
China is advancing its broadband infrastructure with the rollout of 50G-PON, a next-generation fiber technology capable of delivering speeds of up to 50Gbps (50,000 Mbps) downstream.
A newly published report by Dell’Oro Group, which gathers information from conversations with equipment vendors and publicly released tender award notifications, projects that PON equipment revenue will grow from $10.5 billion in 2024 to $12.1 billion by 2029.
While this growth will be driven largely by 10Gbps XGS-PON deployments in North America, EMEA, and CALA, China’s 50G-PON deployments place it ahead of the rest of the world. Last year, Omdia forecast that China would be the only commercial market for 50G-PON in 2024 and 2025, accounting for 93 percent of the global market and generating $1.55 billion in revenue by 2027.
Fiber to the Room
PON, or Passive Optical Network, is a fiber-optic technology that enables multiple users to share a single fiber connection using passive optical splitters. This design reduces the need for active electronic components between the provider and end users, lowering infrastructure costs, reducing power consumption, and improving network efficiency.
The 50G-PON ITU-T standard supports theoretical speeds of up to 50 Gbps downstream and up to 25 Gbps upstream, though current real-world deployments in China - led by China Telecom, its regional branch Shanghai Telecom, and ZTE - typically provide 10 Gbps all-optical access.
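To put those line rates in perspective, here's a quick back-of-the-envelope calculation of the worst-case per-subscriber share when every user behind a splitter is active at once (the split ratios are illustrative examples, not figures from the report):

```python
# Back-of-the-envelope: worst-case per-subscriber share of a PON's
# downstream capacity. Split ratios are illustrative examples only.
def per_user_gbps(line_rate_gbps: float, split_ratio: int) -> float:
    return line_rate_gbps / split_ratio

for tech, rate in [("XGS-PON", 10.0), ("50G-PON", 50.0)]:
    for split in (32, 64, 128):
        print(f"{tech} at 1:{split} -> {per_user_gbps(rate, split):.2f} Gbps per user")
```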
Beyond 50G-PON, China is also expanding Fiber to the Room (FTTR), which extends fiber-optic connectivity to individual rooms within homes and businesses. Unlike traditional fiber-to-the-home (FTTH) setups, which typically deliver fiber to a central modem and then rely on Ethernet or Wi-Fi for distribution, FTTR brings fiber-optic cables directly to each room, ensuring faster speeds, lower latency, and more stable connections.
Other highlights from Dell’Oro Group’s report include cable distributed access equipment revenue peaking at $1.3 billion in 2028 as operators continue DOCSIS 4.0 and early fiber deployments.
Fixed wireless CPE is expected to reach its highest revenue in 2025 and 2026, driven by 5G sub-6 GHz and millimeter wave units, while Wi-Fi 7 residential routers and broadband CPE with WLAN are projected to generate $8.9 billion by 2029 as adoption grows among consumers and service providers.
“Quietly, broadband access networks are evolving into large-scale edge compute platforms, with the ability to enable service convergence far more quickly and easily than ever before,” said Jeff Heynen, Vice President at Dell’Oro Group.
“This evolution means that the revenue mix for broadband equipment is shifting over the next five years, with spending on traditional hardware and software now being supplemented by spending on AI and machine learning tools to facilitate convergence and service reliability.”
Chinese hackers have been seen targeting network appliances with malware that gives them persistent access and the ability to run a wide range of actions.
A new report from cybersecurity researchers Fortiguard (part of Fortinet) dubbed the campaign “ELF/SShdinjector.A!tr”, and attributed the attack to Evasive Panda, also known as Daggerfly, or BRONZE HIGHLAND, a Chinese advanced persistent threat (APT) group active since at least 2012.
The group primarily engages in cyberespionage, targeting individuals, government institutions, and organizations. In the past, it was seen running operations against entities in Taiwan, Hong Kong, and the Tibetan community. We don’t know who the victims in this campaign were.
Analyzing with AI
Fortiguard did not discuss initial access, so we don’t know what gave Evasive Panda the ability to deploy malware. We can only suspect the usual - weak credentials, known vulnerabilities, or devices already infected with backdoors. In any case, Evasive Panda was seen injecting malware into the SSH daemon on the devices, opening the door to a wide variety of actions.
For example, the hackers could grab system details, read sensitive user data, access system logs, upload or download files, open a remote shell, run any command remotely, delete specific files from the system, and exfiltrate user credentials.
We last heard of Daggerfly in July 2024, when the group was seen targeting macOS users with an updated version of its proprietary malware. A report from Symantec claimed the new variant was most likely introduced because older variants had become too exposed.
In that campaign, the group used a piece of malware called Macma, a macOS backdoor that was first observed in 2020, but it's still not known who built it. Being a modular backdoor, Macma’s key functionalities include device fingerprinting, executing commands, screen grabbing, keylogging, audio capture, and uploading/downloading files from the compromised systems.
Fortiguard also discussed reverse engineering and analyzing malware with AI. While it stressed that there were usual AI-related problems, such as hallucinations and omissions, the researchers praised the tool’s potential.
"While disassemblers and decompilers have improved over the last decade, this cannot be compared to the level of innovation we are seeing with AI," the researchers said. “This is outstanding!”
Via BleepingComputer
The big features of Sonos' upcoming streaming box have leaked, and they sound pretty damn great, actually. The key elements are that it will have multiple HDMI passthrough ports and will act as an HDMI switch, that it will have a comprehensive range of streaming services in a unified interface, and that it will be able to send wireless audio to Sonos speakers in home theater configurations that don't involve a soundbar at all (or can still include one, but also wirelessly).
The downsides are that the software is being developed by an ad-tech company (and Sonos has a rocky reputation around software over the last year), and that it's predicted to cost $200-$400, which is a lot if you're looking at a living-room setup, since you then need to add all those speakers, and probably stands for the speakers, and all that jazz.
But there's a very interesting potential use case where the price downside really goes away, and the new speaker system could come into its own even more: custom home theater installs.
Not everyone realizes that Sonos is actually a significant name in the world of in-wall and in-ceiling speakers. These aren't wireless, alas, but they're designed to work seamlessly with the Sonos Amp, which knows exactly how to drive them for peak output, and can drive six speakers (three pairs) per Amp box. In the past, their use for home theater has been limited to the Sonos Amp's regular options: they can act as stereo TV front speakers only, or as rear TV speakers when paired with a soundbar.
But if the new streaming box enables more flexible speaker configurations, and can work with Sonos' in-wall speakers connected to multiple Sonos Amps, things could get interesting.
The Sonos Amp could be a secret weapon for the streaming box.
Imagine one of the best projectors (which probably won't have its own streaming tech built in) connected to a Sonos streaming box, which wirelessly sends audio out to two Sonos Amps. One is powering four in-ceiling speakers and a pair of front left and right in-wall speakers; the other is powering a pair of side in-wall speakers and a pair of rear ones. Hopefully the streaming box could also wirelessly connect to a pair (at least!) of Sonos Subs at the same time. That would be quite the Dolby Atmos setup.
Is this superior to connecting an AV receiver to a load of in-wall speakers? Perhaps not, but the installation might be easier if you only need to run cabling a shorter distance to a nearby box, rather than all the way to wherever your AV receiver is.
And you have the ease of use of Sonos' TruePlay tuning, which works excellently to get everything calibrated for your room.
To be clear, this is all speculation on my part – the original leaks about the ability to use speakers for wireless home theater sound said that Sonos is still evaluating exactly which options to include, and we don't know what configurations will be available. But if Sonos makes the Sonos Amp part of the system, the Sonos streaming box could be popular for installations, where price is way less sensitive a topic than most living-room setups.
But what about DTS?
However, there's something else that might put home theater enthusiasts off this whole project, and that's Sonos' ongoing rejection of the DTS sound format. The only real competitor to Dolby (sorry Eclipsa Audio, call me when you're supported by some actual movies!) is a big deal to home theater enthusiasts, because it's the format of choice for so many 4K Blu-rays, and it's also now featured on the Disney Plus streaming service.
If you've gone to the effort of outfitting a projector and all these in-wall speakers, are you going to risk hearing the Oppenheimer soundtrack in anything less than full-power, maximum-impact DTS-HD? No, of course not, you're not a barbarian.
My Sonos Arc Ultra soundbar review would have scored it higher if it supported DTS; when DTS is so common among the competition, it's frustrating that it's missing. And its absence from that soundbar worries me that it's not coming to the streamer either.
I'm worried that the Sonos streaming box could end up falling into a valley between the two different sets of people who might love it: living-room users might be put off by the price; home theater users might be put off by the lack of DTS support.
Fortunately, everything we know so far is based on leaks. Perhaps the price will be a bargain in the end, perhaps it'll support DTS and every wireless configuration known to humankind, perhaps it'll be a total dud. I'm hoping Sonos will realize its potential for custom installs, at the very least.