
TechRadar News


Nothing Phone 3 teaser reveals likely launch date – and an intriguing camera mystery

Mon, 01/27/2025 - 12:45
  • Nothing has released another cryptic teaser, probably linked to the Nothing Phone 3
  • The X post suggests a reveal date of March 4
  • We're not sure if the teaser shows off a new camera system or new buttons

Nothing has released another cryptic teaser that likely hints at the upcoming Nothing Phone 3, which a leaked internal memo called the company’s first flagship.

The company shared a video to X (formerly Twitter) showing off... well, it doesn’t show much. We’ve been treated to an image of a circle above an elongated rounded rectangle, forming a lowercase “i” shape on a black background.

Looking closer reveals a darker strip running from the top to the bottom of the square image, potentially suggesting these circular features are mounted on the phone's side. As the video continues, the edges of the assembly light up in a way reminiscent of the company’s iconic glyph designs.

Furthermore, the post’s caption could suggest a reveal date for the new Nothing flagship. The caption reads: “Power in Perspective. 4 March 10:00 GMT."

This suggests a reveal date of March 4 for the new phone, but beyond that, what this means is currently anyone’s guess. However, the image shown in the video looks like a typical smartphone camera housing, similar to the one found on the Google Pixel 9 Pro, only rotated 90 degrees.

Tentatively, we could read this teaser as suggesting the Nothing Phone 3 will ship with three cameras rather than the two-camera system found on the Nothing Phone and Nothing Phone 2.

This would give some more weight to that “first flagship” comment found in the leaked internal memo we previously reported on, as triple-camera systems are generally only found on higher-spec or pro-level flagship phones.

A third camera would likely be an ultrawide lens, in keeping with other premium flagships like the iPhone 16 Pro Max, Google Pixel 9 Pro, and Samsung Galaxy S25 series.

However, a closer look at the teaser reveals that this glyph appears to be mounted on the side of something – it’s backgrounded by a thin strip of dark gray against the totally black frame.

This would suggest either a side-mounted camera system (as weird as it is unlikely) or a new pair of buttons for the supposedly upcoming phone.

Perhaps Nothing is following the trend of the iPhone 16's Camera Control by adding a shutter button to its next flagship phone. Equally, we could be looking at something much more mundane, like the volume rocker and power button. It seems we have plenty of time for Nothing to reveal more before March 4 rolls around.

Nothing recently released two other teasers, one using the company's NothingOS widgets and the other depicting the Pokémon character Arcanine.

Our Nothing Phone 2 review found the phone to be “cool enough to be different, and powerful enough to recommend”. If the Nothing Phone 3 can do the same while elevating the phone to true premium standards, I wouldn’t be surprised to see it join our list of the best Android phones.

You might also like
Categories: Technology

A self-destructing, 3D printed fungi-based battery could one day power sensors all around you by feasting on sugar

Mon, 01/27/2025 - 12:34
  • Biodegradable battery invented by scientists in Switzerland
  • Fungi, the organisms that form mushrooms, are the core material used
  • The fungi-powered battery generates enough electricity to power sensors

Fungi have fascinated scientists for decades - centuries, probably. There are roughly 200,000 known species across the planet, they are more closely related to animals than plants, the largest organism in the world is a fungus, and some can glow in the dark. If you’ve watched or played The Last of Us, you’ll know the parasitic Cordyceps fungus infects its host by colonizing and consuming its body (admittedly, in the real world, it takes over insects and won’t be invading humans any time soon).

Through a three-year project supported by the Gebert Rüf Stiftung’s Microbials funding program, researchers at Empa (Swiss Federal Laboratories for Materials Science and Technology) have found a novel use for fungi - as they’ve developed a 3D-printed, biodegradable fuel cell that requires feeding rather than charging.

Although the fungal battery (technically it’s a microbial fuel cell rather than a battery per se) produces only modest amounts of electricity, Empa says it can sustain devices such as temperature sensors for several days.

3D printed battery

Microbial fuel cells work by harnessing the metabolism of living organisms to produce electricity. In the past, this was done with bacteria. Empa’s breakthrough combines two fungi species: a yeast fungus on the anode side, which releases electrons, and a white rot fungus on the cathode side, which produces an enzyme that captures and conducts these electrons.

"For the first time, we have combined two types of fungi to create a functioning fuel cell," Empa researcher Carolina Reyes explains.

Rather than adding fungi to a pre-assembled battery, researchers integrated fungal cells into the 3D-printed battery structure itself. Electrodes were carefully designed to provide nutrients to the fungi while remaining biodegradable and conductive.

Traditional battery disposal poses environmental challenges, as many contain toxic materials that can contaminate soil and water if not properly managed. Empa’s living batteries don't have that problem as they cleverly self-digest - by consuming the cellulose-based ink the fungal cells are embedded in - once their purpose is fulfilled.

For the main nutrient source, the researchers add simple sugars to the battery cells. "You can store the fungal batteries in a dried state and activate them on location by simply adding water and nutrients," says Reyes.

Although it’s a promising idea, the project faces challenges due to the complexity of working with living materials, blending microbiology, materials science, and electrical engineering. Empa plans to experiment with different forms of fungi going forward in the hope of finding combinations that will make the fungal battery more powerful and longer-lasting.

You might also like
Categories: Technology

Apple just unveiled its 2025 Black Unity Collection, including an Apple Watch band that honors Black History Month

Mon, 01/27/2025 - 12:15
  • Apple just launched its 2025 Black Unity Collection.
  • There is a new Black Unity Sport Loop for the Apple Watch with a matching watch face.
  • This year's collection was inspired by the “rhythm of humanity.”

Ahead of Black History Month, which kicks off on February 1, Apple just unveiled its 2025 Black Unity Collection, consisting of a special-edition Apple Watch band, a watch face, and a wallpaper fit for the iPhone and iPad.

The collection honors Black History Month and celebrates Black culture. According to Apple, this year’s drop is inspired by the “rhythm of humanity.” It’s aptly named Unity Rhythm and features black, green, and red, the colors of the Pan-African flag.

As with years past, Apple is supporting several organizations with grants alongside this collection. Those include the Ellis Marsalis Center for Music in New Orleans, the Battersea Arts Centre in London, the Music Forward Foundation in Los Angeles, the Art Gallery of New South Wales in Sydney, and The National Museum of African American Music in Nashville, Tennessee. The 2025 Black Unity Collection was designed by Black creatives as well as allies at Apple.

The Black Unity Sport Loop for Apple Watch. (Image credit: Apple)

First up is the new Black Unity Sport Loop band for Apple Watch, which is up for order now at $49 in the United States, £49 in the UK, or AU$69 in Australia. It comes in 42mm and 46mm sizes to fit most Apple Watches. The Black Unity Sport Loop is a woven watch band that boasts a lenticular effect thanks to raised and recessed loops, some in green and others in red. When you’re wearing the band, the idea is that the colors will blend and move together, switching from green to red, with yellow appearing in the middle.

Arriving in a forthcoming software update for the Apple Watch Series 6 or newer is the Unity Rhythm watch face, which takes its cues from the Black Unity Sport Loop, with the numerals for the time constructed from red, green, and yellow loops. Like other Apple Watch faces, it animates when you raise your wrist, the strands coming together to form the time, and Apple notes that “distinctive, rhythmic chimes” will mark every hour and half hour.

A look at the Unity Rhythm watch face for Apple Watch. (Image credit: Apple)

A matching wallpaper for the iPhone and iPad, aptly named Unity Rhythm, will launch with a future software update, likely iOS 18.3 and iPadOS 18.3, which is currently in beta. The wallpaper uses the red, green, and yellow digital strands from the Unity Rhythm watch face to write out Unity.

So, while there is a bit of a wait to access the watch face and wallpaper, Apple is already taking orders for the Black Unity Sport Loop for Apple Watch at its online store, and it will begin arriving in Apple retail locations this week.

You might also like
Categories: Technology

Hisense's mini 4K projector changes my mind about the laser TV revolution in 2 key ways

Mon, 01/27/2025 - 11:49

One of the coolest things I saw at CES 2025 was Hisense's mini laser TV projector, an intriguing prototype that's unlike the many other mini options we've seen recently among the best portable projectors.

I got to see it away from the CES show floor during a trip to Hisense's headquarters recently, and to speak to the team about it – and it's winning me over to the idea of having a laser TV in my home instead of a regular TV.

I'm a huge fan of seeing movies at the theater, and so obviously I'm a big home theater advocate – but my own home doesn't have the space to go all out on a huge screen and speakers, and I'm far from alone there.

It's one reason why Hisense has been pushing the idea of its laser TVs, which are basically ultra-short throw projectors with streaming tech built in, packaged with a matching ambient light-rejecting screen.

But I haven't been convinced that this will work for me because they're big units that still need to sit some distance from the wall – it just didn't seem like enough of a trade up.


The Hisense mini projector is so much smaller, and so the surface it needs to sit on can be much smaller, making the UST projector-and-screen combo way more tempting as something that won't stick way out into the room. Now I can imagine not only going for a laser TV as my main way of watching, but also swapping one of the best soundbars for something more meaty at the same time.

The Hisense mini laser TV is basically the specs of a Hisense PX3-Pro UST projector packed into a way smaller body, thanks to a next-gen laser projection tech platform, but still hits over 100 inches.

We rate this model as the best ultra-short throw projector, and you can read our Hisense PX3-Pro review for why – but the main things are that it's bright, colorful and natural.

Hisense says that this projector beamed onto the company's new-gen ambient light rejection screen should be capable of creating an image that can hit around 750 nits of peak brightness at 100 inches when you're actually watching, putting it in line with budget options among the best OLED TVs for brightness.

When I originally saw the mini projector at Hisense's HQ, it was listed as projecting 2,100 lumens, though at CES Hisense said it should match the PX3-Pro's specs, putting it at 3,000 lumens. Either one is far beyond the small 4K projector competition – the LG CineBeam Q is 500 lumens, for example…

And not only is it bright, but it's 4K and it's ultra short throw, which other small projectors generally are not.

(Image credit: Future)

The small size doesn't just tempt me because the whole setup can stick less far out from the wall (which is essential for me), but the much smaller design leaves more space for other things… such as proper speakers.

I can far more easily imagine connecting a pair of Kanto Ren speakers, or Technics' very cool new active speakers, in a setup like this – the small projector leaves more space, plus a gap between it and the screen that the speakers would fit in perfectly. It's ideal for moving away from the limited acoustics of the best soundbars and into big, meaty left and right power.

What are the downsides? Well, as you might have guessed from the fact that I haven't mentioned this thing's name, it's really a prototype right now, so there's no price or release date – though the tech inside fully exists, so it's probably just a matter of time.

(Image credit: Future)

However, I can tell you that the current version is also notably loud. Those who saw it on the CES show floor would have no idea, but seeing it in a separate showcase, that fan was really working hard.

It's not a surprise – that's a lot of heat for a small box, and if I commit to my beefy speakers next to it, they'll probably manage to drown it out. But I would definitely need to see how any real product handles that before I committed to going for something like this.

Still, this is the first projector that got me thinking excitedly about switching to a projector in my current home, rather than treating the best projectors as something reserved for the fuller home theater setup I want one day. It feels like a small game-changer, in that way.

You might also like...
Categories: Technology

DeepSeek hit by outages – plus all the latest news about the ChatGPT rival

Mon, 01/27/2025 - 11:18

DeepSeek is the most popular app in the world right now, but the AI chatbot is struggling to meet demand, with reports of outages and errors from users around the globe.

It's no surprise to see DeepSeek is already down, considering it's the number one app on the App Store in the US and UK. The new ChatGPT competitor from a Chinese start-up has taken the world by storm thanks to its incredible reasoning power offered at no cost.

Unfortunately, it appears that DeepSeek has encountered malicious attacks as of Monday night in China and we're monitoring the situation to see what happens. Read on to stay up-to-date with all the latest DeepSeek outage information.

DeepSeek limits new users

(Image credit: Future)

Hello, TechRadar's Senior AI Writer John-Anthony Disotto here, and welcome to our DeepSeek liveblog.

DeepSeek, an AI chatbot from a Chinese start-up, has exploded in popularity over the last few days and is now the most popular app in the US and UK App Store on iOS.

Following this overnight success, the AI tool has experienced issues with outages and reports of errors intermittently throughout the day.

At the time of writing, the latest status update on DeepSeek's website reads: "Due to large-scale malicious attacks on DeepSeek's services, we are temporarily limiting registrations to ensure continued service. Existing users can log in as usual. Thanks for your understanding and support."

Is DeepSeek even good?

There's a reason DeepSeek has seen worldwide success almost overnight: it's a completely free-to-use app with reasoning capabilities as good as, if not better than, OpenAI's ChatGPT o1.

Earlier today I experimented with DeepSeek and pitted it against ChatGPT's reasoning model. You can read all about DeepSeek vs ChatGPT here.

What is DeepSeek?

(Image credit: Apple/DeepSeek)

If you want to know more about DeepSeek and everything it has to offer, we've got an excellent article titled "What is DeepSeek?"

There you'll find all the information you need on the new AI chatbot and everything it has to offer, including quick comparisons with ChatGPT.

Live updates on outage as they happen

(Image credit: DeepSeek)

DeepSeek appears to be working fine for me on my iPhone but I've noticed issues on my Google Pixel 8a. We're going to continue to update this live blog with updates throughout the day, so stay tuned to know more about what's going on with DeepSeek as it happens.

As it stands, the company is investigating the reported issues but "large-scale malicious attacks" definitely don't sound like something with an easy fix.

Hello there, Jacob Krol – Managing Editor News US – stepping in here for our live updates on the DeepSeek outage. The company has not provided any further updates since the last one regarding "large-scale malicious attacks," and as my colleague John-Anthony described, there's no telling how long those could take to fix.

DeepSeek's own status page still shows "Degraded Performance" for the website and the API. Further, if you're trying to sign up for the service – especially considering it's still number one on the Apple App Store in the US and UK – you're likely still in limbo, waiting to get through that process.

Categories: Technology

What is DeepSeek? Everything you need to know about the new ChatGPT rival that's taken the App Store by storm

Mon, 01/27/2025 - 11:04

Despite being in development for a few years, DeepSeek seems to have arrived almost overnight after the release of its R1 model on Jan 20 took the AI world by storm, mainly because it offers performance that competes with ChatGPT-o1 without charging you to use it. Its app is currently number one on the iPhone's App Store as a result of its instant popularity.

DeepSeek is a Chinese-owned AI startup and has developed its latest LLMs (called DeepSeek-V3 and DeepSeek-R1) to be on a par with rivals ChatGPT-4o and ChatGPT-o1 while costing a fraction of the price for its API connections. And because of the way it works, DeepSeek uses far less computing power to process queries.

Some security experts have expressed concern about data privacy when using DeepSeek since it is a Chinese company. Obviously, given the recent legal controversy surrounding TikTok, there are concerns that any data it captures could fall into the hands of the Chinese state.

DeepSeek has already endured some "malicious attacks" resulting in service outages that have forced it to restrict who can sign up. Keep up to date on all the latest news with our live blog on the outage.

What is DeepSeek?

DeepSeek is the Chinese startup that created the DeepSeek-V3 and DeepSeek-R1 LLMs. It was founded in May 2023 by Liang Wenfeng, an influential figure in the hedge fund and AI industries.

The first DeepSeek product was DeepSeek Coder, released in November 2023. DeepSeek-V2 followed in May 2024 with an aggressively cheap pricing plan that caused disruption in the Chinese AI market, forcing rivals to lower their prices.

The company's current LLM models are DeepSeek-V3 and DeepSeek-R1. Both have impressive benchmarks compared to their rivals but use significantly fewer resources because of the way the LLMs have been created. DeepSeek-V3 is a general-purpose model, while DeepSeek-R1 focuses on reasoning tasks.

DeepSeek has been able to develop LLMs rapidly by using an innovative training process that relies on trial and error to self-improve. So, in essence, DeepSeek's LLM models learn in a way that's similar to human learning, by receiving feedback based on their actions. They also utilize a MoE (Mixture-of-Experts) architecture, so they activate only a small fraction of their parameters at a given time, which significantly reduces the computational cost and makes them more efficient.
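To make the MoE idea concrete, here is a minimal, purely illustrative routing sketch in Python. It is not DeepSeek's architecture or code, just a toy layer showing the property described above: a router scores every expert, only the top-k run for a given token, and the rest of the parameters sit idle.

```python
# Toy Mixture-of-Experts routing sketch (illustrative only, not DeepSeek's code).
# A router scores every expert, but only the top-k experts actually run,
# so most of the layer's parameters are untouched for any given token.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts in the layer
TOP_K = 2         # experts actually activated per token
DIM = 16          # toy hidden dimension

# Each "expert" is just a small weight matrix here.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
router_weights = rng.normal(size=(DIM, NUM_EXPERTS))

def moe_forward(token: np.ndarray) -> np.ndarray:
    scores = token @ router_weights                   # one router score per expert
    top = np.argsort(scores)[-TOP_K:]                 # pick the k best-scoring experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over the chosen experts
    # Only the selected experts do any work; the other experts are skipped entirely.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, top))

token = rng.normal(size=DIM)
out = moe_forward(token)
print(f"Ran {TOP_K} of {NUM_EXPERTS} experts; output shape: {out.shape}")
```

DeepSeek's experts, router and scale are of course far larger and more sophisticated, but the activate-only-a-fraction principle is the same.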

Both DeepSeek-V3 and DeepSeek-R1 are available to use for free using its chatbot. (Image credit: Apple/DeepSeek)

TL;DR – What is DeepSeek?

DeepSeek offers AI of comparable quality to ChatGPT but is completely free to use in chatbot form. It lacks some of the bells and whistles of ChatGPT, particularly AI video and image creation, but we'd expect it to improve over time.

To use DeepSeek as a chatbot, simply head over to DeepSeek.com and click on Start Now. You'll need to create an account to use it, but you can log in with your Google account if you like.

Alternatively, you can download the DeepSeek app for iOS or Android, and use the chatbot on your smartphone.

DeepSeek price: how much is it and can you get a subscription?

You don't need to subscribe to DeepSeek because, in its chatbot form at least, it's free to use. The DeepSeek chatbot defaults to using the DeepSeek-V3 model, but you can switch to its R1 model at any time, by simply clicking, or tapping, the 'DeepThink (R1)' button beneath the prompt bar.

If you want to use DeepSeek more professionally, using the APIs to connect to DeepSeek for tasks like coding in the background, then there is a charge. Currently, it is just $0.55 per million input tokens and $2.19 per million output tokens. This compares very favorably to OpenAI's API, which costs $15 and $60 per million input and output tokens respectively.
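As a quick sanity check on those numbers, the back-of-the-envelope sketch below compares what a hypothetical workload would cost on each API, using the per-million-token prices quoted above and assuming OpenAI's $15/$60 figures are likewise per million input and output tokens; the workload size is invented purely for illustration.

```python
# Rough API cost comparison using the per-million-token prices quoted above.
# The workload size is made up for illustration; OpenAI's $15/$60 figures are
# assumed to be per million input/output tokens as well.
PRICES = {
    "DeepSeek": {"input": 0.55, "output": 2.19},    # USD per million tokens
    "OpenAI o1": {"input": 15.00, "output": 60.00},
}

INPUT_TOKENS = 5_000_000     # hypothetical monthly usage
OUTPUT_TOKENS = 1_000_000

for name, p in PRICES.items():
    cost = (INPUT_TOKENS / 1_000_000) * p["input"] + (OUTPUT_TOKENS / 1_000_000) * p["output"]
    print(f"{name}: ${cost:.2f}")
# Prints roughly $4.94 for DeepSeek vs $135.00 for OpenAI o1 on this hypothetical workload.
```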

DeepSeek and ChatGPT: what are the main differences?

While its LLM may be super-powered, DeepSeek appears to be pretty basic in comparison to its rivals when it comes to features. In terms of chatting to the chatbot, it's exactly the same as using ChatGPT – you simply type something into the prompt bar, like "Tell me about the Stoics" and you'll get an answer, which you can then expand with follow-up prompts, like "Explain that to me like I'm a 6-year old". The answers you'll get from the two chatbots are very similar.

What you'll notice most is that DeepSeek is limited by not containing all the extras you get with ChatGPT. For instance, you'll notice that you can't generate AI images or video using DeepSeek, and you don't get any of the tools that ChatGPT offers, like Canvas or the ability to interact with customized GPTs like "Insta Guru" and "DesignerGPT".

If you are a ChatGPT Plus subscriber then there are a variety of LLMs you can choose when using ChatGPT. In DeepSeek you just have two – DeepSeek-V3 is the default and if you want to use its advanced reasoning model you have to tap or click the 'DeepThink (R1)' button before entering your prompt.

ChatGPT comes with AI image generation abilities. DeepSeek does not. (Image credit: Apple/OpenAI/DeepSeek)

There are also fewer options in the settings to customize in DeepSeek, so it is not as easy to fine-tune your responses. In short, DeepSeek feels very much like ChatGPT without all the bells and whistles. We tested both DeepSeek and ChatGPT using the same prompts to see which we preferred.

DeepSeek and ChatGPT: key differences

DeepSeek: free to use, much cheaper APIs, but only basic chatbot functionality.
ChatGPT: requires a subscription to Plus or Pro for advanced features.

One of the best features of ChatGPT is its ChatGPT search feature, which was recently made available to everybody in the free tier to use. This allows you to search the web using its conversational approach. DeepSeek also features a Search feature that works in exactly the same way as ChatGPT's.

Finally, you can upload images in DeepSeek, but only to extract text from them. ChatGPT, on the other hand, is multi-modal, so you can upload an image and it will answer any questions you have about it.

DeepSeek search and ChatGPT search: what are the main differences?

AI search is one of the coolest uses of an AI chatbot we've seen so far. It enables you to search the web using the same sort of conversational prompts that you normally engage a chatbot with.

Just like ChatGPT, DeepSeek has a search feature built right into its chatbot. Just tap the Search button (or click it if you are using the web version) and then whatever prompt you type in becomes a web search. It couldn't get any easier to use than that, really.

Once you've performed your search, say for Pizza restaurants in your city, you can ask follow-up questions, like "From those, if you had to choose one restaurant, which one would it be?"

Using DeepSeek's search capabilities. (Image credit: DeepSeek)

DeepSeek will respond to your question by recommending a single restaurant, stating its reasons. It's this ability to follow up the initial search with more questions, as if it were a real conversation, that makes AI search tools particularly useful.

Both ChatGPT and DeepSeek enable you to click to view the source of a particular recommendation; however, ChatGPT does a better job of organizing all its sources to make them easier to reference, and when you click on one it opens the Citations sidebar for easy access. In contrast, DeepSeek is a bit more basic in the way it delivers search results.

How to use DeepSeek-R1 for deeper reasoning

For harder questions you might want to switch to the R1 LLM. (Image credit: DeepSeek)

DeepSeek-R1 is an advanced reasoning model, which is on a par with the ChatGPT-o1 model. These models are better at math questions and questions that require deeper thought, so they usually take longer to answer; however, they present their reasoning in a more accessible fashion.

To use R1 in the DeepSeek chatbot you simply press (or tap if you are on mobile) the 'DeepThink(R1)' button before entering your prompt. The button is on the prompt bar, next to the Search button, and is highlighted when selected.

When you ask your question, you'll notice that it's slower to answer than normal. You'll also notice that DeepSeek appears to have a conversation with itself before it delivers its answer. That's so you can see the reasoning process it went through to get there. It's quite fascinating!

How to switch to DeepSeek from ChatGPT

One thing to bear in mind before dropping ChatGPT for DeepSeek is that you won't have the ability to upload images for analysis, generate images or use some of the breakout tools like Canvas that set ChatGPT apart.

If all you want to do is ask questions of an AI chatbot, generate code or extract text from images, then you'll find that currently DeepSeek would seem to satisfy all your needs without charging you anything.

We also found that we got the occasional "high demand" message from DeepSeek that resulted in our query failing. However, DeepSeek is currently completely free to use as a chatbot on mobile and on the web, and that's a great advantage for it to have.

You might also like
Categories: Technology

The weirdest omission from the Samsung Galaxy S25 launch? Samsung and Google's new Dolby Atmos-busting sound tech

Mon, 01/27/2025 - 10:55
  • Eclipsa Audio is a Dolby Atmos rival from Samsung and Google
  • It's in Samsung's 2025 TVs and soundbars, Chrome and YouTube
  • But it isn't in Samsung's earbuds or phones yet…

Imagine you've created an amazing new platform for audio, and you want the world to use it. And imagine that you've also got the world's eyes on you because you're launching one of the world's most desirable smartphones. Would you:

(a) Use the phone launch to promote your amazing new audio?

or (b) Not do that?

Surprisingly, Samsung chose (b) for its launch of the Samsung Galaxy S25. We really thought we'd be seeing (and hearing) support for Eclipsa Audio, Samsung and Google's rival to Dolby Atmos. But no. And that's really weird.

A total Eclipsa

It's really weird because we know that Eclipsa is coming to Android. It's in a coming-soon version of the Android Open Source Project (AOSP). But software support is only part of what you need to launch a new format. You need people to know about it too, and most of all you need people to be excited about it. And the best way to do that is to let people listen to it.

The Samsung Galaxy S25 launch would have been a great opportunity to make the hype train at least start to choo-choo – and Samsung has already started to talk about Eclipsa in its other products, because it's coming to its 2025 soundbars and TVs. But the Android audio market is potentially much bigger than the soundbar one, and there's still no sign of Eclipsa's arrival.

You could say – and I'm sorry for what I'm about to type – that there's been a total Eclipsa so far.

We really thought Samsung would use the Unpacked event to talk about Eclipsa, and to announce an update for the Samsung Galaxy Buds 3 Pro to support it, since you also need something to listen on.

I suspect one of the reasons Samsung didn't do that is that there isn't much to listen to that uses the format. None of the best streaming services support it yet, other than YouTube, where support is still to come.

As my colleague Matt Bolton wrote earlier this month, even if Samsung had announced Eclipsa Audio support it still needs more: "Samsung's support alone won't be enough to build momentum for Eclipsa – it really needs to get the hottest headphones makers for all budgets on board to make it feel like a must-have feature."

But in the phones and earbuds world right now, Eclipsa doesn't even seem to have Samsung (or Google, for that matter). At least not yet. Perhaps the inevitable August Samsung Unpacked will see the planets align for Eclipsa.

You might also like
Categories: Technology

AMD exec hints that discrete RDNA 4 GPUs won’t be in gaming laptops anytime soon, leaving Nvidia’s RTX 5000 cards unchallenged

Mon, 01/27/2025 - 10:53
  • AMD’s Ben Conrad was interviewed by Notebookcheck.net
  • The exec was asked about the prospects for RDNA 4 laptops in the future
  • Conrad’s reply was vague, but clearly hints that we shouldn’t expect anything on the RDNA 4 mobile front in the near future

AMD’s RDNA 4 graphics cards are coming to desktop PCs soon, in March 2025, but if you were hoping these next-gen GPUs could be in one of the best gaming laptops in anything like the near future, well, you can seemingly forget about that idea.

This nugget of news comes from an interview that Notebookcheck.net conducted with AMD’s Ben Conrad, who is Director of Product Management for Premium Mobile Client at the company (via VideoCardz).

The tech site asked the following question: “Do you see prospects for RDNA 4 laptops going ahead? Unfortunately, the number of AMD dGPU-based laptop SKUs have been pretty anemic.”

Conrad replied: “Our current graphics strategy is focused on the desktop market with RDNA 4. So, I think you’ll see those types of products first in the future. Certainly, RDNA 4 and future graphics technologies will make it into mobile, whether they be on APUs or future products.”

To clarify, dGPU-based laptops means notebooks with discrete graphics cards, meaning a separate GPU, rather than integrated graphics (built into the processor, which is the solution a good number of laptops run with, due to space constraints and thermal factors).

So, the idea of a discrete mobile RDNA 4 graphics card – the laptop equivalent of the RX 9070 desktop card, say – is not something on AMD’s nearer-term radar. We may get RDNA 4 products for mobile eventually, but Conrad is pretty vague about when that might happen, which makes it sound like it’s something that’s on the backburner for now.

For 2025, then, it looks like the beefy GPUs for gaming laptops are going to be Nvidia’s RTX 5000 mobile graphics cards, and they won’t be challenged by any discrete RDNA 4 offerings.

(Image credit: Future / John Loeffler)

Analysis: Not a surprise, really

Is this a big surprise? Not exactly, because Radeon hasn’t been a huge presence in discrete laptop GPUs anyway, and on top of that, RDNA 4 has been a weird generation from AMD. By which I mean that on the desktop, it was purportedly cut back to mid-range GPUs as the fastest offerings (rumors suggest a high-end solution was initially on the table), with Team Red seemingly focusing on making bigger moves with the next generation. (That could be RDNA 5, or maybe UDNA instead, which is thought by some to be the next step on AMD’s graphics roadmap.)

There have also been longstanding rumors that RDNA 4 is not planned for laptops for quite some time, even in the form of APUs with integrated graphics. AMD’s big new APUs (including Strix Halo, which there’s some major excitement around) are using an integrated GPU that’s RDNA 3.5 (a somewhat honed refresh of RDNA 3), not RDNA 4.

Indeed, the rumor mill has previously put forward the idea that RDNA 3.5 (or RDNA 3+ as it’s alternatively known) will be used in integrated GPUs in AMD’s APUs for this year, and in 2026.

Of course, all this is very much deep into the realm of speculation, but Conrad’s comments here certainly fit with the idea that RDNA 4 is going to be desktop GPUs only for the foreseeable future.

That doesn’t mean nothing exciting is happening with AMD on the laptop front, of course, because Strix Halo is certainly a huge development, but for thinner gaming laptops with integrated graphics, which are very different beasts to larger notebooks with discrete GPUs. Still, the claim is that the integrated graphics in the Ryzen AI Max+ 395 (Strix Halo) APU outdoes the RTX 4070 laptop graphics card, an eyebrow-raising assertion.

You might also like...
Categories: Technology

These retro headphones and speaker designs really bring the ’80s cool – AIAIAI and design agency Brain Dead went old-school for this one

Mon, 01/27/2025 - 10:49
  • Limited edition Tracks headphones in collaboration with Brain Dead
  • Even more limited edition Unit-4 wireless studio monitors
  • Headphones are $70 / £60; speakers are $400 / £330

Looking at the image above you might think we've been hurled into a DeLorean and taken back to a sci-fi themed '80s club. But these headphones are as contemporary as headphones get.

The headphones are a new, limited edition version of AIAIAI's Tracks headphones, made in collaboration with creative fashion brand Brain Dead – a firm AIAIAI has worked with previously, with their collaboration selling out in minutes. And there's a matching speaker, a "highly limited edition" of the AIAIAI Unit-4 studio monitor.

(Image credit: AIAIAI / Brain Dead)

What does this creative collaboration deliver?

As with a lot of Brand X Brand team-ups, these are existing products given a makeover. And that's no bad thing. While the Tracks headphones may look like the orange-felt horrors that came with early '80s Walkman cassette players, they won't sound like them: their 40mm drivers have been praised for their surprisingly bold bass.

It's the same story with the Unit-4, which has been praised by audio experts: they're excellent near-field studio monitors whose low-lag wireless capability makes them stand out from the crowd (this is something of a specialism for AIAIAI, which makes wireless low-lag DJ headphones too).

The collaboration was intended to "highlight the rich music culture of Los Angeles", and the launch event this past weekend gave all its proceeds to the California Fire Fund. There's also a compilation tape for people further away to contribute to the cause.

The Brain Dead versions of the Tracks headphones and Unit-4 speaker will be available from AIAIAI, Brain Dead and selected retailers from January 28. The Tracks are $70 / £60 / about AU$111 and the Unit-4 are $400 / £330 / about AU$635.

You might also like
Categories: Technology

I'm absolutely sick of Microsoft's Windows 11 24H2 update, as it's now causing Bluetooth and webcam issues

Mon, 01/27/2025 - 10:41
  • Microsoft's Windows 11 24H2 update continues to affect PC users, with new Bluetooth and webcam issues
  • Windows is failing to detect webcams and has audio playback issues on Bluetooth audio devices
  • The latest KB5050009 (24H2) and KB5050021 (23H2) builds appear to be responsible

It's not a secret that Microsoft's Windows 11 24H2 has been a massive problem for plenty of PC users, with bugs and functionality issues present across the board, which have more recently affected gamers - and now, it looks like a new set of issues has arrived to cause more frustration.

As reported by Windows Latest, the most recent 24H2 KB5050009 build is leaving users with webcams that aren't being detected and Bluetooth audio devices no longer working even when connected to a system. I can corroborate this last point, as Windows 11 shows Bluetooth audio devices as connected, but without any audio, leaving headphones and more useless for the time being.

Windows Latest states that this audio issue occurs on both KB5050009 (24H2) and KB5050021 (23H2), along with USB DAC ports not working with the latter.

The only fix for the Bluetooth issue (as of now) is uninstalling the recent update, while the webcam problems reportedly require a reinstallation of Windows 11. Considering the effort both supposed fixes take, it might be better to wait for Microsoft to address the matter with a patch - though there's no guarantee that won't introduce new issues as well.

The biggest problem here is that it can be hard to diagnose some of the reported issues - there’s an abundance of different PC setups, with different applications installed which could be responsible for some bugs. In this case, it appears that it's indeed a widespread issue (notably with webcams), with users of Microsoft's Feedback Hub voicing their complaints.

(Image credit: Shutterstock / Melnikov Dmitriy)

Alright, it's time to stick with SteamOS via Bazzite for now...

If Windows 11 24H2's update breaking multiple games wasn't bad enough already, the Bluetooth issues have taken matters to a new level of frustration. Since I use Windows 11 primarily for gaming, this has hit me - and audio playback on Bluetooth devices isn't the only issue I've run into, as the taskbar and quick settings both become unresponsive (even after pressing the Windows key), forcing me to restart the system.

This isn't ideal for gaming or even general usage, especially when coupled with random slowdowns and stutters in multiple games - Ubisoft was forced to update its Assassin's Creed games, which were facing constant crashes. It's already a pain having to deal with PC ports and figuring out what might be the cause behind occasional stutters and more, and 24H2 has added to this.

Until these problems are fixed, I feel like I need another operating system to run my games on. Since Bazzite (and Valve's SteamOS) is far more suited to a handheld gaming PC, I would be cautious about installing it on a desktop PC - and I've only tested Bazzite on the Asus ROG Ally - but if it means I have to use my handheld over my powerful desktop machine while Microsoft puts these issues to bed, then so be it.

You may also like...
Categories: Technology

This ransomware gang is using SSH tunnels to target VMware appliances

Mon, 01/27/2025 - 10:29
  • Researchers find hackers using VMware ESXi's SSH tunneling in attacks
  • The campaigns end up with ransomware infections
  • The researchers suggested ways to hunt for indicators of compromise

Cybercriminals are using SSH tunneling functionality on ESXi bare metal hypervisors for stealthy persistence, to help them deploy ransomware on target endpoints, experts have warned.

Cybersecurity researchers from Sygnia have highlighted how ransomware actors are targeting virtualized infrastructure, particularly VMware ESXi appliances, enterprise-grade, bare-metal hypervisors used to virtualize hardware, enabling multiple virtual machines to run on a single physical server.

They are designed to maximize resource utilization, simplify server management, and improve scalability by abstracting the underlying hardware. As such, they are considered essential in data centers, cloud infrastructures, and virtualization solutions. They also offer a tunneling feature, allowing users to securely forward network traffic between a local machine and the ESXi host over an encrypted SSH connection. This method is commonly used to access services or management interfaces on the ESXi host that are otherwise inaccessible due to network restrictions or firewalls.

Attacking in silence

The researchers say ESXi appliances are relatively neglected from a cybersecurity standpoint, and as such have been a popular target for threat actors seeking to compromise corporate infrastructure. Since they’re not that diligently monitored, hackers can use them stealthily.

To break into the appliance, crooks would either abuse known vulnerabilities, or log in using compromised admin passwords.

“Once on the device, setting up the tunneling is a simple task using the native SSH functionality or by deploying other common tooling with similar capabilities,” the researchers said.

“Since ESXi appliances are resilient and rarely shutdown unexpectedly, this tunneling serves as a semi-persistent backdoor within the network.”

To make matters worse, logs (the cornerstone of every security monitoring effort) are not as easy to track as on other systems. According to Sygnia, ESXi distributes logs across multiple dedicated files, which means IT pros and forensic analysts need to combine information from different sources.

That said, the researchers note that IT pros should look into four specific log files to detect possible SSH tunneling activity.
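To give a rough sense of what that hunting might look like in practice, here is a small sketch. The log file paths and search patterns below are assumptions based on commonly documented ESXi log locations, not the four files Sygnia identified, so treat it as a starting point rather than a detection rule.

```python
# Rough sketch of hunting for SSH tunneling indicators across ESXi log files.
# The file paths and patterns are assumptions based on commonly documented
# ESXi log locations, not Sygnia's actual detection logic.
import re
from pathlib import Path

# Typical ESXi log files worth correlating (assumed; adjust for your environment).
LOG_FILES = [
    "/var/log/auth.log",    # SSH logins and authentication events
    "/var/log/shell.log",   # commands run in the ESXi shell
    "/var/log/hostd.log",   # host management activity
    "/var/log/vobd.log",    # system events, including SSH service changes
]

# Strings that may indicate SSH being enabled or used for port forwarding.
PATTERNS = [
    re.compile(r"SSH .*enabled", re.IGNORECASE),
    re.compile(r"ssh\b.*\s-[RLD]\b"),          # ssh invoked with forwarding flags
    re.compile(r"port forward", re.IGNORECASE),
]

def scan(path: str) -> list[str]:
    """Return log lines in `path` that match any suspicious pattern."""
    file = Path(path)
    if not file.exists():
        return []
    hits = []
    for line in file.read_text(errors="ignore").splitlines():
        if any(p.search(line) for p in PATTERNS):
            hits.append(f"{path}: {line.strip()}")
    return hits

if __name__ == "__main__":
    for log in LOG_FILES:
        for hit in scan(log):
            print(hit)
```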

Via BleepingComputer

You might also like
Categories: Technology

Meta Llama LLM security flaw could let hackers easily breach systems and spread malware

Mon, 01/27/2025 - 10:08
  • Security researchers find way to abuse Meta's Llama LLM for remote code execution
  • Meta addressed the problem in early October 2024
  • The problem was using pickle as a serialization format for socket communication

Meta's Llama Large Language Model (LLM) had a vulnerability which could have allowed threat actors to execute arbitrary code on the flawed server, experts have warned.

Cybersecurity researchers from Oligo Security published an in-depth analysis about a bug tracked as CVE-2024-50050, which according to the National Vulnerability Database (NVD), carries a severity score of 6.3 (medium).

The bug was discovered in a component called Llama Stack, designed to optimize the deployment, scaling, and integration of large language models.

Meta issues a fix

Oligo described the affected version as “vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized."

NVD describes the flaw like this: “Llama Stack prior to revision 7a8aa775e5a267cf8660d83140011a0b7f91e005 used pickle as a serialization format for socket communication, potentially allowing for remote code execution”.

“Socket communication has been changed to use JSON instead,” it added.

The researchers tipped Meta off about the bug on September 24, and the company addressed it on October 10 by pushing version 0.0.41. The Hacker News notes the flaw has also been remediated in pyzmq, a Python library that provides access to the ZeroMQ messaging library.

Together with the patch, Meta released a security advisory in which it told the community it had fixed a remote code execution risk associated with using pickle as a serialization format for socket communication. The solution was to switch to the JSON format.
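The class of bug is general to Python, so a tiny illustration helps. The snippet below is not Llama Stack's code; it simply shows why deserializing untrusted bytes with pickle is dangerous (unpickling can invoke arbitrary callables) while parsing a message as JSON only ever yields plain data.

```python
# Illustration of the general bug class, not Llama Stack's actual code:
# unpickling attacker-controlled bytes can execute code, parsing JSON cannot.
import json
import pickle


class Malicious:
    # pickle consults __reduce__ when serializing; on load, the returned callable runs.
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling!",))


attacker_bytes = pickle.dumps(Malicious())

# Unsafe: deserializing untrusted bytes with pickle executes the embedded callable.
pickle.loads(attacker_bytes)

# Safer: JSON can only produce plain data structures, never executable objects.
message = json.loads('{"task": "inference", "prompt": "hello"}')
print("JSON gives back plain data:", message)
```

Swapping the wire format to JSON, as Meta did, removes this class of risk entirely, at the cost of having to define explicit serialization for any richer objects.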

LLaMA, or Large Language Model Meta AI, is a series of large language models developed by social media giant Meta. These models are designed for natural language processing (NLP) tasks, such as text generation, summarization, translation, and more.

More from TechRadar Pro
Categories: Technology

Apple's banking on camera-equipped AirPods to trounce Meta smart glasses, but I'm not sure it's enough

Mon, 01/27/2025 - 09:50
  • Mark Gurman's latest Power On newsletter notes 'camera-equipped AirPods'
  • AirPods are just one wearable product Apple allegedly wants to fit with cameras
  • The question is, are IR cameras enough to beat Meta and Samsung?

Like the idea of your AirPods seeing what you see – even if it's just the same tired faces on your daily commute? Apple CEO Tim Cook clearly does, although AirPods are just one Apple product the Cupertino giant is reportedly looking to equip with cameras.

That's according to Mark Gurman's latest Power On newsletter (Sunday, January 26), anyway. The report states that lukewarm demand for Apple's bulky Vision Pro has led Apple execs to concentrate their efforts on AR glasses as the "superior" option, but that those same people in the know at Apple "don’t think a (glasses) product will be ready for three years or more."

So, in the meantime, the company is exploring other types of wearable products that could benefit from cameras, including – but not limited to – camera-equipped AirPods.

This isn't the first time Gurman's mentioned it, either. In February 2024, the noted Apple analyst reported that Apple was exploring AirPods with cameras. Also, six months ago, fellow analyst Ming-Chi Kuo claimed future additions to the best AirPods could include infrared cameras, designed to pair with your Vision Pro headset and thus create a spatial audio experience to trounce any Meta Orion rivals. And as we know, in the world of rumors and predictions, two noted tipsters saying the same thing is infinitely better than one…

We were warned not to get too excited, though; IR-enabled AirPods aren't going to be available imminently, with mass production not expected to start until 2026 (which had us wondering in October whether Apple can still beat Meta's smart glasses by adding cameras and AI to AirPods Pro).

And just five days ago (January 22), as if to add insult to injury, it was reported that Meta and Samsung are also looking into earbuds with cameras, following Apple's lead with AirPods.

Can Apple win the race to put its eyes in your ears – and will it matter?

With camera-equipped AirPods possibly two to three years away, it feels like the door's wide open for rivals to step in and further dominate the AR space. Even if Meta and Samsung are playing catch-up on actually putting IR cameras in earbuds, the Meta Orion has been slated for a 2027 release (alongside Oakley smart glasses for athletes), and that's just for starters.

Another camera-enhanced rival product that jumps to mind is the hotly-anticipated improved Meta Ray-Ban smart glasses, which could land in 2025 – with a "single small in-lens screen." No, they won't be the full-ticket (and very impressive) Meta Orion AR glasses prototype, but Meta's next step will almost certainly have Apple execs a little rattled.

What about Apple's other plans for its AirPods? Well, the long-promised heart-rate monitor could be one step closer as the company seeks to make its earbuds the most capable on the market for tracking your health. But it's unlikely that heart-rate monitoring will arrive in AirPods Pro 3 because (also according to Gurman late last year) although Apple has made great strides in this area, the accuracy isn't quite there yet.

Back to the notion of cameras in your AirPods quickly, though, and – because it may not be immediately apparent why you'd want eyes in your ears – it's best to think of them working in conjunction with your other Apple tech rather than simply AirPods that can see.

For example, an IR camera might perform the same function as capacitive sensors for gesture control while offering a wider field of vision for your Vision Pro. Your all-seeing AirPods might also feed data to your Apple Watch or perhaps ping information to your iPhone in future versions of Apple Intelligence – hopefully not just targeted ads about the bar, store, or gym you just glanced at, but let's acknowledge the thought.

Finally, putting cameras in your listening gear could greatly benefit Apple's purported (albeit three years in the pipeline) AR glasses. It goes without saying that they'll need to be light and comfortable to compete. One way of shaving a few grams off the frame? Put the camera in your ears.

Will it be enough to beat the competition, given AirPods' undeniable popularity – and most importantly, should we all take a leaf out of the great David Bowie's book and sit right down, waiting for the gift of sound and vision? I'm not so sure, but then again, it wouldn't be the first time Apple's arrived late to the party and then walked all over it…

You may also like
Categories: Technology

One of the biggest flaws exploited by Salt Typhoon hackers has had a patch available for years

Mon, 01/27/2025 - 09:30
  • A security vulnerability in Microsoft Exchange servers remains largely unpatched
  • A fix was issued four years ago, but some users clearly didn't update
  • This flaw may have aided the hacking group Salt Typhoon

Critical security vulnerabilities seem to be a regular occurrence in technology reporting, with countless patches and updates to keep track of - but this Microsoft Exchange Server flaw might be one to take very seriously.

Most of us will be familiar with the major incident in which nine US telecom giants were breached in what appeared to be a Chinese state-sponsored cyber-espionage campaign. The attack, attributed to hacking group Salt Typhoon, is said to have, at least in part, exploited a known critical security flaw in Microsoft Exchange Server.

The vulnerability, nicknamed ProxyLogon, was disclosed by Microsoft in 2021, and a patch has been available for four years. Despite this, cyber-risk management company Tenable has calculated that of nearly 30,000 instances affected by ProxyLogon, 91% remain unpatched.

CISA guidance

The US Cybersecurity and Infrastructure Security Agency (CISA) previously released in-depth guidance on strengthening visibility and hardening systems and devices in response to the breach, and has emphasized end-to-end encryption for secure communications.

ProxyLogon is one of five commonly exploited vulnerabilities used by Salt Typhoon. Others include Ivanti Connect Secure Command Injection and Authentication Bypass vulnerabilities, as well as a Sophos Firewall Code Injection vulnerability.

In light of this, the recommendation and advice for any security teams out there is to always patch where available, and keep as up to date as possible on any software for potential vulnerabilities or fixes.

“In light of the vulnerabilities exposed by Salt Typhoon, we need to take action to secure our networks,” said Federal Communications Commission Chairwoman Jessica Rosenworcel.

“Our existing rules are not modern. It is time we update them to reflect current threats so that we have a fighting chance to ensure that state-sponsored cyberattacks do not succeed. The time to take this action is now. We do not have the luxury of waiting.”

You might also like
Categories: Technology

Juniper VPN gateways targeted by stealthy "magic" malware

Mon, 01/27/2025 - 09:25
  • Security researchers spot new piece of malware called J-magic
  • It listens to traffic in anticipation of a "magic packet"
  • Once detected, J-magic initiates the deployment of a backdoor

Hackers have been found targeting companies in the semiconductor, energy, manufacturing, and IT sectors, with a unique piece of malware called J-magic, experts have warned.

A new report from the Black Lotus Labs team at Lumen Technologies revealed unnamed threat actors repurposed cd00r - a stealthy backdoor Trojan that provides unauthorized access to a system, originally created as an open source proof-of-concept for educational and research purposes in cybersecurity.

The repurposed Trojan, dubbed “J-magic”, was being deployed to enterprise-grade Juniper routers serving as VPN gateways. The researchers don’t know how the endpoints got infected, but in any case, the Trojan sat silently until the attackers sent it a “magic” TCP packet.

SeaSpy2 and cd00r

“If any of these parameters or “magic packets” are received, the agent sends back a secondary challenge. Once that challenge is complete, J-magic establishes a reverse shell on the local file system, allowing the operators to control the device, steal data, or deploy malicious software,” the researchers explained.

The campaign was first spotted in September 2023, and lasted roughly until mid-2024. Black Lotus could not say who the threat actors were, but said that elements of the activity “share some technical indicators” with a subset of prior reporting on a malware family named SeaSpy2.

“However, we do not have enough data points to link these two campaigns with high confidence,” they said.

In any case, SeaSpy2 is also built on cd00r, and works in a similar fashion - scanning for magic packets. This persistent, passive backdoor, masquerading as a legitimate Barracuda service called "BarracudaMailService," allows threat actors to execute arbitrary commands on compromised Barracuda Email Security Gateway (ESG) appliances.

SeaSpy was apparently built by UNC4841, a Chinese threat actor.

Via BleepingComputer

You might also like
Categories: Technology

The future of mobile browsers: time for a new model?

Mon, 01/27/2025 - 09:19

As the CMA scrutinizes Apple's WebKit restrictions and Google's revenue-sharing deals in the UK, the DOJ proposes Chrome's sale in the US. Could other browsers finally get their chance to innovate?

The mobile browser market isn't working as well as it could for businesses and millions of phone users – that's the verdict not only from the UK's Competition and Markets Authority's (CMA) recent investigation, but also from the US Department of Justice (DOJ), which has proposed forcing Google to sell off Chrome as part of antitrust remedies. As the founder of a privacy and security-focused browser, I've watched this space evolve, and these findings reflect many of the challenges we've observed firsthand.

The numbers tell a striking story. In the UK alone, Safari commands an 88 per cent share of mobile browsers on iOS, while Chrome holds 77 per cent on Android. This dominance mostly stems from two significant market dynamics. The first of these is Apple's requirement that all iOS browsers must use its WebKit engine. This limits what developers can achieve – a restriction that notably doesn't exist on Apple's desktop operating system macOS, or on Android devices.

It's important to note that these limitations can be major, leading to one of the CMA's main points – the stifling of mobile browser innovation. They directly impact what we can offer users. When browsers want to implement additional security features like 'Safebrowsing mode' for warning users about dangerous sites, or 'site isolation' for protecting against malicious attacks, for example, they're constrained by WebKit's limitations. Even large companies like Meta face these restrictions – the social media giant is unable to build its own customized in-app browser experience on iOS, despite having millions of users through apps like Facebook and Instagram.

To be clear, WebKit is an undoubtedly capable engine, and is one of several excellent options alongside other offerings like Chromium and Mozilla's Gecko. But the issue isn't about WebKit's quality. Rather, it's about the lack of choice. Apple maintains that this requirement is necessary for security, which is a valid concern. Nobody wants malicious browsers accessing banking apps or stealing data, after all. However, Apple already has robust certification processes in place for browsers, including special certificates and extensive checks. As browser developers, we're open to additional security measures – whether that's source code audits or Apple employees visiting our offices to verify our practices. The mechanisms for ensuring browser security already exist – they just need to be applied to non-WebKit browsers as well.

Stifling innovation

The second, and perhaps more fundamental issue, involves the revenue-sharing arrangement between Google and Apple for mobile browsers on iOS. According to the CMA's findings, Google pays Apple a significant share of the search advertising revenue earned from traffic on both Safari and Chrome on iOS. This creates an unusual competitive dynamic – the revenue share that each company earns from their competitor's product is lower, but similarly significant to what they earn from their own browsers.

Or to put it another way – the current arrangement fundamentally reduces the financial incentives for browser competition on iOS. The motivation to compete aggressively with innovative features is diminished when the financial gain from winning new users is minimal. This helps explain why, despite the mobile web being such a crucial part of our daily lives, we've seen relatively limited innovation in core browsing functionality. The CMA's investigation marks the first time a major regulator has specifically scrutinized these revenue-sharing arrangements, and their impact on browser competition. With the UK's Digital Markets, Competition and Consumers Act coming into force in 2025, these findings could lead to significant changes in how mobile browsers operate and compete in the UK.

This landmark legislation will establish a new digital competition regime similar to the EU's Digital Markets Act, while also expanding consumer protection through updated commercial practice regulations and strengthened consumer rights. Crucially, it will also give the CMA direct enforcement powers, including the ability to fine businesses for breaches. Ultimately, this could finally address the competition issues that have long concerned smaller browser developers, creating an environment more conducive to innovation.

The impact of these revenue-sharing arrangements has caught regulators' attention beyond the UK. In the United States, the Department of Justice has proposed that Google sell Chrome as part of remedies following an antitrust ruling. The DOJ's proposal would prohibit Google from offering money to make its search engine the default on any platform – a move that could dramatically reshape the browser market's economics. This aligns with growing global scrutiny of how search revenue influences browser competition. The DOJ's proposal goes even further than the CMA's investigation, suggesting Google should be required to share its search index with competitors at marginal cost. Such changes could accelerate the transition toward new business models in the browser market.

For browsers, the impact of potential legislative change could be huge. Consider Mozilla Firefox, which has been on the market for two decades. Its 2021-2022 financial statements reveal the sheer scale of these arrangements – of Mozilla's $593 million in revenue, $510 million came from its Google search deal. That works out to roughly 86 per cent of revenue that browser companies like Mozilla could lose overnight if these arrangements were to end. Such a change would force a fundamental rethink of how browsers operate – they would need to either adopt subscription models, aggressively increase donations, or resort to advertising, which could compromise user privacy. With that in mind:

What might a more competitive future look like?

I envision a landscape where browser choice is more transparent and accessible to users. When someone gets a new phone, they should be presented with clear, unbiased options for their default browser. Android already demonstrates that this is possible, allowing browsers to use different engines and giving users more control over their browsing experience.

With regulators on both sides of the Atlantic targeting search revenue arrangements, the shift toward subscription-based models may come sooner than expected. While this might initially raise eyebrows given the prevalence of free browsers, it's worth considering the broader context. Many users already pay separately for VPNs, password managers, and privacy tools. A browser subscription wouldn't necessarily mean yet another monthly payment – instead, it could consolidate these existing services into one comprehensive package. This model enables browsers to focus purely on user interests rather than advertising revenue. Innovation, in short, becomes more feasible when browsers can invest in development without relying on search engine revenue sharing.

We're already seeing examples of what's possible when browsers prioritize user needs. Our browser, for example, recently introduced a Cookie Consent Management feature which automatically handles cookie preferences based on user settings. This addresses a real pain point – the constant interruption of cookie pop-ups – while maintaining true privacy protection rather than just hiding the prompts. The future of mobile browsing isn't just about technical capabilities, however – it's about creating an environment where innovation can flourish while respecting user privacy and choice. Whether through regulatory changes or evolving business models, the goal should be to enable browsers to compete on their merits, offering users genuine alternatives rather than variations on the same theme.

We're at a pivotal moment where the decisions made by regulators and industry players will shape how we access the internet for years to come. The CMA's and the DOJ's investigations have highlighted the challenges, but it's up to browser developers, platform holders, and users to work together to create a more open, innovative, and user-focused mobile web. And here's hoping they do.

We've featured the best secure browser.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Centralize your risk response – the need for a Risk Operations Center

Mon, 01/27/2025 - 09:12

In The Boscombe Valley Mystery by Arthur Conan Doyle, Sherlock Holmes comments that, “There is nothing more deceptive than an obvious fact.” When it comes to risk, it’s obvious that companies should want to remove or reduce risk as much as possible. But the process – how you actually carry out the actions to eliminate risk, and how you collaborate to make that risk reduction work across the business – is not obvious. To improve this, we have to look at how we consider risk across the whole organization. This requires a Risk Operations Center, or ROC.

What's in a name?

When CISOs hear the phrase “Risk Operations Center” they invariably ask, “How is a ROC different from a Security Operations Center?” Let’s begin answering this question with a concise definition for what a ROC aims to achieve: A ROC orchestrates risk elimination.

I can hear risk purists objecting, “You can never eliminate risk – only control it!” I have two responses. First, I was being purposefully terse as a means of easing readers into a fuller definition. Second, I suspect objections stem from not completely aligning on terms. Let’s fix that by defining what I mean by risk and elimination.

For the definition of risk I will turn to How To Measure Anything In Cybersecurity Risk: “Risk is a state of uncertainty where some of the possibilities could lead to loss, catastrophe, or some other undesirable outcome.” Here is a thought experiment: If you completely remove your uncertainty, have you eliminated risk?

Imagine I am driving an SUV. I’ve just been told there is a small tunnel ahead. I don’t know what small means in this context. I just know I’m driving a high occupancy vehicle full of children and my spouse. I’m now uncertain if my vehicle will fit. As I approach the tunnel I see a sign overhead that says the tunnel is twenty feet by twenty feet – and I can also clearly see that my SUV will fit. Using “measurement” I just eliminated my state of uncertainty about possible future “loss, catastrophe or some other undesirable outcome.”

Risk measurement moderates our uncertainty. We define risk measurement as “a quantitatively expressed reduction of uncertainty based on one or more observations.” In the case of tunnel versus SUV, my state of uncertainty was reduced by 100%. Unfortunately, in business environments, not all risks are this cut and dried, or easy to resolve with a single fact. This is why we need to clarify what elimination means.

For elimination I’m using its “risk-oriented” word origin. The Latin “eliminare” combines “ex,” meaning “off or out,” with “limen” (ablative “limine”), a threshold or boundary. In short, to eliminate risk is to set a boundary or limit that should not be exceeded. This ties nicely with the concept of a cyber insurance limit and risk tolerance. Indeed, a limit is a mathematically unambiguous and contractually binding expression of business risk tolerance.

With our terms defined, I can expand my overly pithy ROC definition into something acceptable even to the most ardent of risk purists: “A ROC continuously orchestrates the remediation, mitigation and/or transfer of cybersecurity risk that may exceed business tolerance.”

What differentiates a ROC from a SOC is that the SOC is specifically focused on security alerts and on managing responses to issues within the technology stack. A ROC, conversely, takes all of that information and provides it to the whole business, including finance and compliance leaders, so the organization can manage and understand risk mitigation in that wider context.

What's a ROC platform?

I have the unique pleasure of engaging with CISOs and their teams around the globe on all things cybersecurity risk management. A growing majority of enterprise-level CISOs are attempting to stand up DIY ROCs, because they have to put risk into that wider business context. Perhaps you are one of these CISOs. How might you know?

One of the first tells is that you are aggregating comprehensive risk data into a data lake so you can make sense of it for risk purposes. That involves a complex data-handling pipeline: de-duplicating IT asset records, consuming full-stack vulnerability data and rationalizing disparate scores for those assets over time, integrating multiple threat intelligence feeds, and correlating compensating controls for risks that cannot be fixed immediately. Alongside this, you may be looking at how to use that data to trigger automated mitigation and remediation actions, whether through patching or through deploying best-practice frameworks – the sort of plumbing sketched below.
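To make the first of those chores concrete, here is a minimal, hypothetical Python sketch of de-duplicating asset records reported by two overlapping tools and rationalizing their different scoring scales onto a single 0-10 scale. The tool names, field names and the "keep the worst normalized score" rule are illustrative assumptions, not any particular product's schema or behavior.

from collections import defaultdict

# The same laptop reported by two different security tools, using different scoring scales.
raw_records = [
    {"tool": "scanner_a", "hostname": "fin-laptop-01", "serial": "C02XK1", "score": 7.5, "scale": "cvss"},
    {"tool": "scanner_b", "hostname": "FIN-LAPTOP-01", "serial": "C02XK1", "score": 82, "scale": "percent"},
]

def normalize(score, scale):
    # Map every tool's score onto an assumed common 0-10 scale.
    return score if scale == "cvss" else score / 10.0

def deduplicate(records):
    # Group records that describe the same physical asset, keyed on serial number.
    assets = defaultdict(list)
    for rec in records:
        assets[rec["serial"]].append(rec)
    merged = []
    for serial, recs in assets.items():
        merged.append({
            "serial": serial,
            "hostname": recs[0]["hostname"].lower(),
            "sources": [r["tool"] for r in recs],
            # Conservative choice for illustration: keep the worst (highest) normalized score.
            "risk_score": max(normalize(r["score"], r["scale"]) for r in recs),
        })
    return merged

print(deduplicate(raw_records))
# Prints one merged asset record combining both tools' views, with a single rationalized score.

In a real deployment this logic would sit inside the data lake's transformation layer rather than a standalone script, but the shape of the problem is the same.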

If this sounds familiar – even in part – then you are likely embarking on your own ROC journey. It's a non-trivial DIY proposition: the average enterprise-level firm has 76 security tools deployed, according to Panaseer, and getting each stage of this process right across every one of those investments is likely out of reach, even for teams with outsized budgets.

It is also essential to distinguish risk data from security data, because the two need to be thought of differently. I distinguish “risk data” from the threat-oriented “event data” that materializes in your SOC. SOC event data consists of streams of arrival timestamps with lightweight metadata; due to its volume and millisecond velocity, it’s invariably light on context. Event data is best persisted and modeled via time-series data structures and related analysis – similar to what is used in real-time trading and network analysis – and it exists for specific IT security decision-making around threats.

Risk data is the other end of the spectrum when it comes to context. Consider all the rich content and context that you are putting in place around those IT security signals so that other teams can use it, and how it turns into understandable metrics for the board.

At the same time, all this analysis and reporting is usually done in some form of OLAP structure enriched with high-context, graph-connected data. Indeed, graph comprehension is a must, as cloud-native data and other ephemeral “assets” aren’t IP-addressable. The days of first- and third-party assets always being tied to a machine – and its IP – are fading. It would seem that the only thing the industry can agree on when it comes to assets is that they are probably nouns.

You can still do time-series analysis with ROC data – for example, to baseline metrics and perform other forms of change analysis – but the event grain for ROC data is not meant to duplicate the log aggregation and observability solutions that back-end SOC systems create and consume. The ROC – and the data it provides – will carry out sophisticated analysis around potential risks and responses, but that complexity will be hidden from view so that the emphasis is on protecting value and reducing potential loss.
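To make that contrast in data shape concrete, here is a small, hypothetical Python sketch: a lean SOC event record versus a context-rich, graph-connected ROC asset record. The field names are assumptions chosen purely for illustration, not a prescribed schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SocEvent:
    # Lean, high-velocity event record: an arrival timestamp plus lightweight metadata.
    timestamp: float
    source_ip: str
    signature_id: str

@dataclass
class RocAssetNode:
    # Context-rich node in a risk graph: an asset tied to owners, controls and exposure.
    asset_id: str
    business_owner: str
    compensating_controls: List[str] = field(default_factory=list)
    connected_assets: List[str] = field(default_factory=list)  # graph edges to related assets
    risk_score: float = 0.0

event = SocEvent(timestamp=1738000000.0, source_ip="10.0.0.4", signature_id="ET-2029")
node = RocAssetNode(asset_id="payments-api", business_owner="Finance",
                    compensating_controls=["WAF rule 17"], connected_assets=["customer-db"])

The event type is the kind of thing you would persist in a time-series store; the asset node is what an OLAP or graph layer would enrich and traverse.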

What's different around risk?

The ROC actually sits at the nexus of value and loss exposure. Consider that a successful business is in the business of exposing more value, to more people, through more channels with higher velocities. In other words, businesses want to make more revenue and more profits. You can call this “digital and AI transformation,” but it is a process that every business will go through in the pursuit of growth. At the same time, any new venture or investment increases the potential risk back to the organization. In this sense, successful businesses are risk exposure machines.

The ROC sits at the center of your “risk surface,” where value flows in and losses flow out, and it controls the “loss exposure” portion of that flow. It does so using both human and/or artificially intelligent means of risk analysis. That analysis in turn automates actions (or enables workflows) for remediation, mitigation and risk transfer. Remediation and mitigation are controlled within the attack surface domain.

Alongside these technology elements, there are other controls that you would implement, like cyber insurance to transfer potential risk response outside the business, which is within the broader risk surface domain. This combination of security measures and cyber insurance for response is where you can take practical, proactive steps as a defender and invest in capabilities for controlling loss.

Are you ready to ROC?

The ROC is not your SOC. They work together, but at different levels of your overarching risk surface. The SOC exclusively operates on event data within the attack surface domain. And the ROC? It continuously orchestrates the remediation, mitigation and/or transfer of cybersecurity risk that may exceed business tolerance.

The interesting news is that enterprises are already feeling their way towards the concept of the ROC as they try to implement more effective risk controls. The challenge is that ROC implementation is still at an early stage, where DIY approaches remain nascent and partial compared to what companies actually need. Assembling a ROC will depend on collaboration within companies – between CISOs, CFOs and compliance teams in particular – but also between peers and vendors.

According to the Risk Management Association, cybersecurity risk is the number one issue that companies face in the coming years. The rise of cyber risk quantification continues to help in this process, yet many of these projects will fail because they do not get the right support or deliver effective risk data that the business can use. To overcome this challenge, ROC deployments ensure that risk data can be used to control and respond to risk as part of that wider business approach. Where the SOC should deliver insight for security operations, the ROC should deliver risk operations that cover the whole business.

We've featured the best encryption software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Global data consumption hit a major new high in 2024 - here's why

Mon, 01/27/2025 - 09:00
  • Global data consumption is up 15% YoY, 113% in five years, DE-CIX report claims
  • 2024 traffic stood at 68 exabytes, a staggering amount of data
  • Sporting events and video gaming are responsible

Global data traffic hit a new record of 68 exabytes in 2024, marking a considerable 15% jump from 2023’s 59 exabytes, a report from a major data handler has claimed.

New figures from DE-CIX show overall traffic has more than doubled since 2020, when the pandemic caused millions of workers to be sent home and adopt new hybrid and remote working patterns.

In 2020, global data traffic stood at just 32 exabytes – now, five years later, DE-CIX says we’ve seen 113% growth, with consumption rising to 38 exabytes in 2021, 48 exabytes in 2022, and 59 exabytes in 2023 along the way.
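As a quick sanity check, those figures are internally consistent: growth from 32 to 68 exabytes works out to (68 − 32) / 32 ≈ 112.5%, which rounds to the 113% headline number, while the year-on-year rise from 59 to 68 exabytes is (68 − 59) / 59 ≈ 15.3%, in line with the quoted 15%.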

Global data traffic rise

To put it into perspective, 2024’s 68 exabytes of data, exchanged across 3,400 global networks, would equate to a stack of paper 20 times higher than Mount Everest if printed. The same amount of data equates to streaming a high-definition video for two million years continuously.

The company also noted the impact of the UEFA Champions League and video gaming on internet traffic, with 2024 traffic peaking at 24.92 terabits per second on November 20, coinciding with multiple game launches.

Although gaming accounted for the largest share of traffic in 2024, peaking in the third and fourth quarters, video conferencing also saw a post-summer uptick with hybrid working as the 'new normal' despite tech giants’ best efforts to bring people back into the office.

The data comes from DE-CIX, the world’s leading internet exchange operator, which runs 60 locations across Europe, North America, South America, Africa, the Middle East, and Asia.

The news comes shortly after Cloudflare announced similar findings, claiming global internet traffic rose 17% year-over-year. Its study emphasized Google’s dominance of both the browser and the search markets, while also exploring the prevalence of artificial intelligence and social media.

You might also like
Categories: Technology

Is DeepSeek already down? ChatGPT rival and App Store king struggling to cope with demand

Mon, 01/27/2025 - 08:53
  • DeepSeek is now the most popular free app in the US and UK App Stores
  • But the ChatGPT rival appears to be struggling to cope with demand
  • The Chinese start-up is experiencing server outages and login issues

DeepSeek is the most popular app in the world right now and the AI chatbot might be struggling to meet demand.

The new ChatGPT competitor created by a Chinese start-up is experiencing service outages and the company's status page claims it is investigating possible causes.

The AI chatbot has gained worldwide acclaim over the last week or so for its incredible reasoning model that's completely free and on par with OpenAI's o1 model. According to the company's status page, there's an issue that is preventing users from signing up and accessing DeepSeek and its DeepThink R1 AI model.

While I've not experienced any issues with the app or website on my iPhone, I did encounter problems on my Pixel 8a when writing a DeepSeek vs ChatGPT comparison earlier today. Creating new accounts in particular appears to be causing DeepSeek trouble, with errors often appearing when a user attempts to send a prompt.

The latest incident appears to be resolved as of 9:32pm Chinese local time (8:32am ET / 1:32pm GMT). But considering the huge influx of users, there could be further issues throughout Monday and the rest of the week.

Teething issues


DeepSeek has taken the world by storm by offering an AI chatbot that's as good, if not better, than OpenAI's class-leading ChatGPT.

While that's excellent for people looking to get their hands on a free AI with immense capability, it could lead to issues and outages more frequently as the servers struggle to cope with demand.

We'll be monitoring this outage and potential future ones closely, so stay tuned to TechRadar for all your DeepSeek news.

You may also like
Categories: Technology

'There's a lot of content coming out': Your Friendly Neighborhood Spider-Man producer explains why the Marvel TV show has such a 'unique' release schedule

Mon, 01/27/2025 - 08:24
  • One of Your Friendly Neighborhood Spider-Man's producers has revealed why it has a "unique" release schedule
  • Brad Winderbaum says it's because there's "a lot of Marvel content in 2025"
  • He's denied that Daredevil: Born Again's forthcoming release is to blame for YFNSM's compressed release format

Your Friendly Neighborhood Spider-Man's "unique" release schedule isn't a byproduct of Daredevil: Born Again's forthcoming launch, one of its executive producers has said.

Brad Winderbaum, who's also a producer on Daredevil's upcoming standalone series, told TechRadar that the latter isn't to blame for its sibling series' compressed release format. Instead, Marvel's head of TV and streaming revealed that Your Friendly Neighborhood Spider-Man's (YFNSM) schedule is simply down to the company's ongoing "experimentation of how episodic shows are released".

Ever since WandaVision, the first Marvel Cinematic Universe (MCU) show, debuted on Disney Plus, the comic book giant has trialled new ways of delivering its small-screen content to fans. So far, it's tested launching new shows with one-episode or two-episode premieres, and releasing new installments of series on a daily basis, which it did with What If...?'s second and third seasons.

A post shared by Spider-Man (@spiderman)


YFNSM's release schedule is another example of that experimentation. Confirmed on Instagram (see above), Spider-Man's latest animated series will arrive in four multi-episode parts. The Marvel Phase 5 project's first two episodes arrive on launch day (January 29), episodes three to five debut one week later on February 5, episodes six to eight land on February 12, and the final two installments air on February 19.

With Daredevil: Born Again due out on one of the world's best streaming services just two weeks after YFNSM's first season ends, some fans have theorized that Marvel is pushing the latter out of the door early to make room for its highly anticipated sibling's arrival. However, Winderbaum suggests that, while YFNSM's multi-episode schedule differs from the studio's traditional release format, this couldn't be further from the truth.

Your Friendly Neighborhood Spider-Man's first season will be released in four parts (Image credit: Marvel Studios)

"I think that you can see with many streamers, including Disney Plus, there is a lot of experimentation going on with how episodic shows are released," he told me. "I'm a big fan of week over week. Part of that [reason] is that anticipation and the other is having to go through the week between [last episode's] cliff hanger to get the payoff. That's the TV show experience, so I like some sort of week over week cadence.

"But, there's a lot of Marvel content [being released] in 2025," he added. "We want to make sure that everything has room to stand on its own feet, so that was the reason to release it [YFNSM] over four weeks instead of ten. It's also a very unique show that has a high school soap opera element to it. You know, it's kind of our version of [Canadian high school series] Degrassi High in a lot of ways and there's a feeling of wanting to see that next step in the character journey, especially when you get into the into the latter two thirds of the of the season You really start to get used to those characters. You want to start seeing them interact and seeing those character setups pay off, so I'm excited for fans to get a different kind of viewing experience."

I'll be covering YFNSM in more detail throughout its first season. For now, find out why its showrunner has defended its divisive animation style in the wake of fierce fan criticism, or get the lowdown on everything we know about Daredevil's TV return by reading my Daredevil: Born Again guide.

You might also like
Categories: Technology
