A cyberattack against a French hospital has resulted in the theft of sensitive data on almost a million patients.
A threat actor with the alias near2tlg took to the infamous hacking community BreachForums to offer access to “multiple establishments”, including Centre Luxembourg, Clinique Alleray-Labrouste, and a couple of others.
They claimed that the offering granted access to sensitive data belonging to 1.5 million people, including patient records, billing, and other data.
Compromised accountTwo hours later, the same actor posted a new thread, selling “French hospital data”. The compromised information allegedly included people’s names, dates of birth, gender, postal addresses, cities, postal codes, phone numbers, and email addresses. Furthermore, the archive contained information on attending physicians, prescriptions, death declarations, and more. They said that 758,912 users were affected, and that the breach was done through Mediboard.
Mediboard is an Electronic Patient Record (EPR) solution, developed by Softway Medical Group. The company confirmed the breach to local media, but stressed that the attack did not come as a result of a vulnerability, but rather as a result of stolen credentials.
"We want to emphasize that the affected health data were not hosted by Softway Medical Group," they said.
In a statement to BleepingComputer, the company said that the compromised account had elevated privileges: "We can confirm that our software is not responsible, but rather, a privileged account within the client's infrastructure was compromised by an individual who exploited the standard functions of the solution.”
"This hypothesis has been substantiated. It is therefore neither due to improper implementation of the software nor human error."
At press time, there were no confirmed buyers, but healthcare information is usually highly regarded among cybercriminals. They can use it for a wide variety of crime, from phishing, to identity theft, wire fraud, and more.
You might also likeXbox Game Pass Ultimate members can now play "select" games they own via cloud streaming.
Microsoft announced this feature in 2019 and was initially supposed to launch alongside Xbox Cloud Gaming, so it's been a long time coming. The most recent update came in October from The Verge, which reported that the company was preparing to test the new update as part of a project called "Project Lapland" where it will expand its Game Pass Ultimate offering.
The publication also claimed that the work had been "complicated by having to prepare key infrastructure for thousands of games, instead of the hundreds that currently exist on Xbox Game Pass."
Now that the feature is finally here, players subscribed to Game Pass Ultimate can stream select cloud-playable games they own on various devices, even if they're not included in the service.
This includes Samsung Smart TVs, Amazon Fire TV devices, the Meta Quest, and other browser-supported devices like PCs, smartphones, and tablets.
Microsoft has released the list of 50 available games that are available to stream, and it includes some of the biggest titles like Baldur's Gate 3, Cyberpunk 2077, Balatro, Final Fantasy 14 Online, The Witcher 3: Wild Hunt, and more.
The company has confirmed that this list will continue to grow as it works with more studios. You can take a look at the full list below.
The 50 games available to cloud stream:PlayStation's official Black Friday sales are starting soon, and it's arguably the company's best yet especially if you're in the market for a certain VR headset.
Announced via the official PlayStation Blog, its Black Friday sales kick off tomorrow (November 22) and will last all the way up to Cyber Monday on December 2. They'll include a wide range of savings on hardware, software, and even PS Plus subscription time. The deals also seem to line up with a Dealabs post from reliable leaker billbil-kun which went up a few days ago.
The bulk of hardware deals will be available to browse and purchase at PlayStation Direct and a number of participating retailers. That'll likely include Amazon as well as Best Buy and Walmart in the US as well as Argos and potentially Currys in the UK.
Undoubtedly the headline discount here is for PSVR 2, which the blog post confirms will receive up to a massive 40% off. If that includes the Horizon: Call of the Mountain bundle then it'll be unmissable for PlayStation stalwarts who've yet to pick up the headset for themselves.
There's also going to be up to 25% off DualSense Wireless Controller colorways, the PlayStation Pulse Elite headset, Pulse Explore buds, and even PS5 console covers. Naturally, the PlayStation Store on console will also host a tirade of excellent savings on video games. It'll likely be the best time of year to purchase some of the best PS5 games for much, much less.
PS Plus is also set to receive a huge 30% discount on 12-month memberships - massive news for anyone who's wanted to get into that Premium tier for less. Existing members can also save up to 30% when upgrading their PS Plus tier from Essential to Extra, or Extra to Premium.
At TechRadar Gaming, we'll be keeping you updated with all the best Black Friday PS5 deals and Black Friday PS5 Pro deals as they happen. We have a feeling that PSVR 2 will absolutely fly off the shelves now that it'll be available at a significantly reduced price. So be sure to keep PlayStation Direct and your favorite retailers bookmarked if you want to get your hands on one.
You might also like...Eighteen months into the generative AI boom, some may wonder if the shine is wearing off. In April, Axios called gen AI a “solution in search of a problem.” A month later, a Gartner survey across the U.S., U.K. and Germany found that about half of all respondents expressed difficulty assessing AI’s organizational value despite generative solutions being the No. 1 form of deployment. Meanwhile, Apple and Meta are reportedly withholding key AI features from Europe over compliance concerns.
Between the regulatory hangups and ROI questions, it’s tempting to wonder whether generative AI may turn out to be the tech industry’s latest bauble– more NFT than Netflix, if you will. But the problem isn’t the technology; it’s the mindset. What we need is an alternative approach.
Not all AI is the same. We have a bandwagon problem with companies jumping on the AI train, particularly for generative use cases. Practitioners will only unlock the true potential of AI – including generative applications – when they prioritize an engineering-first mindset and cultivate the expertise to add domain knowledge. Then, and only then, can we build a roadmap for concrete, long-term value.
Not all AI is the sameBroadly speaking, Enterprise AI splits into generative and analytical applications. Generative has received all the recent attention thanks to its uncanny ability to create written content, computer code, realistic images, and even video in response to user prompts. AI for analytics meanwhile, has been commercialized for far longer. It’s the AI that enterprises use to help run operations, drawing trends and informing decisions based on large pools of data.
Analytical and generative AI can overlap, of course. Within a given stack, you might find all sorts of integrated usages – a generative solution on the front end, for example, that surfaces ‘traditional’ AI-powered analytics to provide data visualization for the answer. Still, the two sides are fundamentally different. Analytics AI helps you operate. It’s reactive. Generative AI helps you create. It’s proactive.
Too many stakeholders gloss over this bifurcation, but it matters in the all-important value conversation. AI-powered analytics have long proven their ROI. They make sense of how we assemble data, and the outputs – from customer segmentation to predictive maintenance to supply-chain optimization – drive well-established value.
Generative AI? That’s a different ballgame. We see lots of experimentation and capex, but not necessarily commensurate output. A firm’s engineers might be 30% more effective by using a generative AI tool to write code, for example, but if that doesn’t drive shorter product-to-market cycles or higher net-promoter scores, then it’s difficult to quantify real value. Leaders need to break the value chain into its modular components and ask the hard questions to map generative use cases to real value.
The bandwagon problemThe ROI problem for gen AI is just as much a bandwagon problem, with many stakeholders starting their search for an AI solution with only a generative implementation in mind. Business leaders are trying to force AI – and generative solutions especially – onto problems they don’t have. They’re inventing use cases just to get in the game, often at the urging of their boards, because they don’t want to be left behind.
It’s time to take a step back. Leaders need to remember two things.
First, it’s important to separate the use cases. Is this push for a generative solution best served by an analytical one, either in whole or in part? Often an organization just needs pure-play AI – for fraud detection or risk management, for example – and not a GPT that turns it into the latest prompt wizard.
Second, it’s just as important to integrate AI only where it makes sense. It should solve acute problems that the business can realize value by solving. Otherwise, it represents a solution without a problem. You gave the orchestra drums for an arrangement with no percussion.
Why domain knowledge is keyBandwagon skeptics who appreciate the nuances of AI can adopt a pragmatic approach that delivers honest value by taking an engineering-first perspective. The biggest problem with AI, whether generative or analytical, is a lack of understanding for the context or business domain that practitioners are working in.
You can generate a block of code, but without an understanding of where that code fits, you can’t solve any challenges. Consider an analogy: An enterprise might have let an AI model onto its street, but the engineers know the neighborhood. The firm needs to invest significant resources into training its latest resident. After all, it’s there to solve an acute problem, not to just go knocking on every door.
Done correctly, generative models can deliver substantive long-term value. AI can generate code against considerable requirements and context – guardrails built as part of a broader investment in domain knowledge – while engineers have the context to tweak and debug the outputs. It can accelerate productivity, make practitioners’ jobs easier and, if clearly mapped to the value chain, drive quantifiable ROI.
That’s why it’s essential to have the discipline to invest in this domain knowledge from the outset. Leaders need to build that into any AI investment plan if they want useful, long-term results. Sacrificing depth for speed can drive patchy solutions that don’t ultimately help, or only help for a short amount of time. Those who want AI for the long haul need to invest the effort to build context from the bottom up.
A roadmap for disciplineFor business leaders, the roadmap to value-driven AI starts by asking the right question: What problem in my enterprise do I really need AI to solve? Disciplined practitioners bring an engineering mindset that asks the right questions, considers deeper problems and seeks targeted solutions from the very start. Done right, analytical or generative AI can accelerate a team’s effectiveness by removing the mundane, boring parts of their roles. But the generative intelligence must have proper guidelines and industry-specific training, lest the implementations stray from their lanes.
Approached this way, gen AI won’t go the way of the metaverse. Its primitive beginnings can mature from superficial use cases to actual value because enterprises invested the resources to build context. If not, the cost of failure is already becoming clear. Firms will pile up additional computing, storage and network costs, only to find that they haven’t delivered any determinable cost savings or revenue gains.
But for those who adopt an engineering mindset and don’t take shortcuts, this alternative approach can indeed deliver. A pragmatic approach to AI starts by asking the right questions and committing to an investment of domain knowledge. It ends with targeted solutions that deliver quantifiable long-term value.
We list the best Large Language Models (LLMs) for coding.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
While VMware has long been a cornerstone in IT infrastructure, it’s increasingly clear that it comes with several challenges we can’t afford to overlook. Let’s start with the financial implications. VMware’s licensing costs and subscription fees are significant, to say the least, and the complexity of its licensing structure doesn’t help. It often feels like navigating a maze to find the right package, and the recurring maintenance costs only add to the burden. Then there’s the issue of vendor lock-in. When we commit to VMware, we’re committing to its entire ecosystem, which limits our flexibility. As multi-cloud strategies and open source solutions become more prevalent, the risk of being boxed in by a single provider’s roadmap grows. The dependency is real, and so is the challenge of migrating to other platforms—it's complex and expensive.
From a performance standpoint, VMware’s architecture is beginning to show its age. It may not be the best fit for modern cloud-native workloads like containerized environments or latency-sensitive applications such as AI. The overhead and scalability constraints inherent in VMware’s setup mean that we’re not always optimizing every byte of memory or every watt of power, which is a concern in today’s performance-driven world. Additionally, when we consider innovation, we must acknowledge that VMware, despite its dominance, has lagged in adopting new technologies such as edge computing, containerization, and advanced AI automation. It feels like the market is moving faster than VMware’s ability to keep pace.
Risk of exposureOperationally, VMware introduces meaningful complexity. Managing and maintaining its environment often requires highly specialized skills, and the ecosystem’s fragmentation—where each product has its own management interface—can lead to unnecessary administrative overhead. Version updates require operators to maintain elaborate dependency graphs and matrix diagrams to ensure changes to one part of the system don’t crash another. This complexity also extends to security. VMware has faced its share of vulnerabilities, and slow patch deployment increases the risk of exposure. Integrating third-party cybersecurity tools isn’t always straightforward, which leaves us with systems that aren’t as secure as they need to be in an era where cyber threats are at an all-time high. We’re faced with swallowing the bitter pill of potential downtime from exposure or potential downtime from installing the patch to fix the exposure.
The lack of a cloud-native focus is another concern. VMware’s traditional VM-centric architecture feels misaligned with modern cloud-native and DevOps approaches, where containers, microservices, and automation are the driving forces. While VMware offers solutions like Tanzu, they aren’t as efficient or deeply integrated as competitors built from the ground up for these purposes. This disconnect also complicates multi-cloud strategies—despite VMware’s efforts, achieving true flexibility and integration across different cloud platforms remains challenging. For the undetermined future, “legacy” software will remain in our data centers; this is a given. However, the reality is we need the best of both worlds: the ability to administer, secure, and scale these older workloads while designing, developing, and implementing more resilient cloud-native solutions, pushing availability and recovery into the application layer.
Problematic deploymentDeployment and scalability can also be problematic. VMware deployments can take significant time and effort, and scaling often demands precise planning and excessive hardware investments—something cloud-native platforms handle with much more flexibility. It’s particularly challenging to manage dynamic, ephemeral workloads in a VMware environment, which is at odds with modern IT practices where agility is key. Energy efficiency is another factor; VMware environments are not always optimized for power use, leading to increased operational costs, especially in large data centers.
Migration paths away from VMware can be costly and complex, further enforcing the sense of being locked into its ecosystem. Smaller implementations (tens to hundreds of VMs) aren’t incredibly difficult to move, but when you’re looking at moving thousands of VMs, all with interconnected dependencies across applications and hardware, mapping these to a new platform is what horror movies are made from. Even with solutions like Tanzu, VMware’s capabilities for managing containerized environments are fragmented, requiring additional licenses and tools to operate efficiently. This lack of native integration with modern DevOps methodologies, infrastructure-as-code practices, and agile development processes is increasingly apparent. VMware may have been a leader in the past, but as our IT strategies drive towards automation and flexibility, it feels like VMware is struggling to keep pace.
While it’s easy to scrutinize technology decisions, it’s equally important to acknowledge where a solution has excelled, and VMware certainly has its strengths. For years, VMware has been the gold standard in virtualization, providing rock-solid stability and reliability that IT departments have come to depend on. Regardless of the nuances, VMware has paved the way for some foundationally pivotal advancements in technology. vSphere has been instrumental in maximizing hardware utilization, enabling us to run multiple workloads efficiently and securely on a single host. This has not only reduced physical server sprawl but also significantly cut down on data center costs—an area where VMware’s impact has been undeniably positive.
We've featured the best virtual machine software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Reddit suffered through its second major outage in as many days on Thursday, November 21. The homepage and subreddits were unreachable for some on desktop and mobile, including us here at TechRadar, and there were numerous reports on social media.
Reddit confirmed to TechRadar that this outage was caused by an update that it quickly rolled back. Similarly, yesterday's more extended outage was the result of some bad code that introduced a bug. One thing that is not going on here, at least according to Reddit, is any outside force tampering with the platform. Instead, it's just a run of bad coding luck, and maybe a call for Reddit to have a word with its developers.
By 11:18AM ET, Reddit appeared mostly functional, quickly loading new pages but with some still featuring an error bar at the top.
While Reddit publicly apologized on X (formerly Twitter) for the outage, it has so far made no similar posts for this second go-round.
(Image credit: Future)Reddit appeared to be recovering by 4:13PM but the instability remains.
(Image credit: Future)Some pages, like the all-important Popular, remain blank.
(Image credit: Future)On the bright side, you can still peruse some of Reddit's homepage.
Today, perhaps coincidentally, was also the day of US News Anchor Katie Couric's first-ever Reddit AMA. It launched at 2PM ET, though we're not pointing any fingers.
Post by @katiecouric View on ThreadsWhile the Reddit homepage appears to be sort of functional, Reddit's own status checker is reporting degraded performance for most of its services. No word, yet, on what's causing the issues. and, yes, "popular" remains offline.
(Image credit: Future)As of 4:36PM ET, Reddit's homepage was failing again. Clearly, the issue, whatever it is, is significant enough that Reddit's developers are likely playing a game of whackamole with critical failures.
(Image credit: Future)While the function for adding a new Reddit post appears, the system is still reporting multiple errors, and ultimately nothing gets posted. Guess all those anxious Reddit users will have to head over to Discord to post updates.
(Image credit: Future)As is usually the case, people on other social media platforms are enjoying a little schadenfreude over Reddit's ongoing problems.
I hear Reddit is down Y’all know what that means #GME pic.twitter.com/ghFDPuTkoPNovember 20, 2024
Post by @davedigscars View on ThreadsEven upstart social media platform Bluesky was paying attention though it, too, was suffering through a series of small outages on Wednesday. There is no indication they are connected. Bluesky keeps going down because of its sudden and almost unprecedented popularity.
(Image credit: Future)Some parts of Reddit appear to be performing normally except for this annoying error message banner that appears at the top of almost every page. It's another sign that all is not well in Reddit Land.
(Image credit: Future)If Down Detector is any measure, Reddit appears to recovering. It's obviously not out of the woods yet and Reddit's status page has yet to show an update, but all signs point to an imminent return to normal Reddit up- and down-voting.
(Image credit: Future)Reddit's Popular section remains mostly down and while the other subreddits are live, Reddit's response time when you click on any of them is quite slow. This is still not the Reddit you know and love.
(Image credit: Future)Perhaps we spoke too soon. As of 5:00PM ET, Reddit's homepage was again mostly empty. The error message at the top is now the most interesting thing to read. Reddit has not updated its status report since 3:38PM ET.
(Image credit: Future)Reddit's last public comment on the outage was on X (formerly Twitter) in response to someone asking if Reddit was down. They posted, "We're working on it."
Yes. We're working on it. https://t.co/8MZVlusmD7November 20, 2024
In the first sign of truly good news, Reddit's crucial Popular subreddit returned, though not without the big, red error bar. We, however, consider this progress. Full Reddit and posting should be back in no time and then we can all get back to laughing at silly road and store signs and cats telling us ironically to "hang in there."
(Image credit: Future)The platform is healing. From the looks of things, Reddit is returning to form. It's still a bit slow and there are those occasional big, red issue bars but we're seeing more and more complete pages without that blemish. Some of them are even loading quickly.
(Image credit: Future)Hold on a second. Just as Reddit appeared to be truly recovering, we saw a new and far more dire message when we tried to access Reddit.com
A CDN going down or being unable to communicate with servers is far more serious and could indicate some kind of backbone outage. Reddit is a big system and probably needs a lot of back-end server support to keep delivering all those hot memes. Some part of that infrastructure could be down.
Our colleagues over at Tom's Guide got this statement from Reddit: "The teams are working on it but we don't have a timeline to share."
(Image credit: Future)Oh, Reddit. Even when you're down, you know how to reach us.
The discussion platform appears almost fully healed and Reddit has been mostly mum about the cause except for a brief comment from a spokesperson. Now, though, we have the "full story" and in completely Reddit fashion.
In truth, it's just a post on X, but it's also quite brilliant, being both a meme and an explanation. Reddit's X's post says the cause was a bug in a recent update that's now been fixed. Reddit is ramping back up. Next to it the explanation, though, is an image from a classic Real Housewives meme. That is just so Reddit.
Welcome back, Reddit, and thanks for keeping it saucy.
pic.twitter.com/4YjwVdWRKONovember 20, 2024
All's well that ends well, right? On Thursday, Reddit was back to full operation
Down Detector tells the taleDown Detector is showing a major spike in Reddit outage reports. And Reddit's homepage is not loading. This issue looks, perhaps, worse than yesterday's.
(Image credit: Future)Social media immediately took notice but Reddit, which acknowledged yesterday's outage on X (formerly Twitter) and later posted a humorous update and apology when the discussion platform returned to normal has yet to comment.
Reddit is reportedly down for thousands of users currently. Are you one of them? #RedditDown https://t.co/seLUFviBwGNovember 21, 2024
Reddit's status page appears unaware of the issue, showing "All systems operational." If this outage continues, that's sure to change.
(Image credit: Future)You might wonder why we're reporting on a Reddit site outage, it's just another place where people chat, post memes, and run celebrity AMAs (As Me Anything). Reddit is more than that. It's increasingly become a place where people find answers hidden in an untold number of topic-based subreddits (note how often it appears at the top of your Google Search results).
Reddit is also incredibly popular with over half a billion users, many of them between 18 and 29 years old. One report found that the majority of people come to Reddit for entertainment and "r/funny is the most popular subreddit."
(Image credit: Future)If I'm being honest, I'm not the biggest Reddit user. I visit sometimes and even end up going down a few subreddit rabbit holes. I've read some memorable AMAs – Nic Cage's comes to mind – but I'm not embedded in the platform.
So I asked someone who is to tell us why Reddit is so important. Hamish Hector is TechRadar's Senior Staff Writer for News and when I asked him why Reddit is his platform of choice, he sent me this:
Since abandoning Twitter and Facebook (for the most part) Reddit has become my social media platform of choice because it’s so convenient to find the niche communities I’m interested in. Rather than a timeline filled with (let’s face it) random garbage with the occasional relevant post, Reddit feeds me curated news, guides, and memes for just the topics I care about.
Reddit is aware of the issue and a company spokesperson told us this morning, "An update we made caused some instability. We reverted and are seeing Reddit ramp back up."
That's good news and might mean we're back to checking the latest dank memes in no time.
Reddit is already showing some signs of life. As of 11AM ET, it was back to serving pages but with a large, red error banner at the top. This is progress. We may be seeing the promised reversion taking effect.
(Image credit: Future) Success subredditReddit's reversion had the desired effect and Reddit appears to be back online. Down Detector reports have dropped precipitously and we're now able to navigate the discussion site.
(Image credit: Future)Imagine a 3D printer that can print creations exactly where you need them. That’s the idea behind MobiPrint, a mobile 3D printing robot developed by Daniel Campos Zamora at the University of Washington.
This innovative device autonomously navigates a room, printing designs directly onto floors or other surfaces, offering “a new system that combines robotics and 3D printing that could actually go and print in the real world,” as Campos Zamora told IEEE Spectrum.
Unveiled at the ACM Symposium on User Interface Software and Technology, MobiPrint introduces a new level of flexibility to 3D printing. The system consists of a modified Prusa Mini+ 3D printer mounted on a Roborock S5 vacuum robot, using the open source software Valetudo to map its environment autonomously, allowing users to view and control its path locally without cloud dependency.
Park and printCapable of printing on carpet, hardwood, and vinyl with dimensions up to 180 x 180 x 65 mm, MobiPrint has already been used to produce objects like pet food bowls, signage, and tactile markers, demonstrating its practical versatility.
The inspiration for MobiPrint came from accessibility needs. Campos Zamora’s lab is focused on creating tools to assist visually impaired users. “One of the things that really inspired this project was looking at the tactile surface indicators that help blind and low vision users find their way around a space,” he explained to IEEE Spectrum. By printing these indicators directly on-site, MobiPrint could make navigation easier in indoor spaces that frequently change.
Currently, MobiPrint operates in a “park and print” mode, requiring it to stay stationary while printing, limiting its ability to create larger designs. However, Campos Zamora envisions expanding its functionality to print larger, continuous objects, follow users to print accessibility markers along their path, or even use AI to suggest print locations.
Though the robot may seem unconventional, and it certainly looks rudimentary in its current form, the technology has the potential to make 3D printing more accessible and versatile, directly shaping spaces with designs customized to the user’s needs. You can see it in action in the video below.
You might also likeAudio-Technica has issued a warning for its new SQ1TW2 wireless earphones, with some versions suffering from a fault with the battery which can overheat with alarming consequences.
In an email, Audio-Technica advised TechRadar that one batch of stock of these earphones are affected by a problem which means a “few of the charging cases are faulty due to an overheating battery that can produce smoke.”
If you want to check, the potentially affected models have serial numbers between 2322 and 2426 – you can see the number on the inside of the charging case, as shown in the image below. Also, if your case has no serial number, then it could be hit by the issue, too.
If you have one of the affected models of SQ1TW2 earphones, you should contact Audio-Technica, and the company will arrange a replacement, and for the safe disposal of the faulty product.
As you might guess, you should also not use the charging case with the earphones while you’re waiting for your faulty model to be swapped out.
(Image credit: Audio-Technica) A precautionary recallClearly, this is an unfortunate affair, so be sure to check the serial number if you have bought the SQ1TW2 earphones.
They’re likely to have been a popular budget model, given that they’re the sequel to the original SQ1TW, earphones that we heaped tons of praise on in our five-star review (sporting a sound with a far higher quality than others in its price bracket). The SQ1TW2 was launched in August 2024, at an even cheaper price point than the original earphones, with a more compact nature.
So, this is a rather unsightly blot on an otherwise exciting budget pair of earphones, but of course, the impact is limited to a (hopefully) small number of models, going by the communication from the company. Audio-Technica makes it clear that no one has been harmed by this issue to date, and the recall is a precautionary measure.
Still, it isn't the first issue it's had of this nature – a couple of years ago, it issued a product safety notice for its ATH-CK3TW earbuds, due to a similar overheating problem with its charging case. We've asked Audio-Technica if it's going to publish a similar notice for its SQ1TW2 earphones and will update this story if we hear back.
You might also likeNot all premium VPN services protect your privacy equally, with over half of the most popular services suffering some form of data leak. At least three apps also shared your personal information "in a way that put user privacy at risk."
These are the main findings from new research conducted by Top10VPN based on the 30 most popular premium providers for Android devices. These include some of the best VPN apps on the market, such as NordVPN, ExpressVPN, Proton VPN, and Surfshark.
"I don’t want to exaggerate the level of risk. For most users, it is fairly low, but it does depend on your threat model," Simon Migliano, Head of Research at Top10VPN, told TechRadar, noting Avira Phantom VPN and FastestVPN as the paid Android VPNs to "absolutely" avoid.
Paid Android VPN apps' privacy failsAs mentioned, Migliano conducted testing on the 30 most popular paid Android VPNs to identify potential safety issues within the apps – you can find the full list of services analyzed here.
These tests focused on different areas, namely DNS and other data leaks, VPN encryption, VPN tunnel stability, risky app permissions, risky use of device hardware features, and data collection and sharing.
The most surprising result for Migliano was that half of the top paid VPNs tested (15) failed to ensure SNI (Server Name Indication) was encrypted for all server connections the apps make. SNI is an extension to the TLS protocol that a client needs to indicate the hostname of the server it’s trying to connect to during the handshake process.
While this leak may be relatively minor for most people, "It’s an oversight that could land someone in trouble with their school or workplace if VPNs aren’t allowed on the network, or even in legal trouble somewhere like Turkey or China, where VPNs are heavily regulated," he added.
According to Migliano's data, Surfshark, Private Internet Access (PIA) and PrivadoVPN were some of the apps still overlooking SNI encryption.
Did you know?(Image credit: Shutterstock)A virtual private network (VPN) is security software that encrypts your internet connection to prevent third parties from accessing your data in transit and snooping on your online activities. At the same time, it also spoofs your real IP address location for maximum anonymity, granting you access to otherwise geo-restricted content.
At least seven Android VPNs also leaked DNS requests – meaning the device's request to a Domain Name System server to provide an IP address for a given hostname.
Again, these data leaks aren't critical and happen only under very specific circumstances, so it won't be a big issue for most users. That said, Migliano believes that "a properly configured VPN should terminate all existing network connections to prevent this from happening."
This is why, if private browsing is crucial for you, he suggests avoiding the VPNs impacted by this issue, namely HMA!, Private VPN, Mozilla VPN, Privado, VyprVPN, X-VPN, and Avira Phantom.
FastestVPN was another big no for Migliano on this front. He said: "I could never recommend FastestVPN after it exposed my email address in clear text in the headers of a server request to a geolocation API, which is unforgivable."
While way better than free VPN apps, data collection and sharing may also be an issue for some providers. Migliano found seven apps out of 30 analyzed to pose a potential privacy risk due to embedded tracking code from advertisers and data brokers. Yet, only two VPNs (VPN Unlimited and Hotspot Shield) were found guilty of actually sharing data in practice, while X-VPN employed poor data-sharing practices.
VPN encryption for paid services was good overall. Yet, while seven apps failed to use the latest version of TLS to establish the VPN tunnel (AES-256), Avira Phantom made use of the deprecated SSLv2 protocol which, Migliano noted, has long been considered insecure.
Pakistan's top religious body has said that using a VPN service to access blocked content goes against Shariah, the Islamic law.
The statement from the country's constitutional body for legal advice on Islamic matters described their responsibility to prevent the "spread of evil", according to the report from the Associated Press.
Pakistan's residents have increasingly turned to virtual private network (VPN) software as a way to access X which has been blocked since February.
Authorities announced plans to regulate the use of VPNs back in August. While the debate is still ongoing on whether or not commercial VPNs should also be blocked – the Pakistani English-speaking publication Dawn reported – businesses and freelancers have time until November 30, 2024, to register their service and avoid disruptions.
The Pakistan VPN debate"Using VPNs to access blocked or illegal content is against Islamic and social norms, therefore, their use is not acceptable under Islamic law," reads the official statement released on Friday, November 15, quoting the Council of Islamic Ideology’s chairman Raghib Naeemi – Voice of America reported.
The statement also notes that any technology used to access "immoral or illegal activities is prohibited according to Islamic principles," the internet included. Illegal content includes "immoral and porn websites or websites that spread anarchy through disinformation."
On the same day, also the Ministry of Interior spoke out against VPN usage.
In a letter sent to the Pakistan Telecommunication Authority (PTA), he calls to block all "illegal" VPNs, claiming that terrorists use these tools "to facilitate violent activities and financial transactions in Pakistan."
Do you know?(Image credit: Getty Images)On Sunday, November 9, 2024, people in Pakistan lamented issues accessing their VPN apps throughout the day, in what looked like the beginning of the crackdown on "unregistered" VPNs. Authorities confirmed this to be a "brief technical glitch," while reiterating the need to register their service to avoid further disruptions.
The best VPN providers have been recording an increase in usage from citizens in Pakistan this year as people try to keep accessing X and other blocked content online. This is because such security software spoofs a user's real IP address location to grant access to otherwise geo-restricted content while encrypting internet connection to boost online anonymity.
At times, VPNs have also become a target as authorities seek to prevent people from using these services to bypass government-imposed restrictions.
As Dawn reported, though, VPN usage is still permitted in Pakistan for legitimate purposes. These include banking, foreign missions, corporate enterprises, universities, IT companies, call centers, and freelance professionals.
Authorities are now urging companies and freelance workers operating in the aforementioned sectors to complete the VPN registration with PTA by the end of the month. Failing to do that could mean further service interruptions in the future.
While it isn't clear yet how the blocking will work in practice, the new legislation aims to curb VPN misuse and security risks. Authorities deemed unregistered VPNs a "security risk" for Pakistan as they can be used to access "sensitive data."
Yet, at the same time, internet experts also believe that the increase in censorship is the main cause of the decline of the country's internet, with VPNs remaining the best tool to keep accessing the free web.
Shares in tax software giants Intuit and H&R Block have fallen after reports claimed Donald Trump’s administration advisory team could be exploring creating a new, free tax-filing app.
The Washington Post reported that Trump’s proposed Department of Government Efficiency (DOGE), an external advisory body led by Elon Musk, could be considering the app as part of plans to streamline government operations.
The two software companies, which currently dominate the tax-filing market, would face major competition if the US government were to introduce a free alternative.
Trump and Musk considering free tax-filing appWhile discussions surrounding the app appear to be in their early stages, the prospect has raised concerns among private tax-preparation firms which make a profit off US citizens filing their taxes.
The IRS currently offers free filing options to eligible taxpayers earning less than $79,000. The agency has also launched Direct File, a pilot program that it’s trialling across 12 states to give 18 million taxpayers free access to online tax-filing services.
The DOGE-backed app would build on these existing efforts, which have been driven under the Biden-Harris administration. It’s unclear whether the app would become available for all US taxpayers.
Intuit spokesperson Tania Mercado commented (via CNBC): “For decades, Intuit has publicly called for simplifying the U.S. tax code so individuals, families, and small businesses can better understand their finances.”
The Federal Trade Commission recently confirmed it would be taking action against H&R Block for “deceptively marketing their products as ‘free’ when they were not free for many consumers,” among other concerns. The proposed settlement would see H&R Block liable to paying out $7 million.
You might also likeTwo independent audits officially confirm that NordVPN is way more than just the best VPN app on the market.
Experts at AV-TEST, a German cybersecurity testing firm, recently ranked NordVPN's newly launched Threat Protection Pro as the top tool for blocking malicious sites. The feature also received the highest rating in an anti-malware validation conducted by the technical research and product testing organization West Coast Labs (WCL).
While you need to upgrade to a top-tier plan to use NordVPN Threat Protection Pro, you can now save some bucks in the process thanks to its great time-limited Black Friday VPN deal.
Two golden medals for protectionIn October, AV-TEST evaluated the capabilities of five well-known VPN providers in detecting different types of malicious links.
These included three specific categories: phishing links, portable executable (PE) URLs (for example, EXE files), and non-portable executable web addresses (for example, HTML and JavaScript files). Experts also looked at how good the VPN services were in avoiding false positives, meaning flagging legitimate links as malicious.
NordVPN Threat Protection Pro managed to successfully detect and block 83.42% of malicious links, leading in all three categories. In contrast, the second-best result captured only less than half (46.96%).
AV-TEST experts used 3,209 links in total, consisting of 1,050 malicious links to PE files, 1,031 links to other malicious (non-PE) file types, and 1,128 links to phishing sites. (Image credit: Nord Security)WCL's testing was focused on malware protection. Here, NordVPN achieved a staggering 99.8% detection rate for high-threat malware.
The provider earned the highest AAA rating overall thanks to top marks in other categories, too. These include a smooth buying experience, easy and customizable installation, sleek apps, and reliable customer support.
This isn't the first time that NordVPN Threat Protection Pro has proved its effectiveness with third-party observers. Back in August, the feature gained the bronze medal out of 35 competitors as a top tool to avoid online shopping scams.
Yet, Domininkas Virbickas, head of development at Threat Protection, explains that the recent ratings complement the evaluations conducted by AV-Comparatives this summer by providing a broader picture of the tool's capabilities.
Do you know?(Image credit: NordVPN)The latest round of testing carried out by TechRadar's reviewers in September also confirmed how the provider upped the game for malware and phishing protection.
He said: "These results validate our consistent commitment to providing comprehensive protection against a wide range of online threats."
As Virbickas puts it, "The internet is full of scammers." A simple click on a wrong link is all it takes for attackers to steal your account, money, or identity.
It's with this in mind that the team at NordVPN decided to give a boost to its tracker blocker tool back in June.
What used to be called Threat Protection Lite – now simply known as Threat Protection – is based on DNS filtering and is still available to all NordVPN customers using Android, iOS, Linux, Windows, macOS, and browser extensions.
By contrast, Threat Protection Pro works at the URL and Javascript levels to help you avoid tracking, phishing, scams, malware, and annoying ads and is exclusive for Standard, Plus, Complete, Ultimate, and Ultra subscribers on Windows and macOS only at the time of writing.
There appears to be a new ransomware player in town, exploiting vulnerabilities in Zyxel firewalls and IPSec access points to compromise victims, steal their data, and encrypt their systems.
The group is called Helldown, and has been active since summer 2023, a new report from cybersecurity researchers has revealed Sekoia, noting the group most likely uses a previously undisclosed vulnerability in Zyxel’s firewalls for initial access.
Furthermore, the group seems to be exploiting CVE-2024-42057, a command injection bug in IPSec VPN that, in certain scenarios, grants unauthenticated users the ability to run OS commands.
Dozens of victimsWhen they breach a target network, they steal as many files as they can, and encrypt the system. For encryption, they seem to be using a piece of software developed from the leaked LockBit 3 builder. The researchers said the encryptor was relatively basic, but also probably still under development.
As basic as it is, the encryptor still locked down at least 31 organizations, as that’s the number of victims listed on the group’s data leak site. According to BleepingComputer, between November 7 and today, the number dropped to 28, which could be a hint that some organizations paid the ransom demand. We don’t know who the victims are, or how much money the crooks demanded in return for the decryption key and for keeping the data secure.
Most of the victims seem to be small and medium-sized organizations in the United States and Europe.
If the researchers are indeed right, and Helldown does use flaws in Zyxel and IPSec instances to breach the networks, the best way to defend would be to keep these devices up to date, and limit access to trusted accounts only. CVE-2024-42057 that plagues IPSec was fixed on September 3, and the earliest clean firmware version is 5.39. For Zyxel, since the vulnerability is still undisclosed, it would be wise to keep an eye on upcoming advisories and deploy the patch as soon as it’s published.
Via BleepingComputer
You might also likeAWS Identity and Access Management is helping businesses boost multi-factor authentication (MFA) adoption and organizational security by introducing a centrally managed security feature.
The tool will help organizations and security teams manage root credentials and root sessions through AWS Organizations.
AWS hopes the tool will help reduce the risk of lateral movement and privilege escalation in the event of a cyberattack, while also making day to day security easier and scalable.
Boosting MFA and account securityAWS has taken several steps recently to enhance account security, initially introducing MFA for management account root users before launching FIDO2 passkey support which resulted in a 100% increase in MFA adoption for AWS Organizations users with more than 750,000 AWS root users enabling the phishing-resistant authentication method.
Now, security teams will also be able to remove long-term root credentials to prevent them from being abused, and will also stop them from being recovered and used maliciously.
“This will improve the security posture of our customers while simultaneously reducing their operational effort,” the blog post stated.
The centralized management tool will also allow security teams to create accounts without root credentials, making them secure-by-default and removing the need for additional security measures. The tool will also assist with compliance-related issues by allowing security teams to closely monitor and remove long-term root credentials.
As an additional preventative measure against the misuse of root credentials, AWS is also introducing ‘root sessions’ that provide short-term access for specific tasks and actions, relying on the principle of least privilege to minimize the possibility of malicious use.
Root sessions will also reduce the burden on security teams by helping them adhere to AWS best practices, and perform privileged root actions from a single central dashboard, rather than having to manually log in to each user account.
Central root account management is available through IAM console, AWS CLI or AWS SDK, with additional details for obtaining root credentials on the AWS blog.
Are you ready to be terrified by xenomorphs and face huggers galore again? You better be, because Alien: Earth – the first-ever TV series set in the sci-fi horror franchise's universe – has secured a mid-2025 release window.
Announced in a Disney press release, the show, which is being helmed by Fargo creator Noah Hawley, will officially make its debut on Hulu (US) and Disney Plus (internationally) sometime between June and September next year. Indeed, confirmation comes by way of a brand-new teaser for the Alien franchise's inaugural small screen project, which revealed Alien: Earth will emerge from its ovomorph in summer 2025 (that's winter for southern hemisphere dwellers).
Unfortunately, the series' latest teaser doesn't contain any new footage for fans to pore over. That'll be a grave disappointment to many people, myself included, too, especially after Alien: Earth's first teaser was the most underwhelming one I'd seen in a long time.
To be fair to this newest trailer – if it can be labeled as such – there are some quick-flash clips that appear around its midway point. The blurry nature of these snippets, though, mean it's incredibly difficult to determine what's being shown. Indeed, the only thing I could pick out was a person screaming at around the 0:24 mark. It's a bizarre way to market what's like to be one of the best Hulu shows and best Disney Plus shows of 2025, too, especially after a 'new on Disney Plus in 2025' trailer, which arrived in mid-November, actually showed some proper footage from Alien: Earth. Why not include those clips in this new teaser, then?
What is Alien: Earth about? Sydney Chandley's Wendy will be the latest in a long line of female heroes who'll face off against a xenomorph or two (Image credit: FX/Hulu/Disney Plus)But enough of my complaining. You want to know what to expect from Alien: Earth's plot, don't you? Lucky for you, FX/Disney has provided new details concerning its story.
"When a mysterious space vessel crash-lands on Earth," the plot synopsis reads, "a young woman and a ragtag group of tactical soldiers make a fateful discovery that puts them face-to-face with the planet’s greatest threat in the sci-fi horror series Alien: Earth.
"As members of the crash recovery crew search for survivors among the wreckage, they encounter mysterious predatory life forms more terrifying than they could have ever imagined. With this new threat unlocked, the search crew must fight for survival and what they choose to do with this discovery could change planet Earth as they know it."
Alien: Romulus, the sci-fi horror franchise's latest entry, was released in theaters in August (Image credit: 20th Century Studios)Sydney Chandler (Sugar) leads an all-star cast as Wendy, a woman with the consciousness of a child who'll likely be part of the aforementioned crash recovery team. She'll be joined by numerous other recognizable faces, including Alex Lawther (Andor), Timothy Olyphant (The Mandalorian), Essie Davis (One Day), Samuel Blenkin (The Sandman), Babou Ceesay (Into the Badlands), David Rysdahl (Oppenheimer), Adrian Edmondson (3 Body Problem), Adarsh Gourav (The White Tiger), Jonathan Ajayi (Vigil), Erana James (The Wilds), Lily Newmark (Sex Education), Diem Camille (Psychosia), and Moe Bar-El (Honour).
As well as creating the show, Hawley is also on lead scriptwriting and directing duties. Ridley Scott, who created the Alien franchise, joins Hawley on the executive producing front, too.
It's a somewhat busy time for the Alien series. A brand-new film entry, titled Alien: Romulus, was one of 2024's many new movies and, after its solid box office performance, a sequel is believed to be in the works. Alien: Romulus is set to make its streaming debut on Hulu this Thursday (November 21), too, but there's no word on when it'll arrive on Disney Plus in overseas territories.
You might also likeDell Technologies and Iron Bow Technologies have agreed to pay more than $2 million each to resolve allegations that they overcharged the US Army under a government computing contract.
The settlements, confirmed in an announcement by the US Department of Justice, address claims of “non-competitive bids” submitted by the companies to secure army contracts at overinflated charges.
Dell will pay out $2.3 million, with Virginia-based Iron Bow set to pay $2.05 million, to settle the claims.
Dell and Iron Bow settlementsAccording to the DOJ, Dell operated a deal registration program that gave Iron Bow preferential pricing for Dell computer hardware. This subsequently let Iron Bow offer lower bids to the Army, while Dell submitted higher bids to guide the army towards Iron Bow.
Principal Deputy Assistant Attorney General Brian M. Boynton, head of the Justice Department’s Civil Division, stated: “The United States relies on competition to get the best value and price for the American taxpayers.”
US Attorney Prim F. Escalona for the Northern District of Alabama added: “Fraud in the government contracting process costs taxpayers untold dollars each year… We will continue to work with our federal law enforcement partners to investigate and pursue those who commit government contracting fraud.”
The settlement also aims to resolve a whistleblower lawsuit filed by Brent Lillard, an executive of another IT reseller, under the False Claims Act. A $345,000 slice of Dell’s $2.3 million payout is destined for Lillard.
This isn’t the first time that software and hardware providers have been accused of overcharging the US government and its agencies. Earlier this month, the DOJ shared two instances of fraudulent IT contracts which resulted in six individuals being charged or indicted.
German company SAP was also raided by the FBI amid a longstanding investigation into allegations that the company had overcharged the US government and military for use of its software.
You might also likeRecently, DevOps professionals were reminded that the software supply chain is rife with risk, or as I like to say, it’s a raging dumpster fire. Sadly, this risk now includes open source artificial intelligence (AI) software. Especially after further investigations into Hugging Face (think GitHub for AI models and training data) uncovered up to one hundred potentially malicious models residing in its platform, this incident is a reality check regarding the ever-present vulnerabilities that can too easily catch unsuspecting dev teams by surprise as they work to acquire machine learning (ML) or AI models, datasets, or demo applications.
Hugging Face does not stand alone in its vulnerability. PyTorch, another open-source ML library developed by Facebook's AI Research lab (FAIR), is widely used for deep learning applications and provides a flexible platform for building, training, and deploying neural networks. PyTorch is built on the Torch library and offers strong support for tensor computation and GPU acceleration, making it highly efficient for complex mathematical operations often required in ML tasks.
However, its recent compromise raises specific concerns about blindly trusting AI models from open-source sites for fear the content has been previously poisoned by malicious actors.
This fear, while justified, is starkly contrasted with the long-standing belief in the benefits of open-source platforms, such as fostering community through collaboration on projects and cultivating and promoting other people's ideas. Any benefits to building secure communities around large language models (LLMs) and AI, seem to evaporate with the increased potential for malicious actors to enter the supply chain, and corrupt CI/CD pipelines or change components that were believed to have initially come from trusted sources.
Software security evolves from DevOps to LLMOpsLLMs and AI have expanded concern over supply chain security for organizations, particularly as interest in incorporating LLMs into product portfolios grows across a range of sectors. For cybersecurity leaders whose organizations are looking to adapt to the broad availability of AI applications, they must stand firm against risks introduced by suppliers not just for traditional DevSecOps, but now for ML operations (MLOps) and LLM operations (LLMOps) as well.
CISOs and security professionals should be proactive about detecting malicious datasets and responding quickly to potential supply chain attacks. To do that, you must be aware of what these threats look like.
Introduction to LLM-specific vulnerabilitiesThe Open Worldwide Application Security Project (OWASP) is a nonprofit foundation working to improve the security of software, through community-led open-source projects including code, documentation, and standards. It is a true global community of greater than 200,000 users from all over the world, in more than 250+ local chapters, and provides industry-leading educational and training conferences.
The work of this community has led to the creation of the OWASP Top 10 vulnerabilities for LLMs, and as one of its original authors, I know how these vulnerabilities differ from traditional application vulnerabilities, and why they are significant in the context of AI development.
LLM-specific vulnerabilities, while initially appearing isolated, can have far-reaching implications for software supply chains, as many organizations are increasingly integrating AI into their development and operational processes. For example, a Prompt Injection vulnerability allows adversaries to manipulate an LLM through cleverly crafted inputs. This type of vulnerability can lead to the corruption of outputs and potentially spread incorrect or insecure code through connected systems, affecting downstream supply chain components if not properly mitigated.
Other security threats are caused by the propensity for an LLM to hallucinate, causing models to generate inaccurate or misleading information This can lead to vulnerabilities being introduced in code that is trusted by downstream developers or partners. Malicious actors could exploit hallucinations to introduce insecure code, potentially triggering new types of supply chain attacks that propagate through trusted systems. This can also create severe reputational or legal risks if these vulnerabilities are discovered after deployment.
Further vulnerabilities involve insecure output handling and the challenges in differentiating intended versus dangerous input to an LLM. Attackers can manipulate inputs to an LLM, leading to the generation of harmful outputs that may pass unnoticed through automated systems. Without proper filtering and output validation, malicious actors could compromise entire stages of the software development lifecycle. Implementing a Zero Trust approach is crucial to filter data both from the LLM to users and from the LLM to backend systems. This approach can involve using tools like the OpenAI Moderation API to ensure safer filtering.
Finally, when it comes to training data, this information can be compromised in two ways: label poisoning which refers to inaccurately labeling data to provoke a harmful response; or training data compromise, which influences the model's judgments by tainting a portion of its training data, and skewing decision making. While data poisoning implies that a malicious actor might actively work to contaminate your model, it’s also quite possible this could happen by mistake, especially with training datasets distilled from public internet sources.
There is the possibility that a model could “know too much” in some cases, where it regurgitates information on which it was trained or to which it had access. For example, in December of 2023, researchers from Stanford showed that a highly popular dataset (LAION-5B) used to train image generation algorithms such as Stable Diffusion contained over 3,000 images related to “child sexual abuse material.” This example sent developers in the AI image generation space scrambling to determine if their models used this training data and what impact that might have on their applications. If a development team for a particular application hadn’t carefully documented the training data they’d used, they wouldn’t know if they were exposed to risks that their models could generate immoral and illegal images.
Tools and security measures to help build boundariesTo mitigate these threats, developers can incorporate security measures into the AI development lifecycle to create more robust and secure applications. To do this, they can implement secure processes for building LLM apps, identified in five simple steps:
1) foundation model selection; 2) data preparation; 3) validation; 4) deployment; and 5) monitoring.
To enhance the security of LLMs, developers can leverage cryptographic techniques such as digital signatures. By digitally signing a model with a private key, a unique identifier is created that can be verified using a corresponding public key. This process ensures the model's authenticity and integrity, preventing unauthorized modifications and tampering. Digital signatures are particularly valuable in supply chain environments where models are distributed or deployed through cloud services, as they provide a way to authenticate models as they move between different systems.
Watermarking is another effective technique for safeguarding LLMs. By embedding subtle, imperceptible identifiers within the model's parameters, watermarking creates a unique fingerprint that traces the model back to its origin. Even if the model is duplicated or stolen, the watermark remains embedded, allowing for detection and identification. While digital signatures primarily focus on preventing unauthorized modifications, watermarks serve as a persistent marker of ownership, providing an additional layer of protection against unauthorized use and distribution.
Model Cards and Software Bill of Materials (SBOMs) are also tools designed to increase transparency and understanding of complex software systems, including AI models. A SBOM is essentially a detailed inventory of all software product components and focuses on listing and detailing every piece of third-party and open-source software included in a software product. SBOMs are critical for understanding the software's composition, especially for tracking vulnerabilities, licenses, and dependencies. Note that AI-specific versions are currently in development.
A key innovation in CycloneDX 1.5 is the ML-BOM (Machine Learning BOM), a game-changer for ML applications. This feature allows for the comprehensive listing of ML models, algorithms, datasets, training pipelines, and frameworks within an SBOM, and captures essential details such as model provenance, versioning, dependencies, and performance metrics, facilitating reproducibility, governance, risk assessment, and compliance for ML systems.
For ML applications, this advancement is profound. The ML-BOM provides clear visibility into the components and processes involved in ML development and deployment, to help stakeholders grasp the composition of ML systems, identify potential risks, and consider ethical implications. In the security domain, it enables the identification and remedy of vulnerabilities in ML components and dependencies, which is essential for conducting security audits and risk assessments, contributing significantly to developing secure and trustworthy ML systems. It also supports adherence to compliance and regulatory requirements, such as GDPR and CCPA, by ensuring transparency and governance of ML systems.
Finally, use of strategies that extend DevSecOps to LLMOps are essential like model selection, scrubbing training data, securing the pipeline, automating the ML-BOM build, building an AI Red Team, and properly monitoring and logging the system with tools to help you. All of these suggestions provide the appropriate guardrails for safe LLM development while also embracing the inspiration and broad imagination for what is possible using AI, with an emphasis for maintaining a secure foundation of Zero Trust.
We've featured the best network monitoring tool.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Whether you work in the mobile world or not, you probably heard of the term “super app” and that these apps are very popular in certain markets, especially Asia. These all-in-one platforms integrate multiple functionalities beyond a company’s core offering. They usually have a complete, all-encompassing user experience that normally would require multiple smaller apps.
While in the West this type of app is not as popular as in other markets, there is a growing appetite with apps such as Uber and Revolut at the vanguard. It is unsurprising as the potential is massive with the super app market projected to reach a staggering $714.73 billion by 2032. Our own research at SplitMetrics has also found that in 2024 (until the end of September) super apps added 370 million new users globally, with a global lifetime user base of 4.7 billion.
That is super competitive and can be daunting for anyone launching a new app but with the right user experience and marketing, you can still make your new super app a success.
A crowded market - how do you disrupt the established?You’ve got to start by identifying your app’s Unique Selling Proposition (USP). This differentiation doesn’t have to be a radical change; it can be an improvement in functionality, cost, or user experience.
To differentiate effectively, new super apps should:
Identify a clear USP: Highlight how your app improves upon existing solutions, whether through innovative features or better value.
Combine services creatively: Offer a unique mix of services that aren’t currently bundled together by competitors.
Invest in marketing: Launch a well-resourced and optimized marketing campaign to gain visibility and enter the conversation.
Target untapped markets: Consider focusing on regions outside of Asia where super apps are gaining popularity but competition is less intense. Building a sizable user base in these markets can provide a strong foundation before entering the more competitive Asian landscape.
Adapt to changeThe app industry is among the most competitive and dynamic sectors around. Consumer preferences are in a constant state of flux and there are literally thousands of new apps launched every single day. This means that even very successful apps can’t stay still - they need to be constantly reevaluating both their offering and their marketing initiatives. What is important to remember is that the average consumer does not think about using a Super App or a specialized app as a binary choice. Most wouldn’t even recognize an app like Uber as a super app.
The critical factor is that every aspect of the app works as well as it can because if it doesn’t there are plenty of competitors that a user can switch to. The key to ensuring the best user experience is responding to change. This means knowing what new innovations can enhance your app or how it is marketed, what your competitors are doing and how market conditions and consumer demands are changing. If you stay ahead of the curve, you will be able to stay competitive.
Do your market researchAlthough we often talk about Asia or Western markets as a homogeneous block, the reality is that there are huge variations between each country. What may work well in France, might not appeal to an audience in Australia. Knowing the market and tailoring the app offering and how it is marketed to the demands of consumers in each country is fundamental.
App developers will naturally know their home market the best, so that’s the most obvious place to start. From there, it is about identifying the next most similar market and modifying the app and how you promote it to that audience. It may seem appealing to go after the most lucrative markets first, for example, the US. However, not only will they be the most competitive, they will also likely be the most expensive places to do business. It is better to take an incremental, pragmatic approach to growth - learning lessons on the way - and build up your audience and capabilities on this journey.
Don’t forget the app marketing basicsOnce your app is launched, the journey is far from over. While organic growth is valuable, a robust paid user acquisition (UA) strategy is essential to maintain competitiveness in today's bustling app market. A strong foundation in App Store Optimization (ASO), coupled with strategic paid UA, can drive quality conversions and optimize costs.
Remember, these two strategies are interconnected. A well-optimized app store listing can amplify the impact of your paid campaigns, leading to increased organic visibility and downloads. By analyzing performance metrics, making data-driven decisions, and staying attuned to market trends, you can ensure your app's continued success.
Super apps offer a unique opportunity to disrupt the app landscape. By focusing on a strong USP, creative service bundles, and strategic marketing, you can carve out your niche in this booming market. But remember, it's not just about launching an app – it's about constant adaptation, innovation, and a deep understanding of your target market. Stay ahead of the curve, and you can build a super app that stands the test of time.
We've featured the best business intelligence platform.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
1. US 30th Anniversary Collection quick links
2. UK 30th Anniversary Collection quick links
3. Complete US stock checks
4. Complete UK stock checks
5. Live updates
The PS5 30th Anniversary Collection release date is nearly here and we're quietly hopeful - though not expectant - of seeing some PS5 30th Slim or PS5 30th Anniversary DualSense stock appearing.
As a result, we're starting early today by providing some information on what the 30th Anniversary Collection is, what products might be available tomorrow, providing all the best stock check links, and then keeping track of all the major retailers tomorrow when stock might go live.
It's been tough going since the PlayStation 30th Anniversary pre-orders opened, with the PS5 Slim 30th Anniversary pre-orders being particularly popular along with folks really wanting to know where to buy the PS5 30th Anniversary DualSense since too. While we have seen a restock for both since the initial pre-orders sold out, it's been barren ever since, and stock goes very quickly.
The 30th Anniversary Collection launched for pre-orders the same day as the PlayStation 5 Pro too - but the latter has proved easy to get, and anyone can buy a PS5 Pro right now from pretty much any retailer of their choosing.
Anyway, getting back to it: with us only less than 12 hours from release day in the UK, there's going to be nothing stopping retailers from going live with any launch day stock they may have been saving. If we see anything at all, then I think we're most likely to see PS5 30th Anniversary restocks and stock drops on the PS5 Slim console bundle, and the DualSense controller. However, it could also be a total no-show for restocks - we just don't know, but we're here to save you the work anyway, and make it as easy as possible for you to check quickly.
US 30th Anniversary Collection Quick LinksPS5 Slim Digital Edition 30th Anniversary Edition bundle ($499.99): Check stock at Target
We're leading with Target here for the PS5 Slim, as Target's listing page for the PS5 Slim 30th Anniversary Edition says to 'Check back on release date' which we will most gladly do in the hope of seeing more stock. Best Buy also has a 'coming soon' label, and Amazon never went live with its stock despite having a live listing page back on pre-order day, so check those two next for sure.
Check for stock at: Best Buy | Amazon | Walmart | PlayStation DirectView Deal
DualSense 30th Anniversary Edition ($79.99): Check stock at Target
It's the same with the Dualsense 30th Anniversary Edition controller: Target's listing page has got 'check back on release date' on it, seemingly hinting at more stock. Elsewhere, again, Best Buy has 'coming soon', and we have seen the controller crop up since pre-orders started at Amazon and Walmart too.
Check for stock at: Amazon | Best Buy | Walmart | GameStop | PlayStation DirectView Deal
PS5 Pro 30th Anniversary Edition bundle ($999.99): Check stock at PlayStation Direct
Perhaps the longest shot for 30th Anniversary Collection release day stock, the PS5 Pro truly is a limited edition thing with only 12,300 models being made by Sony (inexplicably). If this does come up for a restock it'll only be at PS Direct.View Deal
DualSense Edge 30th Anniversary Edition ($219.99): Check stock at PlayStation Direct
The limited-edition DualSense Edge proved very popular with the premium pad selling out when pre-orders came up. I haven't seen a restock of this since then either, but if that does happen on launch day, it'll only be at PlayStation Direct.View Deal
PlayStation Portal 30th Anniversary Edition ($219.99): Check stock at PlayStation Direct
I love the PlayStation Portal, and so did a whole heap of other folks too by the looks of it given the 30th Anniversary Edition Portal sold out. Again, this will only pop up at PS Direct if we do see some restock action on launch day.View Deal
PS5 Slim Digital Edition 30th Anniversary Edition bundle (£429.99/£433.99): Check stock at Amazon
The PS5 Slim 30th Anniversary Edition has not appeared anywhere properly since the pre-orders opened and sold out at other retailers on October 10. Amazon's stock only lasted 8 minutes then so we reckon it might be the best-placed retailer to offer launch-day stock if we see any at all. Check the other go-to retailers below too.
Check for stock at: Argos | Very | Currys | PlayStation Direct | EE Store (£539.99 bundle) | GameView Deal
DualSense 30th Anniversary Edition (£69.99): Check stock at Amazon
The 30th Anniversary DualSense popped up a couple of times briefly at Amazon since pre-orders opened there, so we think it might be as good a place as any to start. The controller has also appeared at Argos and Currys since so those are great places to look too.
Check for stock at: Argos | Currys | Very | Game | PS DirectView Deal
PS5 Pro 30th Anniversary Edition bundle (£959.99): Check stock at PlayStation Direct
The PS5 Pro bundle was barely visible to anyone when it went up for pre-order and we haven't seen any since. It's a long shot like it is in the US, but if it's going to come up anywhere it'll be at PS Direct.View Deal
DualSense Edge 30th Anniversary Edition (£219.99): Check stock at PlayStation Direct
Another exclusive-to-PS-direct-prosduct, the DualSense Edge pro controller will only appear at PlayStation Direct UK if it pops up on launch day we are almost certain of that.View Deal
PlayStation Portal 30th Anniversary Edition (£209.99): Check stock at PlayStation Direct
In the UK, the PlayStation Portal 30th Anniversary Edition proved exceptionally popular too - I'm going to be jealous when a colleague's unit arrives. It'll only be at PlayStation Direct, once again, if it does pop up.View Deal
Hello and welcome to our live build-up coverage to the PlayStation 30th Anniversary Collection release day!
We're here to look forward to and celebrate the collection's release date and cheer on fans who finally get their hands on the gear. But we're also here with fingers crossed that we might see some launch day stock drops or restocks at retailers who may have kept their powder dry on some inventory, or have specific release day units to sell.
It's by no means guaranteed but that's not going to stop us hoping and doing constant checks for you nonetheless./. We've got all the best and most important links to check above, and will be keeping this post as up to date as we can with restock news and links to stock itself should any pop up.
(Image credit: Sony/PlayStation)So what's all the fuss about then?
Well, back in September, along with the official announcement of the PS5 Pro, Sony revealed its pretty expansive 30th Anniversary Collection. This is a collection of limited edition PS5 hardware that's all styled in the original PS1 aesthetic and all looks excellently slick.
The grey-color stylings are perfect for igniting that nostalgia in long-term PlayStation fans and celebrating the original PlayStation made for perfect PS5 consoles and accessories.
The collections covered the PS5 Pro, PS5 Slim, DualSense, DualSense Edge, and PlayStation portal specifically, though you did get a DualSense charger in the same style with the PS5 Pro bundle - the latter only being made to the tune of 12,300 units in honor of the PS1'sd December 3 release date.
The Collection was extremely popular with plenty of hype building before pre-orders went live. When the opportunity to bag a pre-order did come for fans, it was all over in a matter of minutes, however, and many have been left disappointed since. Which is part of the reason for our live coverage starting today.
(Image credit: PlayStation/Sony)With only 12,300 units on offer, anticipation was intense and the PS5 Pro bundle barely stuck around for a few minutes when fans got past the virtual waiting room at PlayStation Direct.
The other bits of hardware soon followed suit, and all sold out pretty quickly indeed. When other retailers could release their own pre-orders two weeks later on October 10 (but only for the PS5 Slim and DualSense controller), the action was equally intense and stock flew off the virtual shelves. We have seen some fleeting restocks at retailers like Amazon and Target in the US, and Argos and Currys in the UK, but there's been nothing substantial or long-lasting.
Which brings us to today, 30th Anniversary Collection release date eve. I think that release date might well be a good bet for some stock drops or retailer restocks for the 30th Anniversary Collection. It's not guaranteed by any means, and is based largely on my own hunch and experience - there was indeed PS5 stock released on launch day after all, when few people actually expected it then - but we also have a couple of retailer-specific reasons to hold out hope. Let me explain...
If you like the idea of having a wall-mounted smart display for video calls, photo sharing, and family organization, Amazon has just launched two tempting new options – an upgraded Echo Show 15 and its biggest-ever smart screen, the new Echo Show 21.
The Echo Show series is already available in bewildering array of screen sizes, starting with the Echo Show 5. But these two new models are unique – not only are they Amazon's largest options, with 15-inch and 21-inch screens respectively, they're also designed to be wall-mountable. That makes them particularly suitable for busy kitchens with cluttered worktops.
The original Amazon Echo Show 15 launched in September 2021, but it's been updated with several upgrades, which are also in the new Echo Show 21. The first of those directly addresses one of our main criticisms of the first Echo Show 15, which is improved audio quality. Rather than two 1.6-inch tweeters, the displays combine dual 2-inch woofers with two 0.6-inch tweeters for a bassier sound.
Combined with some new noise reduction tech and improved auto-framing skills, this should make the Echo Show 15 and 21 much better for video calls. Amazon also says that both displays now offer double the field-of-view compared to the first Echo Show 15 and "65% more zoom", although this is presumably still just digital zoom.
The final two improvements should boost Echo Shows' streaming and smart home powers. They now have Wi-Fi 6E, which offers better speeds and lower latency compared to Wi-Fi 6. Depending on your router, that should boost the streaming experience from YouTube, Netflix, and Prime Video.
Image 1 of 2(Image credit: Amazon)Image 2 of 2(Image credit: Amazon)Lastly, the Echo Show 15 and 21 have a built-in smart home hub with Matter support. The original Echo Show 15 already supports Matter, but connecting to your various smart home devices should be easier thanks to direct support for Wi-Fi, Thread, or Zigbee protocols. Both are powered by an octa-core processor with Amazon's AZ2 neural network engine.
On the downside, the price of getting a large, wall-mountable Echo Show has gone up since the original Echo Show 15 was launched. The new Echo Show 15 costs $299 / £299 (around AU$460) – a $50 increase from the original – while the Echo Show 21 will set you back $399 / £399 (around AU$615).
We'll update this story when we get the official Australia pricing – and we'll also be testing both soon to let you know if they're new contenders for our best smart displays guide.
Analysis: The battle for your wall space heats up The new Amazon Echo Show 21's size (above) and improved speakers should make it a good option for video calls. (Image credit: Amazon)We've seen rumors grow in recent months that Apple is planning to launch a wall-mounted smart display in early 2025 – and it seems that Amazon has pre-empted that by launching its new Echo Show 21 and refreshing its Echo Show 15.
Both of these new displays are built on the same idea as Apple's rumored Apple Intelligence-powered screen – namely, letting you easily video call family, watch YouTube, and control your smart home from one unobtrusive kitchen hub.
As our Amazon Echo Show 15 review noted, the original kitchen nerve center wasn't perfect. Our main complaints were the disappointing speakers, average display with weak colors and brightness, lack of resizable widgets, and the fact that the tilt stand was sold separately.
Amazon certainly seems to have addressed the criticisms of audio quality in these new Echo Shows. But while they do both come with an Alexa Voice Remote, you still need to buy the pricey stand separately (for $99 / £99).
The other big unknown is the future state of voice assistants on these devices. The upgraded Alexa AI seems to be perpetually delayed, Apple won't be meaningfully upgrading Siri until iOS 18.4 in 2025, and Google is being characteristically non-committal with its smart home plans, despite our pleas for it to upgrade its aging Nest Hub.
Still, Amazon certainly has the most experience in making big, wall-mountable smart home displays – and on paper, the Echo Show 15 and 21 look like its best efforts yet.
You might also like