Another hardcoded credential for admin access has been discovered in a major software application - this time around it’s Cisco, who discovered the slip-up in its Unified Communications Manager (Unified CM) solution.
Cisco Unified CM is an enterprise-grade IP telephony call control platform providing voice, video, messaging, mobility, and presence services. It manages voice-over-IP (VoIP) calls, and allows for the management of tasks such as user/device provisioning, voicemail integration, conferencing, and more.
Recently, Cisco found login credentials hardcoded into the software, allowing access with root privileges. The bug is now tracked as CVE-2025-20309, and was given the maximum severity score of 10/10 (critical). The credentials were apparently used during development and testing, and should have been removed before the product shipped.
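For readers unfamiliar with the bug class: the snippet below is a minimal illustration of the anti-pattern and of the kind of pre-release scan that can catch it. It is a hypothetical sketch in Python - the file layout, variable names, and regex are our own assumptions, not anything from Cisco's codebase.

```python
import re
from pathlib import Path

# Hypothetical example of the anti-pattern: test credentials baked into
# source during development and accidentally shipped.
#   ROOT_USER = "root"
#   ROOT_PASS = "dev-build-2024"   # should never survive a release build

# Naive pre-release scan: flag assignments that look like embedded secrets.
SECRET_PATTERN = re.compile(
    r"""(?i)(pass(word)?|secret|token|api[_-]?key)\s*=\s*["'][^"']+["']"""
)

def scan_tree(root: str) -> list[str]:
    """Return 'file:line' locations of likely hardcoded credentials."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if SECRET_PATTERN.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    for hit in scan_tree("src"):
        print(hit)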
Cisco Unified CM and Unified CM SME Engineering Special (ES) releases 15.0.1.13010-1 through 15.0.1.13017-1 were said to be affected, regardless of the device configuration. There are no workarounds or mitigations, and the only way to address it is to upgrade the program to version 15SU3 (July 2025).
“A vulnerability in Cisco Unified Communications Manager (Unified CM) and Cisco Unified Communications Manager Session Management Edition (Unified CM SME) could allow an unauthenticated, remote attacker to log in to an affected device using the root account, which has default, static credentials that cannot be changed or deleted," Cisco said.
At press time, there was no evidence of abuse in the wild.
Hardcoded credentials are one of the more common causes of system infiltrations. Just recently, the Sitecore Experience Platform, an enterprise-level content management system (CMS), was found to contain a hardcoded password for an internal user. It was just one letter - 'b' - making it trivially easy to guess.
Roughly a year ago, security researchers from Horizon3.ai found hardcoded credentials in SolarWinds’ Web Help Desk.
Via BleepingComputer
As AI models grow larger and more capable, the supporting infrastructure must evolve in tandem. AI's insatiable appetite has Big Tech going as far as restarting nuclear power plants to support massive new datacenters, which today account for as much as 2% of global electricity consumption - more than the entire country of Germany uses.
But the humble power grid is where we need to start.
Constructing the computing superstructure to support AI tools will significantly alter the demand curve for energy and put increasing strain on electrical grids. As AI embraces more complex workloads across both training and inference, compute needs – and thereby power consumption – are expected to increase exponentially. Some forecasts suggest that datacenter electricity consumption could increase to as much as 12% of the global total by 2030.
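Some rough arithmetic puts those figures in perspective. The round numbers below are our own illustrative assumptions, not figures from any report: global electricity consumption is on the order of 29,000 TWh a year, and Germany's is roughly 500 TWh.

```python
# Back-of-envelope check of the datacenter electricity claims.
# Round figures below are assumptions for illustration, not from the report.
global_twh = 29_000        # approx. global electricity consumption per year, TWh
germany_twh = 500          # approx. German annual electricity consumption, TWh

datacenters_today = 0.02 * global_twh    # ~2% share claimed today
datacenters_2030 = 0.12 * global_twh     # ~12% share forecast for 2030

print(f"Datacenters today: ~{datacenters_today:.0f} TWh "
      f"({datacenters_today / germany_twh:.1f}x Germany)")
print(f"Datacenters 2030:  ~{datacenters_2030:.0f} TWh "
      f"({datacenters_2030 / germany_twh:.1f}x Germany)")
```

On those assumptions, today's 2% share works out to roughly 580 TWh, already above Germany's total, and a 12% share by 2030 would be several times it.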
Semiconductors form the cornerstone of AI computing infrastructure. The chipmaking industry has focused primarily on expanding renewable energy sources and delivering improvements in energy-efficient computing technologies. These are necessary but not sufficient – they cannot sustainably support the enormous energy requirements demanded by the growth of AI. We need to build a more resilient power grid.
Moving from Sustainability to Sustainable Abundance
In a new report, we call for a different paradigm – sustainable energy abundance – which will be achieved not by sacrificing growth, but by constructing a holistic energy strategy to power the next generation of computing. The report represents the work of major companies across the AI technology stack, from chip design and manufacturing to cloud service providers, as well as thought leaders from the energy and finance sectors.
The foundational pillar of this new strategy is grid decarbonization. Although not a new concept, in the AI era it requires an approach that integrates decarbonization with energy abundance, ensuring AI’s productivity gains are not sidelined by grid constraints. In practical terms, this entails embracing traditional energy sources like oil and gas, while gradually transitioning toward cleaner sources such as nuclear, hydro, geothermal, solar and wind. Doing this effectively requires understanding of the upgrades needed for the electricity grid to enable rapid integration of existing and new energy sources.
Consuming electricity from the grid naturally assumes the emissions profile of the grid itself. It should come as no surprise that emissions related to the grid represent the single biggest component of the emissions bill facing any given company. In the conventional approach to sustainability, companies focused more on offsetting emissions derived from the grid rather than sourcing the grid with cleaner (or carbon-free) energy. To support the coming scale-out of AI infrastructure, access to a clean grid will be one of the most important aspects in reducing carbon footprint.
Strategically selecting locations for datacenters and semiconductor fabs will be critical. Countries and regions have a varying mix of clean energy in the power grid, which impacts their carbon emission profile. For example, the United States and France generate a similar percentage of their overall electricity from renewable sources. However, the United States has a significantly higher country emission factor, which represents the direct carbon emission per kilowatt-hour of electricity generated.
This is because most of the electricity in France is generated through nuclear power, while the United States still gets a significant percentage of electricity supplied through coal and natural gas. Likewise, there could be significant differences within a country such as the United States, with states like California having a higher mix of renewables compared to some other states.
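In effect, a country's emission factor is a weighted average over its generation mix. The sketch below illustrates the calculation; the per-source intensities (grams of CO2 per kWh) and the two generation mixes are rough illustrative assumptions, not official statistics.

```python
# Weighted-average grid emission factor (gCO2/kWh).
# Per-source intensities and generation mixes are illustrative assumptions.
INTENSITY = {"nuclear": 12, "hydro": 24, "wind": 11, "solar": 45,
             "gas": 490, "coal": 820}

def emission_factor(mix: dict[str, float]) -> float:
    """mix maps source -> share of generation (shares sum to 1.0)."""
    return sum(share * INTENSITY[src] for src, share in mix.items())

france_like = {"nuclear": 0.65, "hydro": 0.10, "wind": 0.10,
               "solar": 0.05, "gas": 0.10}
us_like = {"nuclear": 0.18, "hydro": 0.06, "wind": 0.10, "solar": 0.06,
           "gas": 0.40, "coal": 0.20}

print(f"France-like grid: ~{emission_factor(france_like):.0f} gCO2/kWh")
print(f"US-like grid:     ~{emission_factor(us_like):.0f} gCO2/kWh")
```

Even with identical renewable shares, the nuclear-heavy mix comes out several times cleaner per kilowatt-hour, which is the point the France/US comparison makes.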
Driving Innovation in Semiconductor Technology
A truly resilient grid strategy must start with expanded capacity for nuclear, wind, solar, and traditional forms of energy, while driving a mix shift to cleaner sources over time. However, to achieve this enhanced capacity, it will be necessary to invest in disruptive innovations. Transmission infrastructure must be modernized, including upgraded lines, substations and control systems. Likewise, the industry must take advantage of smart distribution technologies, deploying digital sensors and AI-driven load management techniques.
Semiconductors have an important role to play. Continued growth of GPUs and other accelerators will drive corresponding growth in datacenter power semiconductors, along with increasing semiconductor content in other components such as the motherboard and the power supply.
We forecast that the datacenter power semiconductor market could reach $9 billion by 2030, driven by an increase in servers as well as the number of accelerators per server. Approximately $7 billion of the opportunity is driven by accelerators, with the rest coming from the power supply and other areas. As the technology matures, we believe gallium nitride will play an important role in this market, given its high efficiency.
As the grid incorporates increasing levels of renewables, more semiconductors will be needed for energy generation. Silicon carbide will be important for solar generation and potentially wind as well. We estimate that renewable energy generation could grow to more than a $20 billion market for semiconductors by 2030. A similar opportunity exists for smart infrastructure such as meters, sensors and heat pumps.
Shifting Incentives for Sustainable Growth
Restructuring the power grid offers the single biggest opportunity to deliver sustainable, abundant energy for AI. Modernizing the power grid will require complex industry partnerships and buy-in from company leadership. In the past, sustainability initiatives were largely regarded as a compliance checkbox item, with unclear ties to business results. A new playbook is needed to enable the growth of AI while shifting business incentives toward generation, transmission, distribution and storage of clean energy and modernization of the power grid.
To truly harness the transformative productivity and prosperity potential of AI, we need a comprehensive sustainability strategy that expands clean energy capacity, modernizes energy infrastructure, and maintains diverse energy generation sources to ensure stable, abundant power for continued technological innovation. When combined with progress in energy-efficient computing and abatement measures, this holistic approach can realistically accelerate the pursuit of sustainability while mitigating the risk of curtailing growth due to insufficient energy resources.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Where we are today is not hybrid cloud rebranded. Hybrid was a transition strategy. Distributed is an entirely new operating environment, where cloud infrastructure and services are physically located in multiple, dispersed environments: on-premise data centers, multiple public clouds, edge locations, and sovereign zones. Yet they are managed as a single, cohesive system. Unlike centralized or hybrid approaches, distributed cloud treats geographic and architectural diversity as a feature, not a compromise.
This shift happened gradually. Organizations reacted to new regulatory frameworks like GDPR and FedRAMP, which enforce data locality and privacy standards that centralized architectures can’t always support. Meanwhile, latency-sensitive applications, like real-time analytics, pulled compute closer to the user, pushing cloud computing infrastructure to the edge. And cost became a concern: 66% of engineers report disruptions in their workflows due to lack of visibility into cloud spend, with 22% saying the impact is equivalent to losing a full sprint.
Distributed cloud addresses all of these challenges, enabling businesses to comply with regulations, improve performance, localize deployments, and maintain operational continuity in one architectural shift. But managing it to ensure that a distributed framework actually reaches its full potential requires serious rethinking. Infrastructure has to be modular and versioned by design, not patched together.
Dependencies need to be explicit, so changes don’t cascade unpredictably. Visibility should extend beyond individual cloud providers, and governance has to follow workloads wherever they run. Yet most organizations today operate without these principles, leaving them struggling with fragmentation, limiting their scalability, opening the door to security and competitive threats, and slowing innovation.
Old Tools, New Problems
There's growing evidence to show just how widespread the shift toward distributed cloud has become: 89% of organizations now use a multi-cloud strategy, with 80% relying on multiple public clouds and 60% using more than one private cloud. The reasons are strategic: reducing vendor lock-in, complying with data localization laws, and improving performance at the edge.
But the consequences are operational. Fragmentation creates chaos. Teams struggle with version control, lifecycle inconsistencies, and even potential security lapses. Infrastructure teams become gatekeepers, and developers lose confidence in the systems they rely on.
Most organizations are still applying traditional centralized cloud management principles to a distributed world. They rely on infrastructure as code (IaC), stitched together with pipelines and scripts that require constant babysitting. This approach doesn’t scale across teams and regions. IaC also introduces new dependencies between layers that are invisible until they break.
All in all, the approach is problematic: 97% of IaC users experience difficulties, with developers often viewing IaC as a “necessary evil” that slows down application deployment. The result is a kind of paralysis: any change carries too much risk, so nothing changes at all.
A New Operating Model for a Fragmented World
Solving this requires more than another tool. It requires a new operating model and mindset. Infrastructure should be broken into modular, composable units with clear boundaries and pre-defined dependencies. Teams should be able to own their layer of the stack without impacting others. Changes should be trackable, auditable, and safe to automate.
Platforms that offer a single control plane across environments can make this possible. They turn complexity from a liability into a strategic asset: one that offers flexibility without sacrificing control. This is where emerging approaches like blueprint-based infrastructure management offer a compelling path forward. Instead of expecting AI or DevOps teams to connect workflows, infrastructure can be transformed into modular components. Think Lego bricks, except it’s a chunk of code that’s versioned, pre-approved, and reusable.
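As a purely hypothetical sketch of what such a "brick" might look like in code - the names, fields, and checks below are invented for illustration and don't reflect any specific platform:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Blueprint:
    """A versioned, pre-approved infrastructure building block.

    Hypothetical model: real blueprint platforms differ, but the core
    ideas are the same -- explicit versions, explicit dependencies,
    and an approval gate before reuse.
    """
    name: str
    version: str
    depends_on: tuple[str, ...] = ()   # explicit, so changes don't cascade blindly
    approved: bool = False

def plan(blueprints: list[Blueprint]) -> list[str]:
    """Refuse to deploy anything unapproved or missing a dependency."""
    known = {f"{b.name}@{b.version}" for b in blueprints}
    errors = []
    for b in blueprints:
        if not b.approved:
            errors.append(f"{b.name}@{b.version}: not approved for reuse")
        for dep in b.depends_on:
            if dep not in known:
                errors.append(f"{b.name}@{b.version}: missing dependency {dep}")
    return errors

stack = [
    Blueprint("network-baseline", "1.4.0", approved=True),
    Blueprint("gpu-cluster", "2.0.1",
              depends_on=("network-baseline@1.4.0",), approved=True),
]
print(plan(stack) or "plan is clean")
```

The point of the sketch is the shape, not the specifics: every block carries its version and dependencies, and nothing unapproved or dangling gets deployed.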
In this model, automation doesn’t mean giving up control. It means enabling teams to move faster within guardrails that are defined by architecture. The result is a system that scales, even across regulatory zones, business units, and tech stacks. This kind of shift doesn’t eliminate complexity, but it makes it manageable, legible, and strategic.
And we’re already seeing the rise of blueprint-based and modular infrastructure strategies across the industry—from Cisco’s UCS X-Series, which decouples hardware lifecycles for flexible upgrades, to Microsoft Azure Local’s unified control plane for hybrid deployments, and the growing adoption of platform engineering as a discipline for building reusable, scalable systems. It’s an evolution that just makes sense.
The Strategic Advantage of Managing Well
Distributed cloud isn't something organizations opt into. It's the state they're already operating in. The real differentiator now is how well it's managed. Scaling infrastructure has always been achievable; distributing infrastructure, by contrast, demands a different kind of investment: in architecture, workflows, and operational discipline.
Without a system built specifically for elasticity, decoupling, and visibility across environments, complexity quietly erodes both speed and trust. Infrastructure becomes harder to change, risks accumulate invisibly, and innovation slows to a crawl.
The right foundation turns that story around. Distributed infrastructure, when managed deliberately, doesn’t have to become a barrier. It becomes a catalyst.
Elastic systems allow teams to localize deployments without fragmenting control. Decoupled architectures enable parallel innovation across business units, cost centers, and regions without introducing instability. Unified visibility makes governance a continuous function, not an afterthought. Managing complexity isn’t about eliminating it; it’s about structuring it so that distributed systems can scale sustainably, adapt safely, and operate predictably.
In that context, infrastructure becomes a lever for scale instead of a source of drag. Managing it well isn’t just an operational need. It’s a strategic advantage.
There are mistranslations, and then there are ChatGPT subtitles that appear to have been deliberately written to upset people. That's what appeared to happen with some of the translated Japanese shown on screen during episodes of anime recently spotted and shared online.
The first example to gain attention online made it clear that ChatGPT was the culprit of awkward and outright wrong translations during an episode of Necronomico and the Cosmic Horror Show, Crunchyroll’s new anime series about occult weirdness and internet brain rot. It literally included the line "ChatGPT said" in both the German and English subtitles.
Fans started posting screenshots of bizarre sentence structures and dialogue that they had spotted, and now had an explanation and a source of blame for. Misspelled character names, inconsistent phrasing, and just outright made-up words and phrases were spotted everywhere.
(Image credit: Pixel/Bluesky)
"I only watched about two minutes, and was so frustrated at the subs having errors that even a normal machine translation wouldn't have given."
- @hilene.bsky.social, 2025-07-03
In case that wasn't enough, Crunchyroll's president, Rahul Purini, had told Forbes in an interview only a few months ago that the company had no plans to use AI in the "creative process." They weren't going to mess with voice acting or story generation, he said. AI would be restricted to helping people find shows to watch and to recommending new shows based on what viewers had previously enjoyed.
Apparently, ChatGPT translations don't count under that rubric, but localization isn't a mechanical process, as any human translator could explain.
Localization art
"Hey now, show some respect for the most storied of all anime subbers: Translator's name"
- @viridianjcm.bsky.social, 2025-07-03
Localizing is a big deal among anime fans. Debates over whether certain subtitles are too literal, too loose, or too limited in their references to be understood outside Japan have raged for decades. But no one on any side of those debates is likely to claim these massive errors by ChatGPT are okay.
Crunchyroll hasn’t officially clarified how this happened, but reports suggest the subtitles came from the company's Japanese production partner. The generated subtitles may have been given to Crunchyroll to air without Crunchyroll being responsible for making them.
As several people pointed out, when you pay to stream anime from a major platform like Crunchyroll, you're expecting a certain baseline of quality. Even if you disagree with a localizer's choices, you can at least understand where they are coming from. The fact that apparently no one read the ChatGPT subtitles before they were uploaded to a global audience is harder to justify.
Translation is an art. Localization isn’t just about replacing Japanese with English. It’s about tone, cadence, subtext, and making a character sound like themselves across a language barrier. AI can guess what words go where, but it doesn’t know the characters or the show. It's like a little translation dictionary, which is fine as far as it goes, but it can't make a conversation make sense without a human piecing together the words. A few fans are outraged enough to call for unsubscribing and going back to sharing fansubs, the homebrewed subtitles unofficially written and circulated back in the days of VHS. In other words, the very thing Crunchyroll once helped make obsolete by offering higher-quality, licensed versions of shows.
At a time when more people are watching anime than ever before, Crunchyroll is apparently willing to gamble that most of us won’t notice or care whether the words characters say make any sense. If Crunchyroll wants to keep its credibility, it has to treat localization not as a tech problem to optimize, but as a storytelling component that requires human nuance and judgment. Otherwise, it might just be "gameorver" for Crunchyroll's reputation.
(Image credit: @pi8you/Bluesky)
Canada has ordered Chinese surveillance giant Hikvision to cease its operations in the country, citing national security concerns.
The ban follows a formal review conducted under the Investment Canada Act and marks a significant move against a foreign technology firm.
"The government has determined that Hikvision Canada Inc.’s continued operations in Canada would be injurious to Canada’s national security," said Industry Minister Mélanie Joly.
International pressure and rising suspicion
Hikvision, one of the world's largest producers of surveillance cameras, has operated in Canada since 2014.
However, its expansive global reach and ties to state-linked projects in China have long drawn concern from Western countries.
Although the government has not made public the specific reasons behind its decision, it has stated intelligence and security assessments played a central role.
This silence is likely to fuel speculation, much like in previous crackdowns on Huawei, where classified intelligence was used to justify broad commercial restrictions.
The comparison to Huawei is not unwarranted. Hikvision now finds itself under the same kind of scrutiny that led to Huawei’s ejection from 5G infrastructure projects across the Five Eyes nations.
The US, UK, and Australia have all already taken measures against Hikvision, particularly over claims its cameras have been used to surveil Uyghur Muslims in China’s Xinjiang region, allegations that Beijing denies.
The FBI has also warned about malware targeting webcams, and some Western officials argue that Chinese-made IoT hardware poses an even greater surveillance risk than TikTok, an app some of them already regard as spyware.
Unsurprisingly, Hikvision "strongly disagrees" with Canada's decision, saying, "We believe it lacks a factual basis, procedural fairness, and transparency." The company claims the move appears "to be driven by the parent company's country of origin."
With geopolitical tensions continuing to define much of the West’s approach to Chinese firms, decisions like Canada’s risk being seen less as technology-based judgments and more as political posturing.
Hikvision claimed it cooperated fully with authorities and submitted all requested documents, but this did not alter the outcome.
It’s unclear how many public buildings in Canada still use Hikvision devices, but Joly has committed to reviewing and phasing out any remaining equipment.
“I strongly urge Canadians to take note of this decision and make their own decisions accordingly,” she warned.
The Canadian government appears to be focusing on surveillance risks, a stance that calls into question the trustworthiness of smart devices such as webcams and parental control solutions.
As more homes and workplaces adopt smart cameras and monitoring tools, the line between convenience and intrusion becomes thinner.
If bans become more commonplace, vendors may need to prove more than just feature strength to remain competitive.
Whether you're selecting a home monitoring system or seeking the best antivirus software, the politics of hardware and software are becoming harder to ignore.
Via Economic Times
Rugged devices are typically defined by their ability to survive harsh conditions, not their computing power.
The Getac B360 Plus attempts to challenge this expectation by introducing AI acceleration and high-end specs into a fully rugged form factor, but the practical benefits of this combination may not be as clear-cut as the branding suggests.
At the core of the B360 Plus is Intel’s new Core Ultra series, with options ranging from Ultra 5 to Ultra 7 and up to 32GB of LPDDR5X memory.
AI capabilities meet rugged expectations
Built-in AI acceleration through Intel AI Boost claims up to 48 TOPS of performance, paired with Arc integrated graphics.
While these specs may appear impressive, how well such AI capabilities translate to real-world edge computing tasks in harsh field environments remains an open question.
Engineered for physical resilience, this laptop meets MIL-STD-810H, MIL-STD-461G, and IP66 standards, meaning it can handle drops, salt fog, and wide temperature swings.
Getac also offers optional ANSI/UL 121201 certification for hazardous areas, meaning it fits squarely within expectations for a best rugged laptop candidate.
The Getac B360 Plus comes with a 13.3-inch display that supports 1400 nits of brightness and is optimized for outdoor use.
It also features a LifeSupport dual-battery system, which allows hot-swapping without shutting the device down.
Connectivity options include Wi-Fi 7, Bluetooth 5.4, optional 4G and 5G, GPS, and a variety of physical ports including Thunderbolt 4, HDMI 2.0, and even legacy connectors like VGA and serial.
The laptop also comes with dual SIM support and a 1D/2D barcode reader, backed by Getac’s Barcode Manager software.
While the barcode scanner may be convenient, regular rugged tablet users may still prefer dedicated devices with simpler, more focused roles.
Getac also added security features such as TPM 2.0, optional biometric authentication, and enterprise software such as Absolute Persistence and Secure Endpoint.
These additions suggest an IT-centric use case, but again, may be overkill for users who simply need a machine that doesn’t fail in the rain or dust.
The B360 Plus is an ambitious attempt to bridge rugged hardware and high-performance computing, but whether the two belong together remains to be seen.
At the time of writing, there is no word on pricing, but hopefully a unit will be available for review in the coming months.
ViewSonic's new VG41V Series marks the company's latest effort to carve out space in the crowded business monitor segment.
These monitors target video conferencing and productivity users by integrating Windows Hello facial recognition, a built-in 5MP webcam, and enhanced ergonomic designs.
On paper, they check a lot of boxes, but in practice, the lineup leaves a few open questions, particularly around display resolution and power delivery.
Productivity perks can't mask a resolution compromise
The VG41V Series includes three models: the 24-inch VG2441V and two 27-inch options, the VG2741V and VG2741V-2K.
While the VG2741V-2K supports QHD (2560x1440) resolution, the VG2741V and VG2441V only support FHD (1920x1080) resolution.
None of them reach 4K resolution, which is increasingly expected in higher-end office monitors - but still, the lineup introduces thoughtful touches like infrared-enabled facial recognition via Windows Hello.
The 120 Hz refresh rate and Eye ProTech+ (flicker-free technology and low blue light) make extended sessions more bearable, features that align well with ViewSonic’s productivity branding.
This series enables secure, instant login to digital workspaces, which could appeal to enterprises managing device access without passwords.
It also integrates a 5MP webcam, tiltable by ±5°, and includes a physical privacy cover, while dual microphones and stereo speakers support a more complete conferencing setup.
For remote workers or office-based teams regularly joining Zoom or Teams calls, this package could provide a plug-and-play convenience that some will value.
However, powering the VG41V Series via USB-C introduces a compromise.
While USB-C is undeniably a flexible standard for video, data, and charging, its implementation here feels awkward.
ViewSonic offers dual USB-C ports, one upstream for data and video, and one downstream that supports just 15W charging.
That’s enough to charge a phone or small accessory, but it won’t power a laptop or meet the needs of many desk setups.
This could frustrate users relying on a single-cable solution, especially Apple users searching for the best monitor for Mac Mini or MacBook Pro.
That said, the series performs well on ergonomics, with support for height, tilt, swivel, and pivot adjustments. It also includes a compact stand to maximize desk space.
ViewSonic’s VG41V Series uses FSC-certified, recyclable packaging and meets EPEAT and ENERGY STAR standards.
The VG41V Series will arrive in select markets in North America, Asia, and Europe in the coming months.
Pricing for the VG41V Series remains unknown at the time of writing, making it difficult to judge whether the trade-offs in resolution and power delivery are ultimately justified.
Samsung's next Galaxy Unpacked event will take place on July 9 and will mark the company's big summer showcase.
It’s there we expect to see follow-ups to the Samsung Galaxy Z Fold 6 and Galaxy Z Flip 6, along with other products, likely updates to the Galaxy Watch lineup.
This Unpacked will be the third one of the year; the first was the Unpacked that saw the launch of the Samsung Galaxy S25 phones, then the second was the full launch of the Galaxy S25 Edge.
So read on for how to watch the next Samsung Galaxy Unpacked and a brief overview of what to expect.
How to watch Samsung Galaxy Unpacked
The next Galaxy Unpacked showcase will be held on Wednesday, July 9 at 7am PT / 10am ET / 3pm BST and midnight July 10 in Australia.
You’ll be able to watch a live stream of the launch on Samsung’s own website. However, a simpler route would be to visit the brand’s YouTube channel and watch the showcase there, or use the video embedded below.
TechRadar will also be at the showcase, where you can get updates live from Unpacked on our TechRadar TikTok account. And we’ll be covering Unpacked live as it happens, so make sure to check back with TechRadar for all the news, views, reactions, and more.
What to expect at July 9th's Galaxy Unpacked
The 'summer' Unpacked events tend to be all about Samsung's latest foldable phones, and we expect this event to be no different with the reveal of the Samsung Galaxy Z Fold 7, Galaxy Z Flip 7, and perhaps a third phone, say a Galaxy Z Fold Ultra.
In general, the rumors so far are pointing towards iterative design changes and a few spec improvements, but nothing hugely radical in terms of design or performance. Samsung is likely to tout new Galaxy AI features and use the new foldable phones to showcase them on, though such features are very likely to roll out to other Galaxy phones and devices too.
We’re also expecting to see new smartwatches, likely the Galaxy Watch 8 and its ‘Classic’ stablemate, and perhaps a Galaxy Watch Ultra 2.
An evolved design has been tipped for the Watch 8, with a potential ‘squircle’ design reminiscent of the Galaxy Watch Ultra’s square-meets-circle aesthetic, and the potential return of a rotating bezel.
Expect new AI-centric fitness features and tools in the software for these watches, but I’d also expect such features to roll out to other Galaxy Watch models.
OWC has announced Guardian, a compact portable SSD focused on delivering strong hardware encryption and fast transfer speeds.
The OWC Guardian connects via USB 3.2 Gen 2 (10Gbps) and delivers up to 1000MB/s in real-world read and write speeds, making it capable of handling 4K video files, media archives, and quick backups.
Designed with 256-bit AES OPAL hardware encryption, the Guardian handles data protection at the hardware level.
Seamless encryption without system slowdown
The encryption process starts automatically when data is written and is reversed (decrypted) when accessed by an authorized user.
This avoids reliance on host system resources, preserving performance even during large data transfers.
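The Guardian performs all of this in dedicated hardware, keyed by the user's PIN or passphrase, so the sketch below is strictly conceptual: a software illustration of transparent encrypt-on-write / decrypt-on-read using AES-256 (here AES-GCM via Python's cryptography package), not OWC's implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Conceptual illustration only: the OWC Guardian does this in dedicated
# hardware with no host software involved. AES-GCM stands in for the
# drive's AES engine.
key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aes = AESGCM(key)

def write_block(plaintext: bytes) -> bytes:
    """Encrypt transparently as data is 'written' to the drive."""
    nonce = os.urandom(12)
    return nonce + aes.encrypt(nonce, plaintext, None)

def read_block(stored: bytes) -> bytes:
    """Decrypt transparently as an authorized user 'reads' it back."""
    nonce, ciphertext = stored[:12], stored[12:]
    return aes.decrypt(nonce, ciphertext, None)

data = b"project-footage.mov chunk"
assert read_block(write_block(data)) == data
```

Because the real cipher runs on the drive itself, the host CPU never sees this work, which is why throughput holds up even during large transfers.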
It's one of the few devices in its price range that combines both speed and encryption without demanding software installation, which may place it among the best secure drives for routine professional use.
OWC says the device is compatible with macOS, Windows, Linux, and even iPadOS.
It includes a touchscreen interface, which serves as the primary method for user authentication through PIN or passphrase, but also allows access to additional features such as multi-user profiles, read-only mode, auto-timeout, secure erase, and a randomized keypad layout.
Physically, the drive is housed in anodized aluminum for improved heat dissipation and general durability.
However, unlike some of the best rugged hard drives, the Guardian lacks an IP rating for dust or water resistance.
This may limit its reliability in field conditions or outdoor environments, where environmental protection is a priority.
The Guardian comes with a 1TB OWC Aura Pro IV NVMe SSD (960GB usable) but is also available in higher capacities, including a 4TB version.
The internal firmware reserves a portion of space for data correction and redundancy.
It’s formatted in APFS for Apple devices by default, but can be reformatted for Windows or Android using OWC's Drive Guide utility.
However, full cross-platform read/write access requires separate software like MacDrive.
“We designed the OWC Guardian for anyone who needs simple, reliable data protection on the go, but without the typical hassles,” said Larry O’Connor, CEO and Founder, Other World Computing (OWC).
“Whether you're transferring a huge file in the boardroom, backing up data at the local coffee shop, or editing a 4K video for your latest content drop, you shouldn’t have to choose between security, speed, and ease of use. The OWC Guardian delivers all three, in a rugged, intuitive design built to travel.”
Pricing starts at $219.99 for the 1.0TB model while the 2.0TB and 4.0TB models cost $329.99 and $529.99, respectively.
Huawei is the latest in a growing list of automakers and tech companies that are exploring the possible benefits of fitting an EV with solid-state batteries, with the likes of BMW, Mercedes-Benz, VW, BYD and Stellantis all publicly touting the tech.
Car News China reports that the tech giant has filed a patent that outlines a solid-state battery architecture with energy densities between 400 and 500 Wh/kg - roughly two to three times what today's production EV batteries achieve.
Currently, Huawei doesn't manufacture its own branded vehicles in China, but instead works with various automakers to apply some of its existing technologies to vehicles.
According to the patent application, its batteries use a method that ‘dopes’ sulfide electrolytes with nitrogen to address side reactions at the lithium interface. However, it is keeping the remainder of its technology close to its chest, as the race to mass-produce solid-state battery technology safely and at scale is well and truly on.
What's more, the company theorizes that it is able to eke out some 1,864 miles of range from its battery technology, as well as complete the industry-standard 10-80% charge in less than five minutes.
However, some industry experts are skeptical of those bold claims, pointing out that it is a leap of more than three times the current range abilities of the most impressive electric vehicles on sale today.
Speaking to Electrek, Yang Min-ho, professor of energy engineering at Dankook University, said that such performance "might be possible in lab conditions" but went on to explain that reproducing the results in the real world, where energy loss and thermal management play a key role, would be "extremely difficult".
The professor was also quick to point out that the nitrogen doping method is a "standard technique" that, again, can be applied in a laboratory environment but is currently difficult to scale to a point where it can be mass produced to meet the demands of global automakers.
Analysis: big headlines, small steps
(Image credit: Porsche)
Understandably, China is basking in its EV dominance at the moment and it isn't afraid to publicize innovations that have the potential to change the game.
MegaWatt charging is one of the more recent topics, but solid-state batteries have also been bubbling away under the surface for some time. China will undoubtedly be first to this technology, but it likely won't arrive as soon as many domestic companies make out, nor prove as impressive.
What’s more, the 1,800-mile figures seem largely pointless, as it would require a huge battery pack that is going to add excess weight and blunt driving dynamics in a vain attempt to dispel notions of range anxiety.
Should Huawei be able to nail energy densities between 400 and 500 Wh/kg, it would be far better placed producing smaller packs that can still offer an impressive range without the need for enormous, expensive batteries.
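Some back-of-envelope numbers show why. The efficiency figure below is our own illustrative assumption for a large EV, not anything Huawei has published:

```python
# Back-of-envelope: what an 1,864-mile pack might weigh.
# Efficiency assumption is illustrative; real-world figures vary widely.
range_miles = 1_864
efficiency_mi_per_kwh = 3.5            # assumed, typical of a large EV
density_wh_per_kg = 450                # midpoint of Huawei's claimed 400-500 Wh/kg

pack_kwh = range_miles / efficiency_mi_per_kwh          # ~533 kWh
cell_mass_kg = pack_kwh * 1000 / density_wh_per_kg      # ~1,180 kg of cells alone

print(f"Pack size: ~{pack_kwh:.0f} kWh, cell mass: ~{cell_mass_kg:.0f} kg")
```

Even at the claimed density, the cells alone would weigh over a tonne - more than double the mass of a typical 100kWh pack today - before adding cooling, structure, and packaging.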
When an EV can easily cover 600 miles on a single charge, range anxiety largely becomes obsolete, as few drivers want to sit behind the wheel for hours on end without a break. Plus, with the public charging network expanding and improving year-on-year, it is now arguably easier than ever to find a spot to plug in and stretch your legs.
The Anthropic Model Context Protocol (MCP) Inspector project contained a critical-severity vulnerability which could have allowed threat actors to mount remote code execution (RCE) attacks against host devices, experts have warned.
Best known for its Claude conversational AI model, Anthropic developed MCP, an open source standard that facilitates secure, two-way communication between AI systems and external data sources. It also built Inspector, a separate open source tool that allows developers to test and debug MCP servers.
Now, it was reported that a flaw in Inspector could have been used to steal sensitive data, drop malware, and move laterally across target networks.
Apparently, this is the first critical-level vulnerability in Anthropic's MCP ecosystem, and one that opens up an entirely new class of attacks.
The flaw is tracked as CVE-2025-49596, and has a severity score of 9.4/10 - critical.
"This is one of the first critical RCEs in Anthropic's MCP ecosystem, exposing a new class of browser-based attacks against AI developer tools," Avi Lumelsky from Oligo Security explained.
"With code execution on a developer's machine, attackers can steal data, install backdoors, and move laterally across networks - highlighting serious risks for AI teams, open-source projects, and enterprise adopters relying on MCP."
To abuse this flaw, attackers need to chain it with "0.0.0.0 Day", a two-decade-old weakness in web browsers that enables malicious websites to reach services on a victim's local network, The Hacker News explains, citing Lumelsky.
By creating a malicious website, and then sending a request to localhost services running on an MCP server, attackers could run arbitrary commands on a developer’s machine.
Anthropic was notified about the flaw in April this year and released a patch on June 13, bringing the tool to version 0.14.1. The fix adds a session token to the proxy server, along with origin validation, rendering such attacks moot.
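To see why those two additions close the hole, here is a minimal sketch of the checks as we understand them - the header name, token value, and port are our own assumptions for illustration, not Anthropic's actual patch code:

```python
import hmac

# Minimal sketch of the two mitigations reportedly added in Inspector
# 0.14.1; assumed behavior for illustration, not Anthropic's patch code.
SESSION_TOKEN = "generated-per-session-secret"        # hypothetical value
ALLOWED_ORIGINS = {"http://localhost:6274", "http://127.0.0.1:6274"}

def authorize(headers: dict[str, str]) -> bool:
    """Reject requests from foreign web origins or without the token."""
    origin = headers.get("Origin", "")
    token = headers.get("X-MCP-Proxy-Auth", "")       # hypothetical header name
    if origin and origin not in ALLOWED_ORIGINS:
        return False          # blocks a malicious website's cross-site request
    return hmac.compare_digest(token, SESSION_TOKEN)  # blocks token-less requests

# A drive-by page abusing 0.0.0.0 Day has the wrong Origin and no token:
print(authorize({"Origin": "https://evil.example", "X-MCP-Proxy-Auth": ""}))  # False
```

A drive-by page can still fire requests at localhost, but it can neither spoof an allowed origin nor guess the per-session secret, so the proxy simply refuses to act on them.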
Volkswagen has announced that it has expanded its Transporter line-up with battery electric variants of the popular Shuttle and Kombi models of its commercial vehicle range – adding a more practical and robust van option to its existing ID Buzz model.
While the ID Buzz captured the public’s imagination with its funky, retro-inspired looks, it lacked the hauling capabilities of its Transporter cousins. Even the ID Buzz seven-seater variant struggles with room for lugging bulky items.
The Kombi, on the other hand, has historically proven a big hit with professionals and families alike, purely because it can transport a family of five, as well as several mountain bikes, a tent and a dog without breaking a sweat.
The electrified iterations add a 65kWh lithium-ion battery, which results in 194 miles of range for the Shuttle and 196 miles for the Kombi, presumably because the former is heavier, as it can be optioned with an impressive nine seats.
(Image credit: Volkswagen)
Both the fully-electric Kombi and Shuttle will be available in either short or long wheelbase versions, with the former able to handle a max payload of 896kg and the latter available with the aforementioned nine seats, as opposed to eight as standard.
The equipment levels are also generous, with things like heated front seats, a 13-inch touchscreen and front and rear-view cameras all coming as standard on the entry-level models.
The marque also announced this week that the AirConsole app, which allows users to play a range of 15 arcade games when parked, is now available as an over-the-air update in current generation ID Buzz, Caddy, Multivan, California and Crafter models.
This will likely roll out across the latest Transporter range in the near future.
Analysis: electric vans still have some way to go
(Image credit: Volkswagen)
In the UK, the cheapest electrified Transporter Kombi retails at £53,404 (around $73,183 / AU$111,423), undercutting the cheapest ID Buzz, which costs £59,135 (around $81,035 / AU$123,381).
But don't get too excited, because the ID Buzz offers up to 293 miles of electric range and 200kW charging capabilities, thanks to its 77kWh battery in the standard wheelbase version. It can also be optioned in a spicy GTX model that delivers 335bhp for some serious acceleration.
The electrified Transporter Kombi and Shuttle, on the other hand, use a single electric motor that develops 134bhp, which is at the lower end of what the petrol and diesel counterparts offer – even though the electrified versions are heavier.
It can also only charge at speeds of up to 125kW, meaning a 10-80% charge will take at least 39 minutes.
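The arithmetic behind that figure is straightforward, assuming the quoted 65kWh pack: charge curves taper as the battery fills, so the average rate sits well below the 125kW peak.

```python
# Why a 125 kW peak still means a ~39-minute 10-80% charge (65 kWh pack).
pack_kwh = 65
energy_added = pack_kwh * (0.80 - 0.10)          # 45.5 kWh for 10-80%

ideal_minutes = energy_added / 125 * 60          # ~22 min at a constant 125 kW
avg_power = energy_added / (39 / 60)             # ~70 kW average over 39 min

print(f"Energy added: {energy_added:.1f} kWh")
print(f"At constant peak: {ideal_minutes:.0f} min; "
      f"39 min implies ~{avg_power:.0f} kW average (charge-curve taper)")
```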
As a long-standing owner of VW's various ICE Transporter models, I've found very little gets close to the practicality, load-lugging abilities and relaxed drive that the German marque offers.
It’s a controversial opinion, but after living with the ID Buzz for a week, I think it looks a little awkward, especially in the longer wheelbase seven-seater versions, and the interior roominess just isn’t enough to haul motorcycles or masses of camping kit, as well as the entire family.
The latest battery electric versions of the popular Transporter get close to Vee Dub van perfection for me; it's just a shame about the limited range, performance and charging speeds. Oh, and that price.
It's no secret that Nintendo has been at the center of controversy since the announcement of $80 game prices, along with recent findings around its new Switch 2 display and its ghosting issues. However, the handheld's problems seemingly don't stop there.
As highlighted by Notebookcheck, multiple users are reporting overheating issues with the Nintendo Switch 2, even while playing less demanding games. Notably, most reports suggest overheating occurs when using the Switch 2's Dock, a vital component that allows users to play on external displays and provides more power for improved performance.
A plethora of Redditors have voiced frustrations with the Switch 2 Dock's lack of ventilation; while the Dock does feature a built-in fan, it is designed to cool the Dock itself rather than the Switch 2.
Others also point out higher temperatures while using the Dock's Ethernet port, to the point where the handheld and its accessory become too hot to touch and the display output cuts out even though the console is still running.
Nintendo suggests setting the console aside to cool down and ensuring the handheld's air vents aren't obstructed – but frankly, that's hardly much of a solution to eliminate the supposed overheating.
It's not just the Dock, either: users also report that, when undocked, the device displays a warning message before forcing itself into sleep mode. This appears to be the system's way of protecting itself from heat, but rather than throttling performance, it leaves the console unplayable while it cools down.
This is a worrying sign for Switch 2 owners. If these issues become more prevalent, they would most certainly overshadow the concerns about display quality.
(Image credit: Nintendo)
Analysis: Hopefully this isn't a bigger issue than I think...
It's not uncommon to hear reports about overheating with new hardware. I've become quite accustomed to it with new GPU launches and, more relevantly, the early reports of SD cards overheating in the Asus ROG Ally.
Since this is a similar issue, it seems that Nintendo might be able to address it with a software update, similar to what Asus did. However, there's no guarantee that this will lead to a resolution. It also comes at a bad time for the Switch 2, considering its recent controversies.
It's the absolute worst-case scenario for a new handheld console owner, especially at its $449.99 / £395.99 / AU$699.95 price; it's also worth noting that the original Switch has seen a price hike in some regions, so it's not exactly a great time for these overheating reports to emerge.
If the Dock happens to be a detriment to the Switch 2's cooling, I doubt any software update will be able to fix the issue. There's also a lack of compatibility with third-party docks, so if you're facing these temperature issues, there isn't much of a solution for now.
Let's just hope Nintendo is quick to address this before it gets out of hand...
Christopher Nolan's new project, The Odyssey, joins a host of other new movies coming soon, but what's notable about the marketing is that so far, we've only officially been given the release date and a poster.
The good news is, we can confirm The Odyssey is slated for a July 17, 2026, release, so we can start counting down the days to the next big Nolan movie.
We've got a cool new poster, too, which you can take a look at below. It's quite minimalist, which we've come to expect from Nolan movies.
"First poster for Christopher Nolan's 'THE ODYSSEY'. In theaters on July 17, 2026." pic.twitter.com/0utuOcLFlH - July 2, 2025
The bad news is, there's nowhere online you can officially watch the trailer as it hasn't been released by approved channels online. TechRadar is aware of recent leaks, and we won't be including links to what was posted online.
So, with leaked material being hit with copyright strikes, there's only one way fans can watch The Odyssey's first trailer through official channels.
How can we watch The Odyssey's trailer?
The highly anticipated trailer for The Odyssey is playing before another big Universal movie, Jurassic World Rebirth, so cinema goers are in for a treat before they head to see the latest installment of the dinosaur franchise.
Unfortunately, I was on the fence in my Jurassic World Rebirth review, but perhaps the opportunity to see the trailer will be enough to entice people into their local theater this week.
As always, we're in for a huge treat with the new Nolan movie, and the cast list is huge. The epic fantasy movie has an ensemble cast including Matt Damon, Tom Holland, Anne Hathaway, Zendaya, Lupita Nyong'o, Robert Pattinson, and Charlize Theron.
It is based on Homer's epic poem Odyssey, with the plot following Odysseus, the legendary Greek king of Ithaca, on his perilous journey home after the Trojan War.
Throughout the story, we follow his encounters with mythical beings such as Sirens and the witch-goddess Circe, as he heads to a long-awaited reunion with his wife, Penelope.
We have a while to wait for it to arrive in theaters, but with the trailer playing on the big screen, it's the closest we'll get to Nolan's latest blockbuster for now.
You might also likeBuried under city streets, countryside roads and the deep ocean floor lie the glass threads that carry almost everything we do online.
These strands, often no wider than a human hair, already move astonishing amounts of data, and now, Japanese researchers have pushed those limits even further - without changing the shape or size of the cable.
A team led by Japan’s National Institute of Information and Communications Technology (NICT), working with Sumitomo Electric and European collaborators, has achieved a transmission speed of 1.02 petabits per second over 1,808 kilometers.
A new world record
The test used a 19-core optical fiber with a standard cladding diameter of 0.125 mm, meaning it's the same thickness as the single-core fibers already deployed in networks around the world.
Instead of requiring entirely new infrastructure, the cable squeezes 19 separate light paths into the space typically used for one.
That allows for a dramatic leap in capacity while staying compatible with existing systems.
It also marks the first time a petabit-class signal has traveled more than 1,000 kilometers in a standard-sized fiber.
The result sets a new world record for capacity-distance product at 1.86 exabits per second-kilometer.
To simulate a long-distance backbone, signals were looped 21 times through 86.1 km spans of the new fiber. Amplifiers boosted the signal at every pass and were carefully tuned to work across both the C and L wavelength bands for all 19 cores.
Using 180 wavelengths modulated with 16QAM, the system was able to handle huge volumes of parallel data streams.
After traveling the simulated route, the signals were separated by a multi-channel receiver using MIMO digital signal processing.
This avoided adding more fiber cores or expanding the cable diameter, which would have made integration with current networks harder.
To put the new achievement in context, the average US broadband speed in early 2025 is around 290Mbps. The new record of 1.02 petabits per second equals 1,020,000,000 Mbps - more than 3.5 million times faster.
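Both headline numbers hold up to simple arithmetic, although multiplying the rounded figures gives about 1.84 exabits per second-kilometer, so the official 1.86 presumably comes from unrounded values:

```python
# Quick arithmetic behind the record figures.
speed_bps = 1.02e15          # 1.02 petabits per second
distance_km = 1_808
broadband_mbps = 290         # approx. average US broadband speed, early 2025

product = speed_bps * distance_km            # capacity-distance, bit/s * km
print(f"~{product / 1e18:.2f} exabits per second-kilometer")   # ~1.84

speed_mbps = speed_bps / 1e6                 # 1,020,000,000 Mbps
print(f"{speed_mbps:,.0f} Mbps = "
      f"~{speed_mbps / broadband_mbps / 1e6:.1f} million x 290 Mbps")
```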
The results were presented at OFC 2025 as a post-deadline paper, offering a glimpse at what future optical networks might look like.
Although it won’t transform work or home connections overnight, the research shows how far standard fiber can still go. The team now aims to refine amplifier efficiency and signal processing to move closer to real-world deployment.
With global data traffic continuing to grow, advances like this offer a way to stretch infrastructure further without the need to dig new trenches.
New optical fibers with standard cladding diameter and world records achieved by NICT (Image credit: NICT)
Google has fixed a high-severity Chrome vulnerability which was allegedly being exploited in the wild, possibly by nation-state threat actors.
In a new security bulletin, Google said it addressed a type confusion issue in Chrome V8, tracked as CVE-2025-6554, which allowed threat actors to perform arbitrary read/write operations, potentially giving way to sensitive data theft, token exfiltration, or even malware and ransomware deployment.
The V8 engine is Google’s open source high-performance JavaScript and WebAssembly engine used in Chrome and other Chromium-based browsers to execute web code efficiently. The bug caused V8 to incorrectly interpret data, leading to unintended behavior. In theory, a threat actor could serve a specially crafted HTML page to a target, which could trigger the RCE.
The bug was given a severity score of 8.1/10 - high, and was addressed in versions 138.0.7204.96/.97 for Windows, 138.0.7204.92/.93 for macOS, and 138.0.7204.96 for Linux, on June 26.
In the advisory, Google confirmed the bug was being actively exploited, but declined to share any details until the majority of browsers are patched. Chrome usually installs patches automatically, but just in case, you might want to head over to chrome://settings/help and let Chrome check for updates.
While Google kept the details under wraps, knowing who blew the whistle tells us a little more about potential abusers. The bug was discovered by Clément Lecigne of Google’s Threat Analysis Group (TAG), a cybersecurity arm that usually investigates nation-state threat actors.
If TAG was looking into this bug, and we know it was abused in the wild, then it's safe to assume it was used by nation-states in highly targeted attacks. Previous V8 flaws have been abused in campaigns against high-profile targets, including journalists, dissidents, and IT administrators.
Apple could be making a change to an iconic iPhone design element with the release of the rumored iPhone 17 Pro.
According to new rumors from somewhat reliable tipster Majin Bu (via GSMArena), Apple could move its logo to a lower position on the rear panel of the iPhone 17 Pro.
The possible change has been corroborated in an X (formerly Twitter) post from Apple tipster Sonny Dickson, who posited that the move could align the logo with the phone’s ring of MagSafe magnets.
You may be thinking that a logo moving a few centimeters isn’t a huge story on its own – and in all fairness, you’d be right. This alone isn’t too big of a deal.
However, I think this latest design rumor has the potential to say a lot about where Apple’s priorities lie when it comes to the next generation of iPhone.
Aesthetic alterations
This mock-up, shared by tipster Majin Bu, shows the iPhone 17 Pro with a lower Apple logo. (Image credit: Majin Bu)
This logo tipoff is the latest in a pretty long list of redesign rumors concerning the iPhone 17 lineup, specifically the iPhone 17 Pro and iPhone 17 Pro Max.
In fact, it seems to me that the strongest and most repeated rumors surrounding the next Pro-level iPhones have concerned the design and aesthetics of the supposedly upcoming handsets.
As we previously reported, the iPhone 17 Pro and iPhone 17 Pro Max are rumored to get a Google Pixel-like camera bar, a wild departure from the function-led design philosophy of previous generations. Several separate rumors have given us a look at CAD mock-ups and dummy units that seem to confirm the changed design.
Though we have heard of a possible higher-resolution telephoto camera, as well as murmurs of a unified volume and Action button and under-display Face ID, it seems like we could be in for a lighter year when it comes to new iPhone hardware features.
Considered change, or cover up?
It's possible a better telephoto camera could be the only real upgrade to this year's Pro iPhone. (Image credit: Future / Lance Ulanoff)
All of this brings new context to the supposed new logo placement – what was previously a pretty innocuous design update begins to look like meddling for the sake of finding something to change.
And while all of this is based on rumors at the moment, if Apple were to launch the iPhone 17 Pro with only iterative or less-impactful hardware improvements, then these design shakeups would start to look like an effort to draw attention elsewhere.
It wasn’t too long ago that Apple faced considerable backlash for launching the iPhone 14 in a very similar state to the iPhone 13 – Cupertino would be wise to avoid a similar situation this year.
With all that said, it’s also possible that Apple is simply looking to refresh the visual identity of its next-gen handsets, to match the new look of the Liquid Glass design language coming with iOS 26.
So, while the shifting of an Apple logo by a few centimeters might not seem the biggest story, it’s worth keeping an eye on these small changes as we get closer to the expected September launch date of the iPhone 17 series. Be sure to let us know what you think of this rumored change in the comments below.
ExpressVPN has unveiled a new Themes feature for its mobile apps to give users more flexibility in how the interface looks and feels.
Most notably, one of the best VPN providers on the market has finally brought full support for the "much anticipated" Dark Mode to iOS devices, too.
The update, which follows May's revamp of ExpressVPN's mobile apps, reflects a subtle shift in how VPN providers approach design – moving beyond pure function to meet user expectations around comfort and customization, without compromising the core privacy experience.
More than just cosmetic
Dark Mode has been a consistent request among ExpressVPN's mobile users, offering a more comfortable viewing experience and potential battery savings for phones with OLED screens.
The company says it wanted to take the time to roll this out properly across platforms, ensuring a seamless visual experience that doesn't compromise usability.
For iOS users, the introduction of Dark Mode marks a notable milestone, closing a feature gap that had persisted compared to Android.
You can now find Dark Mode under the new Twilight mode in the Appearance tab within the Account Settings.
The update adds five Themes modes for both iOS and Android devices (Image credit: ExpressVPN)
Beyond Twilight, the new Themes interface also includes Sand, Midnight, Sky, and System Default modes. As with Dark Mode, you can pick your favorite by heading to the Appearance tab in your mobile app's Account Settings and customizing your app.
Despite seeming like a small change, interface customization matters, especially for apps like virtual private networks (VPNs) that are opened multiple times a day.
Commenting on this point, ExpressVPN's Chief Information Officer, Shay Peretz, said: "Security and style can – and should – go hand in hand. We remain committed to both, with privacy continuing to be our top priority."
As mentioned earlier, the introduction of Themes follows May's revamp of ExpressVPN's mobile apps, which included improvements like a brand-new speed test tool, design and usability upgrades, a server location map, and more.
What this means for ExpressVPN users
The rollout may not be headline-making in the traditional sense, but it underscores a subtle shift: even among security-focused apps, user experience is no longer secondary.
With VPN usage becoming more mainstream, particularly on mobile, updates like Themes help bridge the gap between function and form.
Whether you're switching servers, checking your connection, or leaving the app running in the background, a comfortable, customizable interface makes those daily interactions feel smoother. And for users who've been waiting for Dark Mode on iOS, the wait is finally over.
Insurance group Kelly Benefits has confirmed suffering a cyberattack in which it lost sensitive information on more than half a million customers.
In a data breach notification published on its website, the company said "suspicious activity" on its network prompted it to bring in third-party forensic specialists for an investigation - and the results showed a threat actor breaching the network between December 12 and 17, 2024, stealing "certain files".
By early March 2025, Kelly Benefits determined that it lost people’s full names, Social Security numbers, tax ID numbers, dates of birth, medical information, health insurance information, and financial account information. The combination of the data stolen varies from person to person.
As is usual in these scenarios, the company also filed a new form with the Office of the Maine Attorney General, stating exactly 553,660 individuals were affected by the attack.
Kelly Benefits provides integrated employee benefits administration, payroll processing, insurance brokerage, and HR services.
Its payroll division alone serves north of 2,000 employers, processing around two million paychecks and issuing more than 100,000 W-2 forms annually. For benefits, it counts more than 10,000 corporate clients, and covers more than 8,000 individuals.
Among the companies using its services (and as such, being affected by the attack) are United Healthcare, OneAmerica Financial Partners, and Humana Insurance ACE.
The organization did not say who the threat actors were, or what they were looking to achieve. At press time, no groups claimed responsibility for this attack, and the data is yet to leak anywhere on the dark web. In the meantime, Kelly Benefits urged its customers to remain vigilant, and be wary of potential phishing attacks, identity theft, or fraud.
Affected individuals are offered 12 months of free credit monitoring and identity theft protection services through IDX.
Via BleepingComputer
Some time ago we reported on a new kind of TV tech called eLEAP that could solve the long-running problems with the best OLED TVs – and three years since it was announced, it looks like LG could end up putting it into production.
eLEAP is an alternative way of manufacturing OLED panels. When it was announced in 2022, its focus was on small phone screens. But LG Display is looking at the tech for significantly larger displays, and that means it could be – oh yes – one giant eLEAP for TV technology.
eLEAP promises brighter, more colorful OLEDs and could potentially banish or at least significantly reduce burn-in (Image credit: JDI)
Why eLEAP could transform TVs
Conventional OLED panels are made with fine metal masks, which are thin metal plates with lots of tiny holes in them. Those masks ensure that organic material is deposited on the display substrate with pixel-perfect precision to ensure that each pixel lights up uniformly without overlapping or being poorly aligned.
eLEAP does things differently. Instead of fine metal masks it uses a lithography process to create the OLED pixels. And according to trade site The Elec, LG Display already has the appropriate equipment to trial eLEAP in its OLED facility in Paju, South Korea – and it's looking to test on TV-sized panels. Samsung Display is also reportedly testing the technology.
This is a trial, not the beginning of production: LG Display and Samsung Display may still decide not to go ahead with the tech. But it does have the potential to transform OLED manufacturing: the promise of eLEAP is that it'll offer far better efficiency for the OLED pixels, because the light-emitting area is more than doubled compared to a pixel of the same size made using the fine metal mask technique.
That means they're much more energy efficient, so you could have higher brightness without increasing power use – or use less power at the same brightness. That energy efficiency also means less heat generation – and heat is a key cause of OLED burn-in, so there'd be little danger of the higher brightness causing a burn-in problem.
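A simplified model of that efficiency argument, assuming luminance scales roughly linearly with emitting area at a fixed drive level (real panels are more complicated):

```python
# Simplified model of the eLEAP aperture-ratio argument.
# Assumes luminance scales ~linearly with emitting area at a fixed
# drive level; real panels are more complicated.
fmm_area = 1.0        # relative light-emitting area, fine-metal-mask pixel
eleap_area = 2.0      # "more than doubled" emitting area, same pixel size

# Same power budget -> roughly double the brightness...
print(f"Brightness at equal power: ~{eleap_area / fmm_area:.0f}x")

# ...or the same brightness at roughly half the drive level, meaning
# less heat - and heat is a key driver of OLED burn-in.
print(f"Relative drive for equal brightness: ~{fmm_area / eleap_area:.0%}")
```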
There is also the potential for eLEAP to be more efficient to actually produce, which would mean cheaper OLED panels, which may mean cheaper TVs – or, at least, maybe mid-range OLEDs such as the LG C5 could finally get significantly brighter without becoming as expensive as the flagship LG G5.
However, even if the trials are successful it's likely to be some time before we'll see the tech in our TVs: according to The Elec the short-term use case is in "niche OLED panels, such as those 20-inch to 30-inch in size or those used in vehicles." However, the fact that LG is testing it in panels of TV size at all is great news for its potential use in the future for better home entertainment.