Rumors around a 360-degree camera from DJI have been swirling since October, and now we have some fresh leaks that supposedly give us a look at the DJI Osmo 360 – as well as hinting at some of the specifications it'll bring with it.
Tipster @GAtamer (via Notebookcheck) has posted some pictures of the DJI Osmo 360, showing off the compact camera, the two lenses on the front and back of the device, the small integrated touchscreen, and what looks like an accessory mount.
According to the same source, the specs of the DJI Osmo 360 are going to be "almost the same as the X5", referring of course to the Insta360 X5 that launched in April – another 360-degree camera that the DJI Osmo 360 will be challenging head-on.
Have a read through our Insta360 X5 review and you'll see it's a very, very good 8K camera indeed – one we awarded five stars. The X5 pairs its two lenses with 1/1.28-inch sensors, bigger than those in the X4, so it seems we can expect something similar from DJI.
Coming soon?
"The technical specifications are almost the same as the X5." pic.twitter.com/7HlC9JQHbP (May 31, 2025)
The @GAtamer post was actually a follow-up to another image leaked by @Quadro_News, which seems to show the DJI Osmo 360 in some kind of packaging. Again, we can see one of the camera lenses and the shape of the upcoming gadget.
That's just about all we can glean from these latest DJI Osmo 360 leaks, and we don't get any information here about a launch date or potential pricing. It seems likely that the camera will be appearing sooner rather than later, however.
Just a few days ago we got word that the DJI Osmo 360 would be launching in July 2025, so there's not that much longer to wait. We have already seen leaked images of the camera, which match the pictures that have just shown up.
We've also heard that a super-small DJI Osmo Nano could be launched alongside the DJI Osmo 360. If these new devices are as good as the cameras in the current range, including the DJI Osmo Action 5 Pro, then there's a lot to look forward to.
At 3 a.m. during a red team exercise, we watched a customer’s autonomous web agent cheerfully leak the CTO’s credentials – because a single malicious div tag on an internal GitHub issue page told it to. The agent ran on Browser Use, the open-source framework that just collected a headline-grabbing $17 million seed round.
That 90-second proof-of-concept illustrates a larger threat: while venture money races to make large-language-model (LLM) agents “click” faster, their social, organizational, and technical trust boundaries remain an afterthought. Autonomous browsing agents now schedule travel, reconcile invoices, and read private inboxes, yet the industry treats security as a feature patch, not a design premise.
Our argument is simple: agentic systems that interpret and act on live web content must adopt a security-first architecture before their adoption outpaces our ability to contain failure.
Agent explosion
Browser Use sits at the center of today’s agent explosion. In just a few months it has acquired more than 60,000 GitHub stars and a $17 million seed round led by Felicis with participation from Paul Graham and others, positioning itself as the “middleware layer” between LLMs and the live web.
Similar toolkits - HyperAgent, SurfGPT, AgentLoom - are shipping weekly plug-ins that promise friction-free automation of everything from expense approval to source-code review. Market researchers already count 82% of large companies running at least one AI agent in production workflows and forecast 1.3 billion enterprise agent users by 2028.
But the same openness that fuels innovation also exposes a significant attack surface: DOM parsing, prompt templates, headless browsers, third-party APIs, and real-time user data intersect in unpredictable ways.
Our new study, "The Hidden Dangers of Browsing AI Agents," offers the first end-to-end threat model for browsing agents and provides actionable guidance for securing their deployment in real-world environments.
To address the discovered threats, we propose a defense-in-depth strategy incorporating input sanitization, planner-executor isolation, formal analyzers, and session safeguards. These measures protect against both initial-access and post-exploitation attack vectors.
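To make the first of those layers concrete, here is a minimal sketch of what an input-sanitization pass might look like in Python. This is illustrative only: the function name, the BeautifulSoup-based approach, and the hidden-element heuristics are our own assumptions, not Browser Use's implementation or the exact mitigation from the study.

```python
# Minimal, assumption-laden sketch of a DOM sanitization layer (not Browser Use's code).
from bs4 import BeautifulSoup

HIDDEN_HINTS = ("display:none", "visibility:hidden", "opacity:0")

def sanitize_dom(html: str) -> str:
    """Drop script/style content and visually hidden elements before the page
    text is handed to the planner LLM, so off-screen instructions never reach it."""
    soup = BeautifulSoup(html, "html.parser")
    # Remove elements that never carry user-visible text.
    for tag in soup(["script", "style", "template", "noscript"]):
        tag.extract()
    # Collect elements hidden via attributes or inline styles, then remove them.
    suspicious = []
    for tag in soup.find_all(True):
        style = (tag.get("style") or "").replace(" ", "").lower()
        if tag.has_attr("hidden") or any(hint in style for hint in HIDDEN_HINTS):
            suspicious.append(tag)
    for tag in suspicious:
        tag.extract()  # removes the element and everything nested inside it
    return soup.get_text(separator=" ", strip=True)
```

Even a crude filter like this would have stripped the off-screen div in our proof of concept before the planner ever saw it, although a production defense would also need to handle CSS classes, zero-size elements, and content injected after page load.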
White-box analysis
Through white-box analysis of Browser Use, we demonstrate how untrusted web content can hijack agent behavior and lead to critical cybersecurity breaches. Our findings include prompt injection, domain validation bypass, and credential exfiltration, evidenced by a disclosed CVE and a working proof-of-concept exploit - all without tripping today’s LLM safety filters.
Among the findings:
1. Prompt-injection pivoting. A single off-screen element injected a “system” instruction that forced the agent to email its session storage to an attacker.
2. Domain-validation bypass. Browser Use’s heuristic URL checker failed on Unicode homographs, letting adversaries smuggle commands from look-alike domains (see the sketch after this list).
3. Silent lateral movement. Once an agent has the user’s cookies, it can impersonate them across any connected SaaS property, blending into legitimate automation logs.
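To illustrate the second finding, here is a minimal sketch, under our own assumptions, of the stricter kind of allowlist check that defeats homograph look-alikes. The helper name and the allowlist entries are hypothetical; this is not Browser Use's validator.

```python
# Hypothetical allowlist check illustrating the homograph problem (not Browser Use's code).
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example-corp.com", "api.example-corp.com"}  # assumed allowlist

def is_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    try:
        # Normalize to ASCII punycode: a look-alike Unicode host becomes an
        # "xn--..." string and no longer matches the ASCII allowlist.
        ascii_host = host.encode("idna").decode("ascii").lower()
    except UnicodeError:
        return False  # refuse anything that cannot be cleanly normalized
    return ascii_host in ALLOWED_HOSTS

# A Cyrillic "е" in "examplе-corp.com" normalizes to an xn-- form and is rejected,
# even though it renders identically to the legitimate domain in most fonts.
```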
These aren’t theoretical edge cases; they are inherent consequences of giving an LLM permission to act rather than merely answer, which is the root cause of the exploits outlined above. Once that line is crossed, every byte of input (visible or hidden) becomes a potential initial-access payload.
To be sure, open-source visibility and red-team disclosure accelerate fixes - Browser Use shipped a patch within days of our CVE report. And defenders can already sandbox agents, sanitize inputs, and restrict tool scopes. But those mitigations are optional add-ons, whereas the threat is systemic. Relying on post-hoc hardening mimics the early browser wars, when security followed functionality and drive-by downloads became the norm.
Architectural problem
Governments are beginning to notice the architectural problem. The NIST AI Risk Management Framework urges organizations to weigh privacy, safety, and societal impact as first-class engineering requirements. Europe’s AI Act introduces transparency, technical-documentation, and post-market monitoring duties for providers of general-purpose models - rules that will almost certainly cover agent frameworks such as Browser Use.
Across the Atlantic, the U.S. SEC’s 2023 cyber-risk disclosure rule expects public companies to reveal material security incidents quickly and to detail risk-management practices annually. Analysts already advise Fortune 500 boards to treat AI-powered automation as a headline cyber-risk in upcoming 10-K filings. As Reuters has put it: “When an autonomous agent leaks credentials, executives will have scant wiggle room to argue that the breach was ‘immaterial.’”
Investors funneling eight-figure sums into agentic start-ups must now reserve an equal share of runway for threat-modeling, formal verification, and continuous adversarial evaluation. Enterprises piloting these tools should require:
Isolation by default. Agents should separate the planner, executor, and credential oracle into mutually distrustful processes, talking only via signed, size-bounded protobuf messages (a minimal sketch follows this list).
Differential output binding. Borrow from safety-critical engineering: require a human co-signature for any sensitive action.
Continuous red-team pipelines. Make adversarial HTML and jailbreak prompts part of CI/CD. If the model fails a single test, block release.
Societal SBOMs. Beyond software bills of materials, vendors should publish security-impact surfaces: exactly which data, roles, and rights an attacker gains if the agent is compromised. This aligns with the AI RMF’s call for transparency regarding individual and societal risks.
Regulatory stress tests. Critical-infrastructure deployments should pass third-party red-team exams whose high-level findings are public, mirroring banking stress-tests and reinforcing EU and U.S. disclosure regimes.
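As one concrete illustration of the isolation-by-default recommendation, the sketch below shows signed, size-bounded messages passing between a planner and an executor. It is a toy under stated assumptions: JSON stands in for protobuf to keep it short, and the AGENT_IPC_KEY environment variable is a hypothetical stand-in for real key provisioning.

```python
# Toy sketch of signed, size-bounded planner/executor messages; not a production design.
import hashlib
import hmac
import json
import os

SHARED_KEY = os.environ["AGENT_IPC_KEY"].encode()  # hypothetical: provisioned out of band
MAX_MESSAGE_BYTES = 4096  # reject anything oversized before parsing it

def sign(payload: dict) -> bytes:
    """Serialize and sign a message from the planner to the executor."""
    body = json.dumps(payload, sort_keys=True).encode()
    if len(body) > MAX_MESSAGE_BYTES:
        raise ValueError("message exceeds size bound")
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
    return tag + b"." + body

def verify(message: bytes) -> dict:
    """Check size and signature before the executor acts on a message."""
    tag, _, body = message.partition(b".")
    if len(body) > MAX_MESSAGE_BYTES:
        raise ValueError("message exceeds size bound")
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("signature check failed: message rejected")
    return json.loads(body)

# Example: the executor only ever acts on messages that pass verify().
wire = sign({"action": "click", "selector": "#submit"})
print(verify(wire))
```

A real deployment would also bind messages to a session nonce, scope them to specific tools, and run the two halves under separate OS users, but even this minimal check stops the executor from acting on instructions that did not come from the planner.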
The security debt
The web did not start secure and grow convenient; it started convenient, and we are still paying the security debt. Let us not rehearse that history with autonomous browsing agents. Imagine past cyber incidents multiplied by autonomous agents that work at machine speed and hold persistent credentials for every SaaS tool, CI/CD pipeline, and IoT sensor in an enterprise. The next “invisible div tag” could do more than leak a password: it could rewrite PLC set-points at a water-treatment plant, misroute 911 calls, or bulk-download the pension records of an entire state.
If the next $17 million goes to demo reels instead of hardened boundaries, the 3 a.m. secret you lose might not just embarrass a CTO - it might open the sluice gate to poison water supplies, stall fuel deliveries, or crash emergency-dispatch consoles. That risk is no longer theoretical; it is actuarial, regulatory, and, ultimately, personal for every investor, engineer, and policy-maker in the loop.
“Security first or failure by default” is therefore not a philosophical debate for agentic AI; it is a deadline. Either we front-load the cost of trust now, or we will pay many times over when the first agent-driven breach jumps the gap from the browser to the real world.
We feature the best AI chatbot for business.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
In an increasingly complex cybersecurity landscape, the concept of "hacking yourself first" is not new as such. Organizations have long been engaging white hat hackers to simulate attacks and identify vulnerabilities before malicious actors can exploit them.
However, the traditional approach to red teaming, which typically involves selecting a few trusted individuals to test a system, is no longer sufficient.
More open and competitive red teaming
The issue lies in scale and diversity. A small, internal team will always be limited by its own experiences and perspectives, while cybercriminals operate in a global, decentralized environment. To stay ahead, security testing has to reflect that same breadth and depth of capability.
We believe that this is where a more open and competitive red teaming model comes into its own. Rather than relying on a fixed set of internal engineers or external consultants, organizations are increasingly turning to decentralized architectures.
These invite skilled professionals from around the world to solve specific, targeted challenges. The best talent is incentivized to respond, and the organization benefits from rapid, high-quality insights tailored to the specific threats it faces.
In practice, this model offers two significant advantages over the ‘standard’ white-hat hacking exercise. First, it ensures that the right expertise is applied to the right challenge. Not every engineer is equipped to uncover flaws in VPN detection or anti-fingerprinting solutions. A decentralized approach enables organizations to source the most relevant skill sets directly, without needing to retrain or reallocate internal teams.
Secondly, the incentive mechanism encourages speed and transparency. Contributors are motivated to share findings immediately so that they can claim rewards. This reduces and even eliminates delays and ensures that critical information reaches defenders quickly.
Traditional methods
The benefits of this approach are already being realized. In sectors such as fintech and Web3, attacks discovered through decentralized red teaming have been observed in the wild months later. This lead time allows businesses to prepare and adapt before those attacks gain traction in broader markets.
It’s important to recognize that decentralized red teaming is not about replacing traditional methods entirely. Conventional penetration testing still plays a valuable role in improving baseline security. But as threats evolve and attackers become more sophisticated, organizations need a more dynamic and scalable way to test their defenses.
Proactive security
Ultimately, the shift from reactive to proactive security cannot be achieved through periodic exercises alone. It requires continuous, adaptive engagement with the threat landscape, and a willingness to invite external expertise into the process. By embracing a more competitive and decentralized approach to red teaming, businesses can significantly improve their resilience and stay one step ahead of attackers.
Cybersecurity is no longer about responding to yesterday’s threats. It is about anticipating tomorrow’s, and making sure your defenses are ready today.
We feature the best business VPNs.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
A quantum computing startup has announced plans to develop a utility-scale quantum computer with more than 1,000 logical qubits by 2031.
Nord Quantique has set an ambitious target which, if achieved, could signal a seismic shift in high-performance computing (HPC).
The company claims its machines are smaller and would offer far greater efficiency in both speed and energy consumption, thereby making traditional HPC systems obsolete.
Advancing error correction through multimode encoding
Nord Quantique uses “multimode encoding” via a technique known as the Tesseract code, and this allows each physical cavity in the system to represent more than one quantum mode, effectively increasing redundancy and resilience without adding complexity or size.
“Multimode encoding allows us to build quantum computers with excellent error correction capabilities, but without the impediment of all those physical qubits,” explained Julien Camirand Lemyre, CEO of Nord Quantique.
“Beyond their smaller and more practical size, our machines will also consume a fraction of the energy, which makes them appealing for instance to HPC centers where energy costs are top of mind.”
Nord’s machines would occupy a mere 20 square meters, making them highly suitable for data center integration.
Compared with the 1,000–20,000 m² needed by competing platforms, this compact footprint further strengthens its case.
“These smaller systems are also simpler to develop to utility-scale due to their size and lower requirements for cryogenics and control electronics,” the company added.
The implication here is significant: better error correction without scaling physical infrastructure, a central bottleneck in the quantum race.
In a technical demonstration, Nord’s system exhibited excellent stability over 32 error correction cycles with no measurable decay in quantum information.
“Their approach of encoding logical qubits in multimode Tesseract states is a very effective method of addressing error correction and I am impressed with these results,” said Yvonne Gao, Assistant Professor at the National University of Singapore.
“They are an important step forward on the industry’s journey toward utility-scale quantum computing.”
Such endorsements lend credibility, but independent validation and repeatability remain critical for long-term trust.
Nord Quantique claims its system could solve RSA-830, a representative cryptographic challenge, in just one hour using 120 kWh of energy at 1 MHz speed, slashing the energy need by 99%.
In contrast, traditional HPC systems would require approximately 280,000 kWh over nine days. Other quantum modalities, such as superconducting, photonic, cold atoms, and ion traps, fall short in either speed or efficiency.
For instance, cold atoms might consume only 20 kW, but solving the same problem would take six months.
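Taking Nord Quantique's headline figures at face value, the arithmetic is straightforward to check: 120 kWh is roughly 0.04% of 280,000 kWh (120 ÷ 280,000 ≈ 0.0004), so the claimed run would use about 99.96% less energy than the quoted HPC baseline, slightly better than the rounded 99% figure.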
That said, there remains a need for caution. Post-selection, used in Nord’s error-correction demonstrations, required discarding 12.6% of the data per round. While this helped show stability, it introduces questions about real-world consistency.
In quantum computing, the leap from laboratory breakthrough to practical deployment can be vast; thus, the claims on energy reduction and system miniaturization, though striking, need independent real-world verification.
At Seagate’s recent 2025 Investor and Analyst Conference, CEO Dr. Dave Mosley and CTO Dr. John Morris outlined the company’s long-term roadmap for hard drive innovation.
The roadmap hinted at the possibility of 150TB hard drives – which would be the largest HDDs ever – enabled by groundbreaking 15TB platters, though the executives cautioned that this milestone remains at least a decade away.
The foundation of this future lies in Seagate’s HAMR (Heat-Assisted Magnetic Recording) technology, currently being deployed through the company’s Mozaic platform.
10TB per platter on track for 2028
“We have high confidence in our product roadmap through Mozaic 5. And notably, the design space for granular iron platinum media that's in Mozaic 3 looks very viable to get us up to 10 terabytes per disk,” said Dr. Morris.
That 10TB-per-disk benchmark is expected to be reached by 2028. “We do have confidence that we can provide a path to 10 terabytes per disk in roughly this time frame,” Morris added, explaining that spin-stand demonstrations of new technologies typically take five years to reach product qualification.
Looking beyond 10TB, Seagate is exploring how to extend the capabilities of its Iron Platinum media.
“We believe that there's another level of extension of that granular iron platinum architecture that could theoretically get as high as 15 terabytes per disk,” Morris said.
Such an achievement would pave the way for 150TB hard drives by stacking 10 platters per unit. However, he warned, “beyond 15 terabytes per disk is going to require some level of disruptive innovation.”
Seagate’s CEO, Dave Mosley, echoed this long-range vision, noting, “We now know how we can get to 4 and 5 and beyond. As a matter of fact, we have visibility... beyond 10 terabytes of disk with the HAMR technology.”
“It’s not going to be easy, but I’m convinced that’s going to keep us on a competitive cost trajectory that no other technology is going to supplant in the next decade, probably beyond.”
The company’s confidence is backed by recent milestones. Mozaic 3, which delivers 3TB per platter, is now in volume production, and Mozaic 4 (4TB per platter) is scheduled to enter customer qualification next quarter.
Seagate expects to begin volume shipments of Mozaic 4 drives in the first half of 2026. Meanwhile, Mozaic 5, targeting 5TB per platter, is planned for customer qualification in late 2027 or early 2028.
Still, Seagate made it clear that 150TB drives based on 15TB platters are not imminent. As Morris emphasized, “This is just one other element in the work that we do to underpin our strategy... it will take time. There’s still a lot of work in front of us to get there.”
It appears we may soon get a couple of new contenders for our best smartwatches list. HMD (perhaps best known for releasing Nokia-branded phones in recent years) is rumored to be working on two smartwatches, both running Wear OS, and with a camera fitted to one of them.
This comes from tipster @smashx_60 (via Notebookcheck), and while we can't guarantee the accuracy of the claim, smartwatches would be a sensible next step for HMD – which already makes phones, tablets, earbuds, and the HMD OffGrid.
According to the leak, the first smartwatch will be the HMD Rubber 1, with a 1.85-inch OLED screen, a 400 mAh battery, and heart rate and SpO2 tracking. There's also, apparently, a 2-megapixel camera on board this model.
Then there's the HMD Rubber 1S, which comes with a smaller 1.07-inch OLED display, a smaller 290 mAh battery, and no camera – though the heart rate and SpO2 tracking features are still included. It sounds as though this will be the cheaper choice.
For adults or kids?
HMD RUBBER 1: 1.85" OLED display, 5ATM waterproofing, BT5.3, WiFi, NFC, accelerometer, heart rate, SpO2, 2MP camera, Wear OS, 400 mAh battery, USB-C, Qi charging
HMD RUBBER 1S: 1.07" OLED display, 5ATM waterproofing, BT5.0, WiFi, accelerometer, heart rate, SpO2, Wear OS, 290 mAh battery, USB-C, Qi charging
(May 29, 2025)
The camera on the HMD Rubber 1 is interesting, as this would be something we haven't seen on a Wear OS watch before. While it's not clear how the camera would be integrated, presumably it would allow photos and videos to be captured from your wrist, with or without a phone connected.
There's some speculation in the Notebookcheck article that these smartwatches may be intended for kids to use, rather than adults – something along the lines of the Samsung Galaxy Watch for Kids that launched at the start of the year, perhaps.
The leak also mentions that these smartwatches will come with 5 ATM waterproofing, which is good for depths of up to 50 meters. That suggests they'll have a relatively robust casing around the internal components.
We'll have to wait and see what HMD might have in store, though as yet there's been nothing official from the company. In the meantime, we're patiently waiting for the arrival of Wear OS 6, which is expected to be pushed out in the next month or two.