
TechRadar News

All the latest content from the TechRadar team

Google is spending $3bn on renewable power for its data centers

Wed, 07/16/2025 - 08:11
  • Google says it will spend $3 billion on 670MW of hydroelectricity PPAs
  • The deal could end up producing up to 3,000MW of hydroelectricity
  • Google data center energy consumption is up 27%, emissions are down 12%

Google has agreed to spend at least $3 billion to boost its renewable energy portfolio as its power needs rise in line with demand for artificial intelligence and cloud computing.

The deal with Brookfield Renewable Energy Partners includes 20-year power purchase agreements for 670 megawatts of clean energy via two Pennsylvania hydroelectric plants at Holtwood and Safe Harbor.

Although Google has been betting big on renewable energy in recent years, this marks the world's largest corporate clean power deal for hydroelectricity.

Google strikes the biggest-ever corporate hydroelectricity deal

Brookfield noted that the Hydro Framework Agreement, already a considerable commitment at its starting size, will support the delivery of up to 3,000 megawatts of carbon-free hydroelectric capacity across the United States.

The move aligns with Google's efforts to power its data centers with carbon-free energy around the clock, and comes during an era of increased green energy investments. Hyperscaler rivals like Amazon, Meta, and Microsoft have also been splurging on nuclear, gas and renewables to meet demand.

"Hydropower is a proven, low-cost technology, offering dependable, homegrown, carbon-free electricity that creates jobs and builds a stronger grid for all," Google Head of Data Center Energy Amanda Peterson Corio explained.

Brookfield Asset Management President Connor Teskey welcomed the investment, noting that hyperscalers will need to diversify their energy production to meet demand at scale.

Although surges in AI and cloud computing have resulted in higher demand for data centers, Google's most recent 2025 sustainability report revealed how the company managed to cut data center emissions by 12% despite a 27% rise in energy consumption. In its most recent full year, the company procured more than eight gigawatts of clean energy.
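As a quick sanity check on those two figures, the implied change in carbon intensity (emissions per unit of energy) can be computed directly. The index values below are derived from the reported percentages, not from Google's absolute totals:

```python
# Emissions fell 12% while energy consumption rose 27% (reported
# percentages; values normalized to an index of 1.0, not absolute totals).
emissions_index = 1.00 * (1 - 0.12)   # 0.88
energy_index = 1.00 * (1 + 0.27)      # 1.27

# Implied change in emissions per unit of energy consumed.
intensity_change = emissions_index / energy_index - 1
print(f"carbon intensity change: {intensity_change:.1%}")  # -30.7%
```

In other words, emissions per unit of energy fell by roughly 31%, which is how a 27% rise in consumption can coexist with a 12% drop in emissions.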

Energy efficiency improvements to its AI systems, including power-hungry GPUs, have also reduced consumption of water, which is typically used for cooling. However, with Google having replenished only 64% of the water it used in 2025, there's clearly still a long way to go.

You might also like
Categories: Technology

Anthropic launches Claude for Financial Services to give research analysts an AI boost

Wed, 07/16/2025 - 04:34
  • Claude for Financial Services launches specifically for the financial industry
  • Users can access powerful Claude 4 models and other Claude AI tools
  • The system integrates with internal and external data sources

Anthropic has launched a special edition of its Claude AI platform designed for the highly regulated financial industry, with a focus on market research, due diligence, and investment decision-making.

The OpenAI rival hopes financial institutions will use its tool for financial modeling, trading system modernization, risk modeling, and compliance automation, with pre-built MCP connectors offering seamless access to enterprise and market data platforms.

The company boasted that Claude for Financial Services offers a unified interface, combining Claude's AI powers with internal and external financial data sources from the likes of Databricks and Snowflake.

Claude for Financial Services

Anthropic highlighted four of the tool's key benefits: powerful Claude 4 models that outperform other frontier models, access to Claude Code and Claude for Enterprise, pre-built MCP connectors, and expert support for onboarding and training.

Testing revealed Claude Opus 4 passed five of the seven Financial Modeling World Cup competition levels, scoring 83% accuracy on complex Excel tasks.

"Access your critical data sources with direct hyperlinks to source materials for instant verification, all in one platform with expanded capacity for demanding financial workloads," the company shared in a post.

Anthropic also stressed user data is not used for training its generative models in the name of intellectual property and client information confidentiality.

Besides Snowflake for data and Databricks for analytics, Claude for Financial Services also connects with the likes of Box for document management and S&P Global for market and valuation data, among others.
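For readers unfamiliar with MCP, connectors like these exchange JSON-RPC 2.0 messages. The sketch below shows the general shape of a `tools/call` request; the tool name `query_market_data` and its arguments are hypothetical, illustrating the wire format rather than Anthropic's SDK or the actual connector schemas:

```python
import json

# Build a JSON-RPC 2.0 "tools/call" request of the kind an MCP client
# sends to a connector. The tool name "query_market_data" and its
# arguments are hypothetical, for illustration only.
def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

request = make_tool_call(1, "query_market_data",
                         {"ticker": "ACME", "field": "valuation"})
print(request)
```

The connector replies with a result message in the same JSON-RPC envelope, which is what lets one interface front many different data sources.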

Among the early adopters is the Commonwealth Bank of Australia, whose CTO Rodrigo Castillo praised Claude for its "advanced capabilities" and "commitment to safety." The Australian banking giant envisions using Claude for Financial Services for fraud prevention and customer service enhancement.

You might also like
Categories: Technology

The iOS 26 public beta now has a possible release date, and it’s later than usual

Wed, 07/16/2025 - 04:10
  • A reputable source claims the first public iOS 26 beta will land on or around July 23
  • That would be later in the year than usual
  • There's already an iOS 26 developer beta

It’s now over a month since iOS 26 was announced, and although it’s available in developer beta, the public beta is yet to launch. But we do now have a good idea of when the first public beta might land.

According to Apple watcher Mark Gurman in a reply to a post on X by @ParkerOrtolani, the first iOS 26 public beta will probably land on or around July 23.

That’s a bit unusual, as typically we’d have had the first public beta before then. For example, the first public beta of iOS 18 launched on July 15 last year, following its announcement on June 10. So this year, with iOS 26 having been unveiled on June 9, we would, if anything, have expected the first public beta to be out already.

"Around the 23rd" (Mark Gurman on X, July 15, 2025)

A worthwhile wait

Still, if Gurman is right there’s not too much longer to wait, and it should be worth the wait too, as iOS 26 is a significant upgrade for Apple’s smartphone operating system.

It includes a completely new look, with more rounded and transparent elements, plus redesigned phone and camera apps, a new Apple Games app, and more.

Of course, we’d take the claim of it landing on or around July 23 with a pinch of salt, especially with that being later than normal. But Gurman has a superb track record for Apple information, and either way we’d expect it to land soon.

If you can’t wait a little bit longer though, you can always grab the developer beta – the next version of which may well even land before July 23. To get that, check out how to install the iOS 26 developer beta.

You might also like
Categories: Technology

Cybersecurity executives love AI, cybersecurity analysts distrust it

Wed, 07/16/2025 - 03:43

AI isn’t just something to adopt; it’s already embedded in the systems we rely on. From threat detection and response to predictive analytics and automation, AI is actively reshaping how we defend against evolving cyber threats in real time. It’s not just a sales tactic (for some); it’s an operational necessity.

Yet, as with many game-changing technologies, the reality on the ground is more complex. The cybersecurity industry is once again grappling with a familiar disconnect: bold promises about efficiency and transformation that don’t always reflect the day-to-day experiences of those on the front lines. According to recent research, 71% of executives report that AI has significantly improved productivity, but only 22% of frontline analysts, the very people who use these tools, say the same.

When solutions are introduced without a clear understanding of the challenges practitioners face, the result isn’t transformation, it’s friction. Bridging that gap between strategic vision and operational reality is essential if AI is to deliver on its promise and drive meaningful, lasting impact in cybersecurity.

Executives love AI

According to Deloitte, 25% of companies are expected to have launched AI agents by the end of 2025, with that number projected to rise to 50% shortly thereafter. The growing interest in AI tools is driven not only by their potential but also by the tangible results they are already beginning to deliver.

For executives, the stakes are rising. As more companies begin releasing AI-enabled products and services, the pressure to keep pace is intensifying. Organizations that can’t demonstrate AI capabilities, whether in their customer experience, cybersecurity response, or product features, risk being perceived as laggards, out-innovated by faster, more adaptive competitors. Across industries, we're seeing clear signals: AI is becoming table stakes, and customers and partners increasingly expect smarter, faster, and more adaptive solutions.

This competitive urgency is reshaping boardroom conversations. Executives are no longer asking whether they should integrate AI, but how quickly and effectively they can do so, without compromising trust, governance, or business continuity. The pressure isn’t just to adopt AI internally to drive efficiency, but to productize it in ways that enhance market differentiation and long-term customer value.

But the scramble to implement AI is doing more than reshaping strategy, it’s unlocking entirely new forms of innovation. Business leaders are recognizing that AI agents can do more than just streamline functions; they can help companies bring entirely new capabilities to market. From automating complex customer interactions to powering intelligent digital products and services, AI is quickly moving from a behind-the-scenes tool to a front-line differentiator. And for executives willing to lead with bold, well-governed AI strategies, the payoff isn’t just efficiency, it’s market relevance.

Analysts distrust AI

If anyone wants to make their job easier, it’s a SOC analyst, so their skepticism of AI comes from experience, not cynicism. The stakes in cybersecurity are high, and trust is earned, especially when systems that are designed to protect critical assets are involved. Research shows that only 10% of analysts currently trust AI to operate fully autonomously. This skepticism isn’t about rejecting innovation, it’s about ensuring that AI can meet the high standards required for real-time threat detection and response.

That said, while full autonomy is not yet on the table, analysts are beginning to see tangible results that are gradually building trust. For example, 56% of security teams report that AI has already boosted productivity by streamlining tasks, automating routine processes, and speeding up response times. These tools are increasingly trusted for well-defined tasks, giving analysts more time to focus on higher-priority, complex threats.

This incremental trust is key. While 56% of security professionals express confidence in AI for threat detection, they still hesitate to let it manage security autonomously. As AI tools continue to prove their ability to process vast amounts of data and provide actionable insights, initial skepticism is giving way to more measured, conditional trust.

Looking ahead

Closing the perception gap between executive enthusiasm and analyst skepticism is critical for business growth. Executives must create an environment where analysts feel empowered to use AI to enhance their expertise without compromising security standards. Without this, the organization risks falling into the hype cycle, where AI is overpromised but underdelivered.

In cybersecurity, where the margin for error is razor-thin, collaboration between AI systems and human analysts is critical. As these tools mature and demonstrate real-world impact, trust will grow, especially when their use is grounded in transparency, explainability, and accountability.

When AI is thoughtfully integrated and aligned with practitioner needs, it becomes a reliable asset that not only strengthens defenses but also drives long-term resilience and value across the organization.

We list the best cloud firewall.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Why burnout is one of the biggest threats to your security

Wed, 07/16/2025 - 01:36

It’s a scenario that plays out far too often: A mid-sized company runs a routine threat validation exercise and stumbles on something unexpected, like an old infostealer variant that has been quietly active in their network for weeks.

This scenario doesn’t require a zero-day exploit or sophisticated malware. All it takes is one missed setting, inadequate endpoint oversight, or a user clicking what they shouldn’t. Such attacks don’t succeed because they’re advanced. They succeed because routine safeguards aren’t in place.

Take Lumma Stealer, for example: a simple phishing attack lures users into running a fake CAPTCHA script that installs the malware. It spreads quickly but can be stopped cold by something as routine as restricting PowerShell access and providing basic user training. However, in many environments, even those basic defenses aren’t deployed.

This is the story behind many breaches today. Not headline-grabbing hacks or futuristic AI assaults—just overlooked updates, fatigued teams and basic cyber hygiene falling through the cracks.

Security Gaps That Shouldn’t Exist in 2025

Security leaders know the drill: patch the systems, limit access and train employees. Yet these essentials often get neglected. While the industry chases the latest exploits and talks up advanced tools, attackers keep targeting the same weak points. They don’t have to reinvent the wheel. They just need to find one that’s loose.

Just as the same old techniques are still at work, old malware is making a comeback. Variants like Mirai, Matsu and Klopp are resurfacing with minor updates and major impact. These aren’t sophisticated campaigns, but recycled attacks retooled just enough to slip past tired defenses.

The reason they work isn’t technical, it’s operational. Security teams are burned out. They’re managing too many alerts, juggling too many tools and doing it all with shrinking budgets and rising expectations. In this kind of environment, the basics don’t just get deprioritized, they get lost.

Burnout Is a Risk Factor

The cybersecurity industry often defines risk in terms of vulnerabilities, threat actors and tool coverage, but burnout may be the most overlooked risk of all. When analysts are overwhelmed, they miss routine maintenance. When processes are brittle, teams can’t keep up with the volume. When bandwidth runs out, even critical tasks can get sidelined.

This isn’t about laziness. It’s about capacity. Most breaches don’t reveal a lack of intelligence. They just demonstrate a lack of time.

Meanwhile, phishing campaigns are growing more sophisticated. Generative AI is making it easier for attackers to craft personalized lures. Infostealers continue to evolve, disguising themselves as login portals or trusted interfaces that lure users into running malicious code. Users often infect themselves, unknowingly handing over credentials or executing code.

These attacks still rely on the same assumptions: someone will click. The system will let it run. And no one will notice until it’s too late.

Why Real-World Readiness Matters More Than Tools

It’s easy to think readiness means buying new software or hiring a red team, but true preparedness is quieter and more disciplined. It’s about confirming that defenses such as access restrictions, endpoint rules and user permissions are working against the actual threats.

Achieving this level of preparedness takes more than monitoring generic threat feeds. Knowing that ransomware is trending globally isn’t the same as knowing which threat groups are actively scanning your infrastructure. That’s the difference between a broader weather forecast and radar focused on your ZIP code.

Organizations that regularly validate controls against real-world, environment-specific threats gain three key advantages.

First, they catch problems early. Second, they build confidence across their team; when everyone knows what to expect and how to respond, fatigue gives way to clarity. Third, by knowing which threats matter and which actors are focused on them, they can prioritize the fundamental activities that otherwise get ignored.

You may not need to patch every CVE right now, just the ones being used by the threat actors targeting you. What areas of your network are they actively doing reconnaissance on? Those subnets probably need more focus on patching and remediation.

Security Doesn’t Need to Be Sexy, It Needs to Work

There’s a cultural bias in cybersecurity toward innovation and incident response. The new tool, the emergency patch and the major breach all get more attention than the daily habits that quietly prevent problems.

Real resilience depends on consistency. It means users can’t run untrusted PowerShell scripts. It means patches are applied on a prioritized schedule, not “when we get around to it.” It means phishing training isn’t just a checkbox, but a habit reinforced over time.

These basics aren’t glamorous, but they work. In an environment where attackers are looking for the easiest way in, doing the simplest things correctly is one of the most effective strategies a team can take.

Discipline Is the New Innovation

The cybersecurity landscape will continue to change. AI will keep evolving, adversaries will go on adapting, and the next headline breach is likely already in motion. The best defense isn’t more noise or more tech, but better discipline.

Security teams don’t need to do everything. They need to do the right things consistently. That starts with reestablishing routine discipline: patch, configure, test, rinse and repeat. When those fundamentals are strong, the rest can hold.

For CISOs, now is the time to ask a simple but powerful question: Are we doing the basics well, and can we prove it? Start by assessing your organization’s hygiene baseline. What patches are overdue? What controls haven’t been tested in months? Where are your people stretched too thin to execute the essentials? The answers won’t just highlight the risks, they’ll point toward the pathway to resilience.

We list the best patch management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

WeTransfer issues flurry of promises that it's not using your data to train AI models after its new terms of service aroused suspicion

Tue, 07/15/2025 - 21:30
  • WeTransfer users were outraged when its updated terms of service seemed to imply their data would be used to train AI models
  • The company moved fast to assure users it does not use uploaded content for AI training
  • WeTransfer rewrote the clause in clearer language

File-sharing platform WeTransfer spent a frantic day reassuring users that it has no intention of using any uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used for making or improving machine learning tools.

The offending language buried in the ToS said that using WeTransfer gave the company the right to use the data "for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy."

That part about machine learning and the general broad nature of the text seemed to suggest that WeTransfer could do whatever it wanted with your data, without any specific safeguards or clarifying qualifiers to alleviate suspicions.

Perhaps understandably, a lot of WeTransfer users, who include many creative professionals, were upset at what this seemed to imply. Many started posting their plans to switch away from WeTransfer to other services in the same vein. Others began warning that people should encrypt files or switch to old-school physical delivery methods.

"Time to stop using @WeTransfer who from 8th August have decided they'll own anything you transfer to power AI" (post on X, July 15, 2025)

WeTransfer noted the growing furor around the language and rushed to try and put out the fire. The company rewrote the section of the ToS and shared a blog explaining the confusion, promising repeatedly that no one's data would be used without their permission, especially for AI models.

"From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We’ve since updated the terms further to make them easier to understand," WeTransfer wrote in the blog. "We’ve also removed the mention of machine learning, as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension."

While still granting a standard license for improving WeTransfer, the new text omits references to machine learning, focusing instead on the familiar scope needed to run and improve the platform.

Clarified privacy

If this feels a little like deja vu, that's because something very similar happened about a year and a half ago with another file transfer platform, Dropbox. A change to the company's fine print implied that Dropbox was taking content uploaded by users in order to train AI models. Public outcry led to Dropbox apologizing for the confusion and fixing the offending boilerplate.

The fact that it happened again in such a similar fashion is interesting not because of the awkward legal language used by software companies, but because it implies a knee-jerk distrust in these companies to protect your information. Assuming the worst is the default approach when there's uncertainty, and the companies have to make an extra effort to ease those tensions.

There is heightened sensitivity among creative professionals to even the appearance of data misuse. In an era where tools like DALL·E, Midjourney, and ChatGPT train on the work of artists, writers, and musicians, the stakes are very real. Given the lawsuits and boycotts by artists over how their creations are used, not to mention broader suspicions of corporate data use, the kinds of reassurances offered by WeTransfer are probably something tech companies will want to have in place early on, lest they face the wrath of their customers.

You might also like
Categories: Technology

Microsoft employee uses terrible AI-generated image to advertise for Xbox artists just weeks after massive layoffs

Tue, 07/15/2025 - 17:00
  • An employee used a very bad AI-generated image to advertise graphic designer jobs at Xbox
  • The image shows a woman writing code that somehow appears on the back of a computer monitor, among other problems
  • The ad is especially awkward as Microsoft recently completed laying off more than 9,000 people

A post on LinkedIn seeking graphic designers for Xbox is going viral for the irony of terrible AI-generated graphics. Principal Development Lead for Xbox Graphics, Mike Matsel, shared a post announcing the roles, accompanied by what at first glance appears to be an innocuous cartoon of a woman at a workstation typing code. Except the code is on the back of her monitor, and that's just the beginning of the issues with the image.

The fact that Microsoft concluded the latest of several rounds of layoffs, affecting a total of more than 9,000 people, including many in the Xbox division, just a few weeks ago, makes it even more awkward.

(Image credit: LinkedIn/Mike Matsel)

The more you examine the image, the more obvious it becomes that it was (poorly) produced with AI. The computer is unconnected to anything, the desk sort of fades away into nothingness, and the shadows don't make sense. Plus, would Microsoft want a graphic of someone clearly using Apple headphones? Not to mention the fact that, in 2025, you're very unlikely to see someone with the corded iPhone headphones of nearly 20 years ago.

The image does at least sell the idea that Microsoft desperately needs graphic designers, or at least people who know when graphics are very wrong. The dozens of comments on the post emphasize just how annoying many people find the post. A lot are from developers and graphic designers who might otherwise be interested in the positions.

Awkward AI

The fact that this wasn’t just a bad image, but one that undermines the entire point of the job being advertised, is truly mind-boggling. It’s like handing out flyers for a bakery that uses clip art of a melting candle with "bread" written on the attached label.

It's so bizarrely bad that more than a few commenters wondered if it was on purpose. It might be a way to draw attention to the open positions, or, unlikely as this may be, a form of malicious compliance from someone instructed to use AI to announce the open jobs after their colleagues in those positions were recently let go. Or maybe it was the sharpest satire ever seen on LinkedIn.

Those are wildly unlikely theories, but it's telling that they aren't totally impossible. An ad symbolizing everything people are worried about, especially regarding the very artistic jobs being advertised, would be far too blatant to use in a joke. Still, apparently, that's just reality now.

The fact that Microsoft is currently investing billions of dollars in AI only adds to the dissonant reaction. Even if it wasn't formally approved by Microsoft, it still has their Xbox logo on it. Then again, even senior executives can faceplant when discussing and using AI.

Just last week, Executive Producer at Xbox Game Studios Publishing Matt Turnbull suggested that people recently let go could turn to AI chatbots to help get over their emotional distress and find new jobs. He eventually took down the essay encouraging former employees to use AI tools both to find jobs and for "emotional clarity," but this graphic disaster remains visible to the public, unlike the code hiding behind the back of the monitor.

You might also like
Categories: Technology

AWS launches Kiro, an agentic AI IDE, to end the chaos of vibe coding

Tue, 07/15/2025 - 15:30
  • AWS unveils Kiro, an agentic AI way to code
  • Kiro looks to help solve typical issues seen in "vibe coding"
  • Kiro is in preview now, with three tiers set to be available

Amazon Web Services (AWS) has unveiled Kiro, an IDE which uses AI agents to streamline the development process.

Available now in preview, Kiro looks to cut down on potential issues with "vibe coding", the process in which AI agents are asked to create and build software with minimal human interaction.

As well as helping with coding, Kiro can also automatically create and update project plans and technical blueprints, solving one of the most troublesome issues for developers who are still getting to grips with the potential AI brings.

AWS Kiro

Announcing the launch, AWS said Kiro is looking to help transition from “vibe coding to viable code.”

It works by breaking down prompts into structured components, which can then be used to guide implementation and testing, as well as tracking any changes as the code evolves, ensuring no inconsistencies break through.
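As a rough illustration of what those structured components might look like in practice, here is a minimal sketch; the class and field names are hypothetical, since Kiro's actual spec format is not described here:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a prompt decomposed into structured components:
# a requirement, design notes, and implementation tasks that can guide
# implementation and testing. This is NOT Kiro's actual schema.
@dataclass
class Spec:
    requirement: str
    design_notes: str
    tasks: list[str] = field(default_factory=list)

spec = Spec(
    requirement="Users can reset their password via email",
    design_notes="Reset token expires after 15 minutes",
    tasks=["add /reset endpoint", "send tokenized email", "test expiry"],
)
print(len(spec.tasks))  # 3
```

Tracking components like these as the code evolves is what lets an agent flag inconsistencies between the spec and the implementation.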

There's also Model Context Protocol (MCP) support for connecting specialized tools, steering rules to guide AI behavior across your project, and agentic chat for ad-hoc coding tasks.

Finally, it can also automatically check through code to make sure nothing is amiss, making sure developers can submit or launch code without fear of any problems.

Kiro looks, “to solve the fundamental challenges that make building software products so difficult — from ensuring design alignment across teams and resolving conflicting requirements, to eliminating tech debt, bringing rigor to code reviews, and preserving institutional knowledge when senior engineers leave," Nikhil Swaminathan, Kiro’s product lead, and Deepak Singh, Amazon’s vice president of developer experience and agents, said.

"Kiro is great at ‘vibe coding’ but goes way beyond that—Kiro’s strength is getting those prototypes into production systems with features such as specs and hooks."

For now, Kiro is free to use during the preview period, but it seems AWS is looking at introducing three pricing tiers: a free version with 50 agent interactions per month; a Pro tier at $19 per user per month with 1,000 interactions; and a Pro+ tier at $39 per user per month with 3,000 interactions.
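Dividing each tier's monthly price by its quota gives a rough per-interaction cost (simple division over the reported figures; AWS has not detailed overage or annual pricing):

```python
# Reported preview pricing tiers: (monthly price in USD, interactions).
tiers = {"Free": (0, 50), "Pro": (19, 1000), "Pro+": (39, 3000)}

for name, (usd, interactions) in tiers.items():
    print(f"{name}: ${usd / interactions:.3f} per interaction")
# Pro works out to $0.019 per interaction, Pro+ to $0.013.
```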

"Kiro is really good at "vibe coding" but goes well beyond that," Amazon CEO Andy Jassy wrote in a post on X.

"While other AI coding assistants might help you prototype quickly, Kiro helps you take those prototypes all the way to production by following a mature, structured development process out of the box. This means developers can spend less time on boilerplate code and more time where it matters most – innovating and building solutions that customers will love."

You might also like
Categories: Technology

'A ticking time bomb': US trains are vulnerable to a simple 13-year-old known security vulnerability - here's what you need to know

Tue, 07/15/2025 - 15:03
  • Hackers only need cheap hardware and basic skills to stop a moving freight train remotely
  • The Association of American Railroads dismissed the threat until federal pressure forced a response
  • The system still isn’t fixed, and full updates won’t arrive until at least 2027

A critical flaw in the wireless systems used across US rail networks has remained unresolved for more than a decade, exposing trains to remote interference.

The vulnerability affects End-of-Train (EoT) devices, which relay data from the last carriage to the front of the train, forming a link with the Head-of-Train (HoT) module.

Although the issue was flagged in 2012, it was largely dismissed until federal intervention forced a response.

Ignored warnings and delayed responses

Hardware security researcher Neils first identified the flaw in 2012, when software-defined radios (SDRs) began to proliferate.

The discovery revealed that these radios could easily mimic signals sent between the HoT and EoT units.

Since the system relies on a basic BCH checksum and lacks encryption, any device transmitting on the same frequency could inject false packets.
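Why an error-detecting checksum offers no protection against deliberate spoofing can be shown with a short sketch. This is illustrative only, not the real EoT/HoT packet format: CRC-32 stands in for the BCH code, and the payload string and key are hypothetical. Both CRC and BCH are computed from the message alone, so anyone who knows the format can forge a "valid" packet; a keyed MAC, by contrast, requires a secret the attacker doesn't have.

```python
import hashlib
import hmac
import zlib

def make_packet(payload: bytes) -> bytes:
    """Append a 4-byte checksum, as an unauthenticated protocol would."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def packet_is_valid(packet: bytes) -> bool:
    payload, checksum = packet[:-4], packet[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == checksum

# An attacker with a software-defined radio needs no secret to forge a
# command (the payload string here is hypothetical):
forged = make_packet(b"EOT:EMERGENCY_BRAKE")
print(packet_is_valid(forged))  # True: the receiver would accept it

# A keyed MAC closes the hole: without the shared key, a forger cannot
# produce a valid authentication tag, so spoofed packets are rejected.
key = b"shared-secret-key"  # hypothetical provisioning
tag = hmac.new(key, b"EOT:EMERGENCY_BRAKE", hashlib.sha256).digest()
```

The checksum's job is catching radio noise, not adversaries, which is exactly the gap CVE-2025-1727 describes.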

In a concerning twist, the HoT is capable of sending brake commands to the EoT, which means an attacker could stop a train remotely.

“This vulnerability is still not patched,” Neils stated on social media, revealing it took over a decade and a public advisory from the Cybersecurity and Infrastructure Security Agency (CISA) before meaningful action was taken.

The issue, now catalogued as CVE-2025-1727, allows for the disruption of U.S. trains with hardware costing under $500.

Neils's findings were met with skepticism by the Association of American Railroads (AAR), which dismissed the vulnerability as merely “theoretical” back in 2012.

Attempts to demonstrate the flaw were thwarted by the Federal Railroad Administration's lack of a dedicated test track and by the AAR denying access to operational sites.

Even after the Boston Review published the findings, the AAR publicly refuted them via a piece in Fortune.

By 2024, the AAR’s Director of Information Security continued to downplay the threat, arguing that the devices in question were approaching end-of-life and didn’t warrant urgent replacement.

It wasn’t until CISA issued a formal advisory that the AAR began outlining a fix. In April 2025, an update was announced, but full deployment is not expected until 2027.

The vulnerability stems from technology developed in the 1980s, when frequency restrictions reduced the risk of interference, but today’s widespread access to SDRs has altered the risk landscape dramatically.

“Turns out you can just hack any train in the USA and take control over the brakes,” Neils said, encapsulating the broader concern.

The ongoing delay and denial mean US trains are effectively sitting on a powder keg, with a serious incident possible at any time.

Via Tom's Hardware


A quarter of applications now include AI, but enterprises still aren't ready to reap the benefits

Tue, 07/15/2025 - 13:36
  • Only 2% of enterprises are highly ready for AI, report claims
  • Fewer than one-third have deployed AI firewalls to date
  • Another one in three could do with diversifying their AI models

Although more and more applications are getting AI overhauls, new F5 research claims only 2% of enterprises are highly ready for AI.

More than one in five (21%) fall into the low-readiness category, and while three-quarters (77%) are considered moderately ready, they continue to face security and governance hurdles.

This comes as one in four applications use AI, with many organizations splitting their AI usage across multiple models including paid models like GPT-4 and open-source models like Llama, Mistral and Gemma.

Enterprises aren't benefitting from the AI they have access to

Although 71% of the State of AI Application Strategy Report respondents said they use AI to enhance security, F5 highlighted ongoing challenges with security and governance. Fewer than one in three (31%) have deployed AI firewalls, and only 24% perform continuous data labelling, potentially increasing risks.

Looking ahead, one in two (47%) say they plan on deploying AI firewalls in the next year. F5 also recommends that enterprises diversify AI models across paid and open-source options, scale AI usage across operations, analytics and security, and deploy AI-specific protections like firewalls and data governance strategies.

At the moment, it's estimated that two-thirds (65%) use two or more paid models and at least one open-source model, demonstrating considerable room for improvement.

"As AI becomes core to business strategy, readiness requires more than experimentation—it demands security, scalability, and alignment," F5 CPO and CMO John Maddison explained.

The report highlights how a lack of AI maturity can stifle growth, introduce operational bottlenecks and present compliance challenges.

"AI is already transforming security operations, but without mature governance and purpose-built protections, enterprises risk amplifying threats," Maddison added.


Ex-Dyson engineer to launch LTO tape holographic rival that uses cheap $5 laser diode, promises 200TB cartridges on 100m reels — but read/write speeds are unknown

Tue, 07/15/2025 - 12:28
  • Startup’s ribbon-based holographic tape promises 200TB per LTO cartridge
  • The tech uses polymer film and a $5 laser diode to write optical voxels
  • Integrates into LTO systems with no upstream software or hardware changes

UK startup HoloMem is developing a holographic storage system aimed at replacing or supplementing LTO tape.

The company, founded by former Dyson engineer Charlie Gale, uses polymer ribbon cartridges written with $5 laser diodes. Each 100-meter cartridge could store up to 200TB in a write-once, read-many format.

The cartridges match LTO dimensions and work in existing tape libraries without changes to upstream software. Drives function as drop-in shelves, allowing libraries to operate in a hybrid LTO and HoloMem setup.

HO1O

The idea began at Dyson, where Gale helped create a holographic label system called HO1O. It embedded multiple QR codes in a single hologram, readable from different angles or light sources.

“What we originally did at HO1O for prototypes was to use a light-sensitive polymer material that you just exposed to laser light… it locks polymer change and retains that image,” Gale told Blocks & Files.

This concept evolved into multi-layer data storage using similar materials.

Unlike other optical approaches that use glass or ceramics, HoloMem writes data as holographic voxels into polymer film. The film uses a 16-micron thick polymer sheet laminated between PET layers, forming a 120-micron ribbon.

The prototype HoloDrive writes and reads holograms using a 3D-printed lens and a digital micromirror device.

“We are writing data pages of thousands of bits,” Gale said. Throughput hasn’t been disclosed, although it reportedly operates at LTO-9 speeds. The drive uses £30 circuit boards and modified LTO mechanics.

HoloMem has received £900,000 in UK innovation grants and is partnering with TechRe and QStar for field trials and integration testing. It holds patents for the optical engine, media design and volumetric storage method.

Blocks & Files reports: “We understand TechRe will deploy prototype Holodrives inside LTO libraries in its UK data centers to test out the product’s performance, reliability and robustness. HoloMem has written device firmware so that, we understand, it presents itself as a kind of LTO drive.”

Future capacity increases may come through multi-channel recording, using multiple light wavelengths to layer data. Each added channel could multiply storage with no hardware change.


This useful Spotify access feature could be coming to OnePlus and Oppo earbuds, following the likes of Sony and Bose

Tue, 07/15/2025 - 12:00
  • A new line of code has been found in the HeyMelody app that suggests the Spotify Tap function could be coming to Oppo and OnePlus earbuds
  • Android Authority managed to activate the feature, but it has several limitations
  • We don't know when it could be released, or which OnePlus and Oppo audio accessories will be compatible with Spotify Tap

OnePlus and Oppo are solid audio brands if you’re after a decent pair of mid-range earbuds, and now they could be getting a new handy Spotify integration, which could be a big convenience upgrade for Android users.

A new Android Authority teardown of the HeyMelody app highlights that the Spotify Tap function, which is supported on most audio devices from Sony, Bose, and Jabra, could be coming to OnePlus and Oppo earbuds. For those of you who are unaware, HeyMelody is the native app for setting up OnePlus and Oppo audio tech, similar to Sony’s Sound Connect app.

But what is Spotify Tap? It’s essentially a convenience feature that allows you to play music directly from Spotify by double or triple-tapping compatible audio accessories such as the brand new Sony WH-1000XM6 headphones. If you’re already an owner of the best headphones and best earbuds alike, you’ll be more than familiar with this function.

As for Android Authority’s findings, the outlet dove into the HeyMelody app v115.8 and found code related to a possible new integration with OnePlus and Oppo earbuds, and even managed to activate the feature.

(Image credit: Android Authority)

For starters, the teardown shows two different options (double and triple tap) for Spotify music playback in each earbud. The outlet also states that you can redo this gesture so that Spotify plays you a recommended song, but there were a few limitations.

While Android Authority was able to activate Spotify Tap, the outlet noted that changes to background settings were required to get the options to show, adding that you might only be able to set this feature for one earbud at a time, and not both. However, this is likely to change if Spotify Tap is ever rolled out to OnePlus and Oppo earbuds, it added.

As it stands, the teardown doesn’t explicitly state when this feature could roll out, or which OnePlus and Oppo earbuds will receive the Spotify Tap treatment if it does. According to Android Authority, it wouldn’t be surprising if Spotify Tap is only available to selected upcoming hardware.

In that case, we’re taking this lightly and are waiting to see what other findings emerge – but we’ll certainly have our ears to the ground in the meantime.


Google transforms NotebookLM into a curated knowledge hub and I might be in geek heaven

Tue, 07/15/2025 - 12:00
  • Google introduces featured notebooks into NotebookLM
  • The curated content comes from sites like The Economist and The Atlantic
  • You can ask questions about each notebook

Tired of waiting for you to use NotebookLM to make fantastic learning resources of your own, Google has decided to take matters into its own hands and produced a series of carefully curated Notebooks from respected authors, researchers, publications, and nonprofits, including The Economist and The Atlantic.

With NotebookLM, you can read the original source material, but also pose questions to a chatbot that’s versed in the material, so you can explore specific topics in greater depth.

And of course, you can listen to the AI-generated audio overviews, which sound like podcasts, that NotebookLM is famous for, or explore the newer Mind Maps feature.

The initial lineup of curated notebooks includes longevity advice from Eric Topol, bestselling author of Super Agers, expert analysis and predictions for the year 2025 as shared in The World Ahead annual report by The Economist, and an advice notebook based on bestselling author Arthur C. Brooks' How to Build A Life columns in The Atlantic.

As if that wasn’t enough, they’re throwing in The Complete Works of William Shakespeare for anybody who needs help exploring the works of the Bard.

You can expect the list of featured notebooks to grow, too. Google says it will continue to introduce new featured notebooks, including additional collections from its partnerships with The Economist and The Atlantic.

(Image credit: Apple/Google)

The books of the future

Google says that since introducing the ability to share notebooks last month, “more than 140,000 public notebooks have been created, on a wide range of topics”.

I find these featured notebooks dangerous myself because each one is a little rabbit hole I can happily disappear down for over an hour.

For example, I opened the How to Build a Life notebook based on Arthur Brooks' columns in The Atlantic and asked it what age was best to buy a house, and I didn’t emerge for another hour!

It's a different experience from reading a book, as you're constantly switching to an interactive way of consuming media, which makes the learning so much more fun.

Commenting on the new featured notebooks, Nicholas Thompson, CEO of The Atlantic, said: “The books of the future won’t just be static: some will talk to you, some will evolve with you, and some will exist in forms we can’t imagine now. We’re delighted to partner with Google in its pioneering work on this front.”


HBO’s Harry Potter TV show is supposed to be a fresh start, but Nick Frost’s Hagrid says otherwise

Tue, 07/15/2025 - 11:42

HBO’s Harry Potter TV show is coming to the small screen in 2027, with filming officially beginning this week. We’ve already had one first look in the form of Dominic McLaughlin’s Harry, and now a second character has followed suit. Nick Frost’s Hagrid has already made waves on the internet, but there’s something important to bear in mind.

Of course, we all love the original cast in the Harry Potter movies, including Maggie Smith (McGonagall), Alan Rickman (Snape), Michael Gambon (Dumbledore) and original Hagrid Robbie Coltrane. Yet the fact remains that in two years’ time we’ll have a fresh crop of famous faces in these same roles, such as Paapa Essiedu (Snape), John Lithgow (Dumbledore), Nick Frost (Hagrid), and Janet McTeer (McGonagall).

Obviously, the HBO Max version can never replace who came before, and it goes without saying that the new TV show will likely feel completely different. While these are just assumptions at this stage, the first look at Nick Frost’s Hagrid could certainly blur the line between the two in the worst way.

Is Hagrid in the HBO Harry Potter TV show too much like the original movie version?

Robbie Coltrane as Hagrid in the Harry Potter movies. (Image credit: Warner Bros.)

Obviously, the fact that McLaughlin's Harry looks so similar to a young Daniel Radcliffe has gone down incredibly well (including with me), but the similarity between Hagrid’s past and present incarnations is slightly more unsettling. Let’s face it – we’re looking at an airbrushed version of the original in this new snap. Of course, this could be hypocritical, but there’s extra context with Nick Frost’s Hagrid.

In an interview with Collider, Frost previously confirmed he was “never” going to try and be a version of Coltrane’s portrayal. “You get cast because you're going to bring something to that. While I'm really aware of what went before me in terms of Robbie [Coltrane]'s amazing performance, I'm never going to try and be Robbie. I'm going to try and do something, not ‘different,’ I think you have to be respectful to the subject matter, but within that, there's scope for minutia,” he said.

With this in mind, I wasn’t expecting Hagrid’s physical appearance to be so similar. It’s expected that character descriptions will play by the book’s rules, but that doesn’t mean the TV show has to copy exactly what we saw in the movies. Arabella Stanton (Hermione) and Paapa Essiedu (Snape)’s castings are great examples of this, as is the cast of the hit Broadway play Harry Potter and the Cursed Child.

There’s no question that I’m running before I can walk with my assessment here, but I do think the physical similarity is a hindrance rather than a help. Hagrid is arguably the most beloved adult character of the bunch, so we’ll have the highest expectations for him. Stick an identical wig and a massive beard on him, and Frost is instantly lost in Coltrane’s shadow. Clearly the differences in Hagrid will come through in his personality, but is that enough of a difference? Is there enough of a reason to reboot Harry Potter at all?

Frost added: “I always read Hagrid as he's like a lovely, lost, violent, funny, warm child. I think the beauty of being able to do a book a season means I get to explore that a lot more, and I can't wait. He's funny! I want it to be funny and cheeky and scared and protective and childlike. That's what I'm planning on doing.”


Nintendo's anti-piracy rules have got one Switch 2 owner's console banned after they bought pre-owned games that they later found out were cloned

Tue, 07/15/2025 - 11:27
  • A Nintendo Switch 2 owner was reportedly banned after playing legitimate second-hand games that were later found to be cloned
  • Reddit user 'dmanthey' says they were able to reverse the ban after speaking to Nintendo
  • Nintendo has been cracking down on piracy by restricting consoles caught using piracy tools

A Nintendo Switch 2 owner is warning players about buying pre-owned physical games after being banned for unknowingly playing cloned versions.

Last month, it was reported that Nintendo was cracking down on piracy and had begun blocking access to online services on the Switch 2 if players were caught using MIG Flash, a tool used to create copies of games.

But it seems that even innocent players aren't safe from the company's strict anti-piracy policy.

As reported by IGN, Redditor 'dmanthey' shared a post saying that they were banned by Nintendo after loading up some original Switch games they bought from Facebook Marketplace.

The user explained that although the physical copies were legitimate, they later discovered that the games had already been copied by the original owner, which was the reason their console was restricted.

"Switch 2 users - be careful buying used Switch 1 games. You can get banned if a bad actor dumped it," 'dmanthey' said. "Got banned and unbanned after simply downloading patches for 4 Switch 1 games I bought from Facebook marketplace."

Dmanthey explained in the thread that they unknowingly played original Switch cartridges that were cloned using a piracy-enabling device.

"Basically, a thief buys/rents a game. They make a copy for themselves using the MIG dumper," the user said. "They resell the original game and keep a copy for themselves on their MIG. Then both of you get banned when the Switch 2 goes online. Only one of you has the carts, so that's the person that will be unbanned."

Thankfully, dmanthey was able to get unbanned after speaking to Nintendo's customer service and providing evidence of their purchase and their conversation with the Facebook Marketplace seller.

"I contacted Nintendo support and found out I was banned," they said. "They had me pull up the Facebook Marketplace listing and take some pics of the cartridges. The whole process was painless and fast."

'Dmanthey' added, "The amount of info they had is crazy".

"They could see my ddwrt endpoints, the brand of my memory card, they even knew that I had an EVGA mouse and keyboard plugged into my Switch 2," they continued.

Another user faced a similar situation last month after they purchased a pre-owned Switch 2 from Walmart, only to find that it had been 'bricked' by Nintendo after booting it up.


China regains access to Nvidia chips after US lifts restrictions

Tue, 07/15/2025 - 11:25
  • Nvidia could soon resume selling H20 chips in China
  • CEO Jensen Huang has liaised with China and the US
  • The company lost billions in sales as a result of export bans

Nvidia is planning to resume sales of its H20 AI chips to China after the US government confirmed it would grant the tech giant export licenses.

The move comes after Nvidia CEO Jensen Huang's visit to China and his discussions with US President Donald Trump, all in a bid to reach an agreement and resume sales.

As a result, it's believed that Chinese companies like ByteDance and Tencent are now lining up to place orders for H20 chips after a brief pause in exports.

Nvidia could resume Chinese exports soon

Nvidia had already custom-designed the H20 chip for China after US export restrictions, but it was banned in April 2025, leading to an estimated cost of $10-15 billion in lost sales and a further $5.5 billion in inventory write-offs. The costs were so significant that Nvidia declared these losses in its quarterly earnings report.

The potential approval of licenses by the US government could reverse those losses, bringing in an additional $15-20 billion in revenue this year.

However, Trump isn't necessarily expressing a preference for Nvidia. AMD is also expecting a review of its export licenses for MI308 chips after reporting a smaller but still noteworthy $1.5 billion impact from export curbs.

Although domestic competition has heated up in China, many firms still prefer Nvidia for its CUDA ecosystem. Huang also acknowledged the importance of China to Nvidia's strategy, calling the market "massive, dynamic, and highly innovative" (via Reuters).

The potential easing of restrictions comes at an important time – China also eased rare earth export restrictions, suggesting the two global superpowers could be slowly reaching an agreement.


Is the inside of your PC a dust-filled nightmare? Maybe not in the future, thanks to case filters inspired by the human nose

Tue, 07/15/2025 - 11:15
  • A 'bioinspired super-adhesive filter' has been tested by Korean scientists
  • It uses oil, mimicking 'mucus-coated nasal hairs' for better filtering of dust
  • Your PC could be a lot more dust-free in the future as a result, if this ever comes to fruition with mesh case filters

The dust filters that aim to prevent particles from getting inside your desktop PC or laptop could take a big leap forward in the future, thanks to an invention that models itself after the human nose.

The 'bioinspired super-adhesive filter' has been tested by Korean researchers, and it's essentially an oil-coated mesh that more effectively stops dust, or other infiltrating particles, in their tracks.

A paper in the journal Nature, authored by scientists from Chung-Ang University in South Korea (as highlighted by Tom's Hardware), explains that the invention is "inspired by the natural filtration abilities of mucus-coated nasal hairs," which doesn't evoke the most pleasant of images.

The reality is that it's a 'biomimetic filter' featuring a thin liquid coating, and just as with nasal hairs, that liquid helps to trap invading particles more effectively, as they stick to it. Or as the paper puts it: "When PM [particulate matter] encounters the mucus, a meniscus forms, generating strong adhesion by capillarity."
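The capillary adhesion the paper invokes can be ballparked with the textbook estimate for a small sphere held by a liquid meniscus (these numbers are illustrative, not taken from the paper):

```latex
F_{\text{cap}} \approx 4\pi \gamma R \cos\theta
```

For a dust particle of radius $R = 5\,\mu\text{m}$ on a silicone-oil film (surface tension $\gamma \approx 0.02\ \text{N/m}$, contact angle $\theta \approx 0$), this gives $F_{\text{cap}} \approx 4\pi \times 0.02 \times 5\times10^{-6} \approx 1.3\,\mu\text{N}$, roughly five orders of magnitude more than the weight of such a particle (about $10^{-11}\ \text{N}$). That is why a captured particle stays captured instead of being re-entrained by airflow.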

The result of this oil-based filter mimic should be a more dust-free computer. And while the researchers are primarily targeting the likes of household or industrial air filtration systems, the tech could be applied to anything where a dust filter is needed, including the humble PC.

The filters use a "thin, uniform silicone oil layer" which is sprayed on; they capture significantly more particles than traditional filters and are said to remain effective for two to three times longer.

To extend their usable lifespan, the filters can be washed, dried, and the oil reapplied by simply spraying it on (with a non-toxic oil obviously being used).

Analysis: maybe snot

(Image credit: TechRadar)

Should we prepare ourselves for the Cooler Master Mucus 5N0T PC case, then? Well, maybe not, but this innovation could have serious benefits for the world of computers.

There's a balance required with the typical desktop PC case or laptop chassis, in that they need plenty of vents to keep the components inside cool. However, a lot of space for air to move through for cooling means a lot more dust potentially enters the PC.

Traditional meshes try to capture that dust, but don't always do a particularly good job. Sure, they help, but if you look inside your PC (through the glass window on the side, if it has one), you may notice it's pretty dusty in there.

After a few years, dust can accumulate in a considerable quantity, particularly around the CPU fan and likely the graphics card as well, if you don't clean inside your PC case. And let's be honest, most of us are way too good at procrastinating when it comes to that kind of PC housekeeping (I know I am).

New filters that rely on oil could be a real boon in terms of keeping your PC a lot more dust-free throughout its lifespan, perhaps eliminating the need for any cleaning at all one day - or at least making this chore a far more infrequent task.

If all this talk has inspired you to clean up your PC, do so very carefully (using a can of compressed air), and make sure you look at some good advice on how to carry this out properly (without damaging fans). That's especially the case with laptops, and I wouldn't recommend trying to open a notebook case, in order to clean inside, to anyone but the most tech-savvy readers.


North Korean hackers release malware-ridden packages into npm registry

Tue, 07/15/2025 - 10:58
  • Security researchers spotted 67 malicious packages on npm
  • The packages are part of the Contagious Interview campaign
  • They are most likely deployed by North Korean attackers

North Korean hackers have been seen pushing dozens of malicious packages to npm in an attempt to compromise western technology products through supply chain attacks.

Cybersecurity researchers at Socket claim the latest push of 67 malicious packages is just the second leg of a previous attack, in which 35 packages were published, as part of a campaign called Contagious Interview.

"The Contagious Interview operation continues to follow a whack-a-mole dynamic, where defenders detect and report malicious packages, and North Korean threat actors quickly respond by uploading new variants using the same, similar, or slightly evolved playbooks," Socket researcher Kirill Boychenko said.

Thousands of victims

Uploading malicious code to npm is just a setup. The real attack most likely happens elsewhere - on LinkedIn, Telegram, or Discord. North Korean attackers would pose as recruiters, or HR managers in large, reputable tech companies, and would reach out to software developers offering work.

The interview process includes multiple rounds of talks and concludes with a test assignment. That test assignment requires the job seeker to download and run an npm package, which is where the person ends up with a compromised device. Obviously, that doesn’t mean that other people couldn’t accidentally download tainted packages, as well.
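The reason the test assignment is dangerous is that npm runs "lifecycle" scripts such as preinstall and postinstall automatically during installation, so merely installing a package can execute attacker code. A minimal defensive sketch, assuming a hypothetical package name and script contents (this is not Socket's tooling), is to surface those hooks from package.json before installing:

```python
import json

# Lifecycle hooks that npm executes automatically at install time.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_scripts(package_json: str) -> dict:
    """Return any scripts that would run automatically on `npm install`."""
    manifest = json.loads(package_json)
    return {name: cmd
            for name, cmd in manifest.get("scripts", {}).items()
            if name in RISKY_HOOKS}

# Hypothetical manifest resembling a booby-trapped assignment package:
sample = json.dumps({
    "name": "hypothetical-assignment-pkg",
    "scripts": {"postinstall": "node payload.js", "test": "jest"},
})
print(risky_scripts(sample))  # {'postinstall': 'node payload.js'}
```

A clean manifest returns an empty dict; anything flagged deserves manual review before running `npm install`. Dedicated supply-chain scanners go much further, but the first-pass principle is the same.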

Cumulatively, the packages attracted more than 17,000 downloads, which is quite the attack surface.

North Koreans are infamous for their fake job and fake employee scams, whose goals usually vary between cyber-espionage and financial theft. If they’re not stealing intellectual property or proprietary data, then they’re stealing cryptocurrencies which the government uses to fund the state apparatus and its nuclear weapons program.

The campaigns deploy all sorts of malware, from the BeaverTail infostealer, across XORIndex Loader, HexEval, and many others.

"Contagious Interview threat actors will continue to diversify their malware portfolio, rotating through new npm maintainer aliases, reusing loaders such as HexEval Loader and malware families like BeaverTail and InvisibleFerret, and actively deploying newly observed variants including XORIndex Loader," the researchers concluded.

Via The Hacker News


AI is helping developers save time, but the struggle to find timely information is costing businesses millions

Tue, 07/15/2025 - 10:27
  • Many developers report saving time when using AI to improve code
  • Searching for information still needs to be addressed in AI tools
  • Developers and leaders need to collaborate on solutions

New Atlassian research suggests that generative AI is indeed helping boost productivity among developers, saving two in three (68%) an average of more than 10 hours per week and helping to improve code quality and build new features.

This is up from less than half (46%) last year, with Atlassian indicating that the time that's being saved through AI is now being reinvested into improving code and building new features, resulting in a net gain for companies.

However, it's not all roses, because every force is met with an equal and opposite force – many developers are reporting that AI is actually costing them time.

AI on the whole is beneficial to developers

According to the study, one in two developers report losing 10+ hours per week due to inefficiencies, like searching for information, suggesting that artificial intelligence isn't actually helping to improve productivity in some areas, or for some roles. As many as 90% report losing 6 hours per week due to fragmented workflows and poor collaboration.

Difficulties finding information, a lack of clear direction from leaders and poor collaboration with other teams were highlighted as three of the biggest contributors to poor productivity.

"This pressure-cooked mix of innovation and strain demands a closer look at how AI is reshaping the developer experience, and what that means for the future of software development across the industry," Atlassian CTO Rajeev Rajan explained.

The report also uncovered a clear disconnect between AI-powered tools and the tools that workers actually need. Although most solutions cater to coding, this only accounts for an estimated 16% of a developer's working week, with as much as 84% of time spent on tedious tasks.

Looking ahead, Atlassian calls for closer collaboration and communication between developers and leadership to identify friction points.


Finally! Sony revives its full-frame premium compact camera line after a 10-year hiatus, with the pricey new RX1R III

Tue, 07/15/2025 - 10:10
  • The new RX1R III has the same high-resolution 61MP sensor as the A7R V and A7C R
  • Like them, it also has the latest Bionz XR processor and AI autofocus skills
  • It features the same Zeiss 35mm f/2 lens as its 10-year-old predecessor

*This is a breaking news story. We'll be updating this page as we get more information

Sony dropped a huge surprise today by unveiling the RX1R III, a third instalment in its line of high-resolution full-frame premium compact cameras.

I'm shocked because the RX1R III comes 10 years after the RX1R II, without so much as a whisper of a leak, and such a lengthy gap between cameras is practically unheard of.

That said, premium compacts such as the Fujifilm X100VI have enjoyed a surge in popularity, so it's understandable that Sony has revived the series. And it's done so with its latest tech – this is no mere refresh of a 10-year-old model. No, the RX1R III features the same 61MP sensor, Bionz XR processor and AI processing chip for subject detection autofocus as the A7R V and A7C R.

Paired with Sony's super-sharp Zeiss Sonnar T* 35mm f/2 lens – that's the exact same optic as the one found in the RX1R II – you have what looks like the ultimate everyday camera for reportage, street, travel photography and more.

Here's the rub – the RX1R III costs $5,098 at B&H Photo with pre-orders available. (UK and Australia pricing TBC). That's quite the price hike from the RX1R II, which was announced in October 2015 for $3,299. It's certainly not just inflation.

That price point pitches the RX1R III against the Fujifilm GFX100RF, puts it a little under the Leica Q3, and makes it more than double the price of the Fujifilm X100VI.

I expect Sony's latest premium compact to have the advantage over these rivals in a few areas, namely performance and autofocus skills. However, the competition is much stiffer 10 years down the line than it was for the RX1R II.

We're yet to get our hands on the Sony RX1R III, but we'll be sure to do so and give it a proper test: it could become our top premium compact camera pick.

