Top online payments system PayPal and one of the best website builders, Wix, have strengthened their partnership with new integrations, making operations simpler for ecommerce website owners and checkouts easier for customers.
PayPal now comes as a built-in part of Wix Payments, meaning merchants will be able to connect their PayPal Business accounts, and manage all transactions in a single dashboard, alongside other Wix Payments activity. Previously, if merchants running Wix websites wanted to offer PayPal as a payment gateway, they had to switch between two platforms for all operations, including reports, chargeback alerts, and payouts.
Furthermore, the money from PayPal purchases will now flow directly into the Wix Payments account, giving merchants clearer visibility over their income, and reducing the need to reconcile between two systems.
Merchants will also be able to benefit from PayPal’s broader suite of features, such as PayPal Pay Later (BNPL) and Venmo, it was said. Finally, PayPal will also now serve as a Payment Service Provider (PSP), processing card purchases within Wix Payments.
“We’re always looking for ways to create more seamless experiences for our users and provide them with the best way to accept payments and manage funds online, in person, and on the go,” said Amit Sagiv and Volodymyr Tsukur, Co-Heads of Wix Payments.
“By bringing PayPal under the Wix Payments umbrella, we gain significantly more control over the user experience and how PayPal’s products are delivered to our merchants. This deeper integration allows us to improve conversion, offer more value, and drive stronger profitability, while giving our users a faster, more unified checkout flow.”
At press time, the new integration is only available to Wix Payments users in the US - however, the company said there are plans to make this feature available in more regions "over time".
While mainstream media channels were earlier considered by brands the primary destination for digital marketing across inspiration, consideration, and conversion, that is no longer true today.
With the growing diversification of the media landscape, Retail Media Networks (RMNs), collections of digital channels owned by retailers, have emerged as one of the fastest-growing digital media channels.
With healthy double-digit annual growth, the global retail media market is expected to reach $179.5 billion by 2025. In the UK alone, retail media ad spending is expected to overtake TV ad spending in 2025 and exceed £7 billion by 2028.
Amazon leads the pack with the lion’s share of retail media revenue (~$60bn in 2024), while Walmart is a distant second (~$4bn). This gap speaks to both the market’s growth potential and the intense competition facing other RMNs.
Compared to thin traditional retail margins, RMN margins typically exceed 70%. Many retailers have entered the fray on the strength of this additional revenue stream and margin contribution potential: over 200 RMNs have been launched in the last few years.
The rise of RMNs
The availability of various social media and online channels means the path to purchase is no longer linear and spans multiple channels. Post-pandemic, consumer behavior has changed significantly, as seen in the emergence of the Research Online Purchase Offline, or ‘ROPO’, effect.
Both local and large brands are constantly seeking opportunities to create brand awareness across available channels. They want to reach consumers with the right messages and content, at the right moment on their path to purchase.
Today’s retailers offer a variety of ad units and formats with audience reach across an extended ecosystem, spanning their own onsite, in-store, and partner networks. Most importantly, retailers with the right shopper loyalty programs have high-quality first-party (1P) data that advertisers want to capitalize on. Advertisers are therefore more willing to invest in retail media that can deliver incrementality and ROI.
A well-established RMN can create a true flywheel effect for retailers, growing sales, consumer experience, and ad revenue.
Challenges to effectiveness of RMNs
Despite the opportunity the RMN business presents, retailers may not generate the revenues they expect from brands and their agencies, for reasons such as the lack of a suitable operating model and technology capabilities. The retail business requires a buyer mindset, while media requires a seller mindset.
The absence of integrated joint business planning (JBP) hampers collaboration between retailer and brand organizations. Insufficient technology capabilities lead to poor 1P data and limited ad inventory and formats, often without a self-service model or the supplier insights needed to verify ROI and incrementality. Organizations also often apply the wrong metrics to measure success. And RMNs face intense competition from other retailers.
Ingredients of a successful RMN
Currently, over 80% of RMN spend by brands goes to onsite channels (the retailer’s .com site and mobile app) in the form of sponsored products, sponsored brands, display ads, and videos; the primary focus is bottom-of-the-funnel marketing.
Retailers have a high-margin revenue stream available in monetizing 1P data across their omnichannel properties by becoming full-funnel players: ecommerce sites, mobile apps, in-store ad units, magazines, and themed events.
With offsite channels like Meta, Google, TikTok, CTV, and in-store digital screens, RMNs can transform into full-funnel marketing channels. Many have already become omnichannel media owners through strategic partnerships such as Tesco Media & Insights + ITVX and Walmart + TikTok.
The following steps will support the success of RMNs:
When it comes to in-store, the ability to integrate ad servers and the screens delivering ad content, including a feedback loop on aspects like the number of impressions shown and view time, is crucial. By mapping these metrics against in-store purchases, retailers help brands get an accurate view of sales incrementality, iROAS, and other key metrics to close the marketing loop.
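To make that concrete, here is a minimal sketch of how incrementality and iROAS are typically derived by comparing an exposed group against a holdout control group; all figures below are hypothetical:

```python
# Illustrative iROAS calculation using a holdout (unexposed) control group.
exposed_revenue_per_shopper = 12.40   # avg spend of shoppers who saw the in-store ad
control_revenue_per_shopper = 11.10   # avg spend of a comparable unexposed group
exposed_shoppers = 50_000
ad_spend = 25_000.0

# Incremental revenue: the lift per shopper, scaled to everyone who saw the ad.
incremental_revenue = (exposed_revenue_per_shopper - control_revenue_per_shopper) * exposed_shoppers

# iROAS: incremental revenue generated per dollar of ad spend.
iroas = incremental_revenue / ad_spend

print(f"incremental revenue: ${incremental_revenue:,.0f}, iROAS: {iroas:.2f}")
# (12.40 - 11.10) * 50,000 = $65,000 incremental; iROAS = 2.60
```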
RMNs that offer a 360-degree view of customer interactions across retailer touchpoints will help brands achieve micro-segmentation and hyper-personalization.
From media buyer to agency mindset
To compete against the likes of Amazon, Google, and Meta, RMNs must demonstrate how they can provide superior ROI to brand advertisers by leveraging AI and ML technologies that influence consumer behavior. A consulting partner like Infosys can draw from its vast experience in implementing and integrating such technology platforms for global retailers.
Above all, retailers must begin to view RMN earnings as an additional revenue stream derived from a brand’s marketing spend. Those able to effectively don an agency’s hat in selling ad performance will encourage brands to entrust these precious marketing resources to them.
We've featured the best productivity tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Security professionals have long reported high levels of stress and burnout, only compounded by a skills shortage in the industry, and new research claims the sheer volume of threats, as well as the data those threats produce, is putting firms at risk.
Research from Google Cloud found threat notifications aren’t the helpful tool they could be, and in fact can overwhelm security teams, with nearly two-thirds (61%) of security practitioners saying they think there are ‘too many threat intelligence data feeds’, and 60% believing there are too few threat analysts to sift through the data efficiently.
“Rather than aiding efficiency, myriad [threat intelligence] feeds inundate security teams with data, making it hard to extract useful insights or prioritize and respond to threats. Security teams need visibility into relevant threats, AI-powered correlation at scale, and skilled defenders to use actionable insights, enabling a shift from a reactive to a proactive security posture,” the study argued.
Needles in a haystack
Too much data leaves analysts stuck in ‘reactive mode’: 86% of respondents say their organisation has gaps in its understanding of the threat landscape, 85% say more focus could be put on emerging threats, and 72% admit they are mostly reactive to threats, unable to get ahead of trends.
Adjacent research from SentinelOne shows that a large proportion of cloud security alerts are false positives (not relevant to the organisation). The majority of respondents (53%) say that over half of the alerts they receive are false positives, underlining just how real ‘alert fatigue’ is.
This makes securing cloud environments difficult, say 92% of respondents, with too many point solutions leading to management and integration issues, creating more and lower-quality alerts, and therefore slower reactions to attacks amid the confusion.
Perhaps unsurprisingly, both sets of research have one suggestion to solve this issue - and it’s not investing in better training and support to address the skills shortage. Instead, you guessed it, it’s AI.
AI can help ease the pressure by improving an organisation’s ability to operationalise threat intelligence, generating ‘easy-to-read summaries’ and recommending next steps to ‘uplevel junior analysts’, Google's research says.
"We believe the key is to embed threat intelligence directly into security workflows and tools, so it can be accessed and analyzed quickly and effectively," noted Jayce Nichols, Google Cloud Director, Intelligence Solutions.
"AI has a vital role in this integration, helping to synthesize the raw data, manage repetitive tasks, and reduce toil to free human analysts to focus their efforts on critical decision-making."
Xbox has unveiled its plans for Gamescom 2025, which will include the opportunity to play a Hollow Knight: Silksong demo.
The brand will have a strong presence at the European gaming event, which runs from August 20 to 24 in Cologne, Germany. The Xbox booth will show off more than 20 games across a whopping 120 demo stations, alongside offering photo opportunities and unique experiences.
Big highlights include hands-on time with the Asus ROG Xbox Ally and ROG Xbox Ally X, the two recently revealed Xbox PC handhelds. A demo of Hollow Knight: Silksong will be playable on the handheld, potentially giving us our first substantial look at the long-awaited game in years.
Hollow Knight: Silksong was first announced back in 2019, but we have hardly heard a peep about it aside from a few brief appearances at various showcase events, such as the Nintendo Switch 2 Direct earlier this year. The game also featured prominently in the Asus ROG Xbox Ally and ROG Xbox Ally X reveal, where it was confirmed that it would be available in time for the handhelds' launch.
Could this mean that a Hollow Knight: Silksong release is around the corner? It definitely seems so, especially with the handhelds slated for later this year.
The Xbox booth will also offer visitors the chance to try the likes of Grounded 2, the first public hands-on demo of Ninja Gaiden 4, in addition to some third-party titles like Borderlands 4 and Metal Gear Solid Delta: Snake Eater.
Microsoft has announced that it will require age verification for the continued use of Xbox social features, per the UK's Online Safety Act.
In a new Xbox support post, Microsoft said: "As part of our compliance programme for the UK Online Safety Act and our ongoing investments in tools and technologies that help ensure age-appropriate experiences, we're introducing age verification for Microsoft accounts in the UK."
The company explained that players over the age of 18 who don't verify their age between now and the beginning of 2026 can still play their Xbox console, but "starting early next year", certain social features will be limited to friends only unless age verification is complete.
For now, accounts belonging to players 18 and over in the UK are being asked to verify their accounts and will begin seeing notifications encouraging them to verify their age. This is an optional process for now, but it will change come early 2026.
Until an account's age is verified, users will only be able to use voice and text communication, party functionality, game invites, and user-generated content like the Activity Feed with friends.
Without age verification, the Looking for Group and custom clubs features won't be accessible.
"If you have an existing account or are setting up a new one, you may be asked to verify your age using Yoti, a trusted and secure third-party identity verification service," the post reads.
There are several ways to verify identity, including with a government-issued photo ID, like a passport, residency card, or any other official identification document with the user's picture on it.
They can also use a live photo for ID verification, a mobile number to verify their age through their carrier, or a credit card check.
"Whether a player verifies their age will not affect any previous purchases, entitlements, gameplay history, achievements, or the ability to play and purchase games, however we encourage players to verify their age via this one-time process now to avoid uninterrupted use of social features on Xbox in the future," said Xbox vice president of gaming trust and safety Kim Kunes in a separate Xbox Wire post.
"As this age verification process rolls out across the UK, we’ll continue to evaluate how we can keep players around the world safe and learn from the UK process. We expect to roll out age verification processes to more regions in the future. There is no one-size-fits-all solution to player safety, so these methods may look different across regions and experiences."
Xbox isn't the first platform to be affected by the UK's Online Safety Act. Reddit and Discord have also implemented new age verification systems to access 18+ content; however, gamers are already getting around Discord's tool by using Death Stranding's photo mode.
Microsoft has reportedly fixed a bug in Windows 11 which caused the mouse cursor to supersize itself in irritating fashion under certain circumstances.
Windows Latest explained the nature of the bug, and provided a video illustrating the odd behavior. It shows the mouse cursor being at its default size (which is '1' in the slider in settings for the mouse), and yet clearly the cursor is far larger than it should be.
When Windows Latest manipulates the slider to make the mouse cursor larger, then returns it to a size of '1', the cursor ends up being corrected and back to normal. Apparently, this issue manifests after resuming from sleep on a Windows 11 PC.
Windows Latest says this bug has been kicking around since Windows 11 24H2 first arrived (in October last year), but the issue hasn't been a constant thorn in its side. Seemingly it has only happened now and again – but nonetheless, it's been a continued annoyance.
Not anymore, though, because apparently with the July update for Windows 11, the problem has been fixed.
(Image credit: Zachariah Kelly / TechRadar)
Analysis: Mouse matters
Oddly enough, Microsoft never acknowledged this issue, although other Windows 11 users certainly have – Windows Latest hasn't been alone in suffering at the hands of this bug.
I've spotted a few reports on Reddit regarding the issue, and some posters have experienced the supersized cursor after rebooting their machine rather than coming back from sleep mode (and there are similar complaints on Microsoft's own help forums).
Whatever the case, the issue seems to be fairly random in terms of when or whether it occurs, but the commonality is some kind of change of state for the PC in terms of sleeping or restarting.
While the mouse cursor changing size may not sound like that big a deal, it's actually pretty disruptive. As Windows Latest observes, having a supersized cursor can make it fiddlier and more difficult to select smaller menu items in apps or Windows 11 itself.
And if you weren't aware of the mentioned workaround – to head into the Settings app, find the mouse size slider, and adjust it – you might end up rebooting your PC to cure the problem. And that's if a reboot does actually fix things, because, as some others have noted, restarting can cause the issue, too.
This was an irksome glitch, then, so it's good to hear that it's now apparently resolved with the latest update for Windows 11.
New data from Synergy Research has claimed European providers of cloud storage and other services only account for 15% of their own regional market, highlighting the hold that US rivals have even in foreign territories.
Overall market share dropped to around 15% in 2022, remaining steady ever since, but in the five years from 2017 to 2022 European cloud providers lost half of their share, down from 29%.
While European providers were able to triple their revenues between 2017 and 2024, the market grew sixfold in that same period – it's now worth an estimated €61 billion.
Europe's cloud market is dominated by... the US
Amazon, Microsoft and Google now control around 70% of the European cloud market, Synergy found, with SAP and Deutsche Telekom confirmed to be the leading EU providers, but with just 2% of the market each. OVHcloud, Telecom Italia and Orange rounded out the top five.
Synergy described the dominance of US cloud giants as an "impossible hill to climb" for European challengers, with US providers typically investing around €10 billion every single quarter into European infrastructure. On the flip side, European firms typically lack the long-term investment support required by the cloud sector.
"The cloud market is a game of scale where aspiring leaders have to place huge financial bets, must have a long-term view of investments and profitability, must maintain a focused determination to succeed, and must consistently achieve operational excellence," Synergy Chief Analyst John Dinsdale explained.
However, change could be on the horizon with data privacy issues bubbling to the surface under Trump-era US policies - as Microsoft recently admitted it can't guarantee data sovereignty in Europe if the US government demands access.
Still, Dinsdale believes the US cloud dominance could be hard to shake off now that it's embedded in Europe: "While many European cloud providers will continue to grow, they are unlikely to move the needle much in terms of overall European market share."
Spider-Man: Brand New Day won't arrive in theaters until July 2026, but some fans think they've already worked out where it'll sit on the Marvel timeline.
With filming due to begin on Spider-Man: Brand New Day in August, preparations have been underway in Glasgow for a number of weeks now. The Scottish city is being used as a stand-in for New York City (NYC), so Glaswegians have seen their hometown receive a US makeover before the cameras start rolling.
One eagle-eyed Marvel fan has wasted no time snapping images of the sets being erected for Spider-Man 4, too. Indeed, X/Twitter user lukec1605 recently uploaded some photographs that indicate what year it might take place in.
Photos from set on #SpiderManBrandNewDay @eavoss @NewRockstars pic.twitter.com/LZICv2Iohf (July 28, 2025)
As the above post reveals, the Marvel Cinematic Universe's (MCU) version of NYC is being renovated, with numerous construction builds in progress. This might have something to do with events that occurred in Thunderbolts*, aka one of three new movies released by Marvel Studios this year. That film is set in the MCU's present, which is believed to be the year 2027. You can read more about what happened in that flick via our Thunderbolts* ending explained piece.
But I'm getting off-track. Two of the images in the aforementioned post reveal that work is due to be completed on these renovations and new builds by December 2027. Cue MCU fans jumping to conclusions and convincing themselves that the next Marvel Phase 6 movie will take place in late 2027.
I'm not convinced this is the case, though. Those pictures only indicate that the buildings will be erected before that year ends. Depending on the size of said builds, it can take multiple years to complete work on them, too. It's entirely possible, then, that Spider-Man's next outing in the MCU could be set in early or mid-2027, or even sometime in 2026.
Some Marvel fans don't think Spider-Man 4 will be set in late 2027 (Image credit: Reddit)
There's evidence that Brand New Day could take place well before December 2027 as well. Season 1 of Daredevil: Born Again, whose story is thought to play out between late 2026 and early 2027, sees Wilson Fisk become NYC's latest mayor. Throughout the Disney+ show's first installment, Fisk fast-tracks a number of developments in the city, so it's plausible that the ongoing construction work was greenlit by him. If that's the case, events in Spider-Man 4 might run concurrent to Daredevil: Born Again season 1.
That said, Jon Bernthal's Frank Castle/The Punisher will have a supporting role to play in Brand New Day. The last time we saw him, i.e. in Born Again's season 1 finale, he escaped captivity after being incarcerated in a secret prison facility patrolled by Fisk's Anti-Vigilante Task Force. In order to show up in Spider-Man 4, he'll need to have broken out of jail before that film begins. This would mean Brand New Day has to take place from mid-2027 onwards.
Hopefully, we'll get a better idea of when the film is set, plus who Stranger Things' Sadie Sink is playing in Spider-Man 4, when principal photography finally gets underway. In the meantime, find out why Spider-Man: Brand New Day's release was delayed or learn more about how its official title takes its cue from the most controversial moment in Spidey's comic book history.
PlayStation's Project Defiant fight stick finally has an official name, alongside brand new details and a vague release window.
A new PlayStation Blog post has revealed that Project Defiant is officially called the FlexStrike, and it's currently set to arrive sometime in 2026. The news comes right before Sony's own EVO 2025 fighting game tournament event in Las Vegas, where the FlexStrike will be on display (but not playable) for the first time.
FlexStrike will be compatible with both PS5 and PC, and it supports Sony's proprietary PlayStation Link wireless tech. Here, a PlayStation Link USB adapter can be used to hook up a compatible gaming headset - like the Pulse Elite or Pulse Explore earbuds - as well as up to two FlexStrike controllers for local play.
Like many of the best fight sticks, the FlexStrike will also be customizable to a degree. One really cool feature shown in the trailer (above) is a 'toolless' gate swap. By opening the non-slip grip at the bottom, players will be able to swap between square, circular, and octagonal gates on the fly with the joystick. This means you won't have to buy a separate joystick or gate, or use any additional tools to get the job done.
The controller has several amenities you'll find on other top fight sticks, including a stick input swap for menu navigation, and a lock switch that disables certain buttons (like pausing) for tournament play. The eight face buttons are also mechanical, which means they should register clicky, instantaneous inputs.
Lastly, players can use a DualSense Wireless Controller in tandem with the FlexStrike for menu navigation, not unlike what we see with the PlayStation Access controller.
PlayStation appears to be investing quite heavily in fighting game hardware and software. It's likely that the FlexStrike will launch around the same time as Marvel Tokon: Fighting Souls, published by PlayStation Studios and developed by Arc System Works, the team behind Guilty Gear Strive, Granblue Fantasy Versus: Rising, and many more of the best fighting games.
TechRadar Gaming will be very keen to deliver a verdict on the FlexStrike when it launches next year, so stay tuned for a potential review in 2026.
U.N. officials say many people in Gaza are experiencing "famine-like conditions." Health experts who have studied past famines warn that the fallout can reverberate across generations.
(Image credit: Abdalhkem Abu Riash)
Trump says he personally told his "very good friend Rupert Murdoch" that he had not sent a racy birthday greeting two decades ago to Jeffrey Epstein. Murdoch's Journal reported it anyway.
(Image credit: BRENDAN SMIALOWSKI/AFP via Getty Images)
"DACA does not confer any form of legal status in this country," said DHS assistant press secretary Tricia McLaughlin, who then encouraged "every person here illegally" to self-deport.
(Image credit: Jahi Chikwendiu)
Experience Level Objectives (XLOs) represent a fundamental evolution in monitoring philosophy, moving beyond the conventional Service Level Objectives (SLOs) and SLAs that have dominated IT operations for years.
This post examines the key differences between these approaches and explains why XLOs provide a more business-aligned framework for modern digital operations.
User-centric vs. infrastructure-centric measurements
Traditional SLA and SLO monitoring has primarily focused on system availability and IT infrastructure health. This approach centers on technical metrics like uptime percentages, server response times, and infrastructure resource utilization. While these metrics provide valuable insights into system health, they create a significant disconnect between technical indicators and actual business metrics.
In contrast, XLO monitoring prioritizes metrics that directly gauge user experience and satisfaction. This shift reflects a growing recognition that digital service quality cannot be measured solely by whether systems are functioning, but rather by how well they are functioning from the user's perspective. As research increasingly shows, "slow is the new down"—acknowledging that poor performance, even without complete failure, can severely impact user satisfaction and business outcomes.
This philosophical difference addresses a critical blind spot in traditional monitoring approaches. A system can report 100% uptime while delivering a frustratingly slow experience that drives users away. XLOs close this gap by measuring what actually matters to users: the quality and speed of their interactions with digital services.
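To illustrate that gap, here is a minimal Python sketch, with hypothetical field names and synthetic data, contrasting a classic availability SLO with an experience-oriented XLO computed from the same page views:

```python
from dataclasses import dataclass

@dataclass
class PageView:
    success: bool    # did the request complete without an error?
    lcp_ms: float    # Largest Contentful Paint observed for this view

def slo_availability(views: list[PageView]) -> float:
    """Traditional SLO lens: the fraction of requests that simply succeeded."""
    return sum(v.success for v in views) / len(views)

def xlo_good_experience(views: list[PageView], lcp_budget_ms: float = 2500.0) -> float:
    """XLO lens: the fraction of views that were both successful AND fast."""
    return sum(v.success and v.lcp_ms <= lcp_budget_ms for v in views) / len(views)

# A service can report 100% uptime while half its visits feel painfully slow.
views = [PageView(True, 1200.0)] * 50 + [PageView(True, 4800.0)] * 50
print(f"SLO availability:    {slo_availability(views):.0%}")     # 100%
print(f"XLO good experience: {xlo_good_experience(views):.0%}")  # 50%
```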
The importance of monitoring from where it matters
Most monitoring tools rely on cloud-based vantage points for digital experience monitoring: convenient (for the vendor), but disconnected from the actual user experience. These first-mile checks confirm whether the infrastructure is up, but say little about how your application is experienced by users in the real world. Hence, they are primarily useful for QA purposes, especially for new code releases.
XLOs shift the perspective. They depend on insights captured from where users truly are, whether that’s a connection inside an office through a regional ISP, a mobile connection through a mobile operator, or even a laptop connected via Starlink. This visibility uncovers the real issues users face: congestion, routing delays, delays from third-party code, and other last-mile failures that cloud monitoring can’t see.
If SLOs tell you your system is available, XLOs tell you whether it’s delivering the experience the business expects to real users. This outside-in view is what turns data into real business insight. It closes the visibility gap between infrastructure health and user experience—and that’s where the real value lies.
End-to-End Journey Perspective
Traditional SLOs often focus on individual components or services, creating a fragmented view of performance. XLOs, by contrast, are designed to capture the complete user journey across multiple systems and services. This end-to-end perspective reflects the reality that users experience services holistically, not as isolated components. Modern digital services span multiple providers, platforms, and technologies, making isolated component monitoring inadequate for ensuring overall service quality.
While an SLA may measure the uptime of an S3 storage bucket, or the uptime of your DNS or CDN provider, these are only three of the dozens or hundreds of components in an entire system. As a rule of thumb, the quality of the experience delivered by a system is only as good as the worst of its components. Thus, while most components could be working perfectly, an issue in a third-party API may render the entire experience unacceptable for your users.
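A quick worked example shows why the weakest link dominates; the per-component rates below are purely illustrative:

```python
# Hypothetical per-component good-experience rates for one user journey.
components = {
    "CDN": 0.9999,
    "DNS": 0.9999,
    "front-end servers": 0.999,
    "third-party API": 0.97,   # the weak link
    "database": 0.9995,
}

# A journey is good only if every component it touches behaves.
journey_ok = 1.0
for availability in components.values():
    journey_ok *= availability

print(f"end-to-end good-experience rate: {journey_ok:.4f}")   # ~0.9684
# Nearly every component is close to perfect, yet users still see ~3.2% bad
# journeys - barely better than the worst component alone (0.97).
```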
The XLO, by contrast, is less concerned with CPU utilization or database response time, and entirely focused on the resulting experience for a user – whether that user is a customer, an internal user, or an API consumed by an internal or external system.
Business alignment and value demonstration
A critical difference between XLOs and traditional SLOs is their alignment with business outcomes. Traditional SLOs primarily serve technical teams, measuring system health in terms that may not translate directly to business impact, while SLAs establish accountability from vendors that deliver a component of the functionality of a system. This creates challenges in demonstrating IT's value to business stakeholders and securing resources for performance improvements.
XLOs fundamentally change this dynamic by providing metrics that directly correlate with business performance. By moving beyond "Is it up?" to answer "Is it meeting our users’ expectations?", XLOs address what business stakeholders actually care about. This alignment helps prove the value of IT Operations and justify investments in performance improvements by demonstrating clear connections between technical performance and business outcomes.
As more components of our business and personal lives are based on digital experiences or supported by digital processes, delivering on those expectations is a business priority. A recent survey of thousands of users showed that bad digital experiences are the main reason consumers switch to different banking providers.
As a specific example, a team can set specific XLO targets that reflect business priorities, such as ensuring the critical part of loading a page, measured as Largest Contentful Paint (LCP), does not exceed 2.5 seconds 90% of the time in a given month. This specific threshold directly impacts bounce rates and user engagement, providing clear business value.
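As a hedged sketch of how such a target might be checked, the snippet below computes a nearest-rank 90th percentile over a toy month of LCP samples; the helper names are illustrative, not from any particular monitoring product:

```python
import math

def lcp_p90(samples_ms: list[float]) -> float:
    """90th-percentile LCP for the period (nearest-rank method)."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.90 * len(ordered)) - 1   # 1-indexed nearest rank -> 0-indexed
    return ordered[rank]

def xlo_met(samples_ms: list[float], budget_ms: float = 2500.0) -> bool:
    """The XLO holds if 90% of page views render their largest content within budget."""
    return lcp_p90(samples_ms) <= budget_ms

# Toy month of LCP samples (ms); exactly one view in ten misses the 2.5 s target.
month = [1800.0, 1900.0, 2000.0, 2100.0, 2200.0, 2300.0, 2400.0, 2450.0, 2490.0, 2600.0]
print(lcp_p90(month), xlo_met(month))   # 2490.0 True: 90% of views fit the budget
```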
Accelerating maturity with XLOs
According to the GigaOm Maturity Model for IPM, organizations progress through five stages, from chaotic, reactive operations to optimized, business-driven monitoring. Traditional SLOs keep teams stuck in the early stages, focused on infrastructure uptime and siloed metrics. XLOs act as a catalyst for maturity by:
Aligning with advanced stages: XLOs introduce user-focused metrics that resonate with the 'Quantitative' and 'Optimized' stages, emphasizing business outcomes.
Facilitating proactive issue detection: Tools like burndown charts enable early identification of performance degradations, a hallmark of mature operations.
Fostering cross-functional collaboration: XLOs unify teams around shared objectives, essential for achieving higher maturity levels.
For example, a retail company using XLOs to monitor checkout flow performance (e.g., Time to Interactive across regions) isn’t just fixing errors—they’re optimizing a revenue-critical journey, a hallmark of GigaOm’s value-based observability.
Proactive vs. Reactive Monitoring
Traditional SLO monitoring often creates a reactive posture, where teams respond to issues after they've already impacted users. This approach typically waits for error thresholds to trigger alerts before teams mobilize to address problems. Once these thresholds are crossed, the business is already suffering some impact.
XLO monitoring enables a substantially more proactive approach. By tracking performance trends over time and proactively simulating user experiences from their real-world locations, businesses can detect gradual degradations before they breach critical thresholds – and often before they impact users.
Tracking XLOs over time is where burn-down charts come into play. Burn-down charts help track the progress of your performance against your set objectives, showing how much of your performance budget is left as time goes on.
When a team adopts XLOs as a KPI, it influences how teams make decisions, how they define success, and what risks are acceptable. Operations can evaluate whether to release changes based on their projected impact on experience metrics, maintaining consistently high user satisfaction. In this way, burn-down charts offer a clear status of service health over time.
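One simple way to picture this, assuming a hypothetical allowance of 10% slow views per month, is to track the remaining experience budget day by day:

```python
# A minimal burn-down sketch: every bad page view spends experience budget.
BUDGET_FRACTION = 0.10               # the XLO tolerates 10% of views missing the target
EXPECTED_VIEWS_PER_MONTH = 1_000_000

budget_total = BUDGET_FRACTION * EXPECTED_VIEWS_PER_MONTH   # 100,000 bad views allowed

def budget_remaining(bad_views_so_far: int) -> float:
    """Fraction of the monthly experience budget still unspent."""
    return max(0.0, 1.0 - bad_views_so_far / budget_total)

# Daily tally of views that missed the 2.5 s LCP target (toy numbers).
daily_bad_views = [2_000, 2_500, 9_000, 3_000, 2_200]
spent = 0
for day, bad in enumerate(daily_bad_views, start=1):
    spent += bad
    print(f"day {day}: {budget_remaining(spent):.1%} of budget left")
# The sudden dip on day 3 flags a degradation long before the XLO itself is breached.
```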
Breaking down organizational silos
A significant practical difference between XLO and traditional SLO approaches lies in their organizational impact. Traditional SLOs often reinforce existing silos between development, operations, and business teams, as each group focuses on their own specialized metrics.
XLOs, by contrast, create a common language and shared objectives across organizational boundaries. By providing metrics that matter to both technical and business stakeholders, XLOs facilitate cross-functional collaboration and shared accountability for user experience. This collaborative approach enables faster problem resolution and more effective performance optimization.
Building a digital operations center (DOC)
For a long time, IT operations teams have built NOCs and SOCs to manage network operations and security. In today’s world, where most business interactions are digital, many organizations are formalizing their cross-functional efforts as they mature by building Digital Operations Centers (DOCs).
A DOC brings together teams across IT, engineering, and business functions to monitor experience-centric metrics in real time. With XLOs at the core, a DOC isn’t just a control room—it’s a shared space for aligning around user outcomes, accelerating response times, and making performance a business-wide priority. It’s a sign of maturity and a strategic investment in digital resilience.
A DOC puts digital user experience at the center of the business and provides visibility into how every critical digital operation in the business performs, and into the performance of all the key components that contribute to delivering that experience: from the internet backbone to third-party components, cloud services, APIs, DNS, front-end servers, databases, and microservices, down to application code.
A DOC is a natural evolution of the NOC and SOC as IT operations teams evolve from a systems-uptime focus into true operational intelligence teams that are a critical component of how the business operates, not only the team keeping the lights on.
Specific Experience Metrics
XLO monitoring can measure specific performance metrics that directly impact user experience, including:
Wait Time: The duration between the user’s request and the server’s initial response
Response Time: The total time taken for the server to process a request and send back the complete response
First Contentful Paint (FCP): The time it takes for the browser to render the first piece of content on the screen
Largest Contentful Paint (LCP): The time at which the largest content element becomes visible in the browser
Cumulative Layout Shift (CLS): A measure of how much the layout of the page shifts unexpectedly during loading
Time to Interactive: The time it takes for a page to become fully interactive and responsive to user inputs
These metrics create a multidimensional view of the user experience that traditional infrastructure-focused SLOs simply cannot provide.
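A minimal sketch of that multidimensional view, using illustrative thresholds (loosely modeled on common web-performance budgets, not an official standard), might evaluate a single page view against every dimension at once:

```python
# Illustrative experience budgets per metric (not official standards).
THRESHOLDS = {
    "wait_time_ms": 200,
    "fcp_ms": 1800,
    "lcp_ms": 2500,
    "cls": 0.1,
    "tti_ms": 3800,
}

def evaluate_view(measurements: dict[str, float]) -> dict[str, bool]:
    """Return a per-metric pass/fail verdict for a single page view."""
    return {metric: measurements[metric] <= limit for metric, limit in THRESHOLDS.items()}

view = {"wait_time_ms": 150, "fcp_ms": 1400, "lcp_ms": 3100, "cls": 0.05, "tti_ms": 2900}
verdict = evaluate_view(view)
print(verdict)                # only lcp_ms fails here
print(all(verdict.values()))  # False: one slow dimension drags the whole experience down
```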
The Strategic Value of XLO Monitoring
SLOs and Experience Level Objectives (XLOs) aren’t just buzzwords; they're guiding principles for ensuring performance indicators align with real customer expectations.
According to the SRE Report 2025, 40% of businesses are prioritizing the adoption of SLOs and XLOs over the next 12 months. By focusing on user experience rather than just system availability, providing specific experience-focused metrics, aligning with business outcomes, enabling proactive optimization, capturing end-to-end journeys, and breaking down organizational silos, XLOs provide a more comprehensive and business-relevant approach to monitoring.
This evolution reflects changing expectations from both users and businesses.
For organizations seeking to improve digital experience quality while demonstrating clear business value from IT investments, XLOs offer a powerful framework that goes beyond traditional SLO limitations. By implementing XLO monitoring, organizations can align technical performance with business objectives, ultimately delivering superior digital experiences that drive competitive advantage.
We've listed the best Active Directory documentation tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
IT teams know the balancing act all too well. Security teams implement new protocols that generate a flood of user complaints. The IT help desk is overwhelmed with tickets that could have been prevented.
Meanwhile, employees bypass carefully designed systems because they're too cumbersome. And today's increasingly distributed workforce only exacerbates this balancing act, creating a larger attack surface across more devices, locations, and applications.
While IT management may have accepted this as the inevitable reality, the challenges are only intensifying. AI-powered cyberattacks are becoming more sophisticated daily, capable of adapting faster than traditional security measures can respond. The old playbook of treating security, IT operations, and employee experience as separate functions has reached its breaking point.
A unified approach is needed, or IT leaders risk not only exposing their organization to security vulnerabilities, but also losing visibility and control of their organizations’ digital work environments.
The "self-driving car" of enterprise ITAlthough the rise of new AI tools and devices has created headaches for IT, AI-powered digital environments, or an autonomous workspace, offer IT leaders a path to modernizing and knocking down the divisions that exist across employee experience, security and operations.
These environments self-configure, self-heal, and self-secure with minimal human intervention. Think of it as the "self-driving car" of enterprise IT.
Unlike traditional automated systems that follow preset rules and require constant human oversight, autonomous workspaces continuously learn from data patterns and user behaviors.
Because these workspaces monitor every aspect of the digital environment simultaneously, the silos that previously plagued IT teams’ decision-making are eliminated, giving IT teams full context of their organization’s digital environment.
For example, when a security anomaly emerges, the system doesn't just alert administrators; it automatically quarantines the threat while maintaining seamless user access to legitimate resources. When a device falls out of compliance, it self-corrects without user intervention.
And rather than looking at these issues in a vacuum, autonomous workspaces enable IT to connect dots across different factions of the workplace, understanding if an employee’s application performance issue is underpinned by a larger problem or vulnerability.
The strategic imperative for not only IT teams, but a business's bottom line
While an autonomous workspace can free IT teams from the endless cycle of firefighting, the benefits of adopting one extend beyond just the IT team, ultimately providing a foundation for business resiliency and cost efficiency.
1. Security rigor
As generative AI tools become embedded in daily workflows, they also broaden the attack surface, and a reactive security approach is proving inadequate. Autonomous workspaces flip this model by implementing predictive zero-trust security. Instead of waiting for threats to manifest, these systems continuously analyze patterns and behaviors to identify potential risks before they materialize.
The system makes intelligent trust decisions in milliseconds, based on a comprehensive understanding of user behavior, network conditions, and threat intelligence, helping equip a business for the increasingly sophisticated cyberattacks of today and the future.
2. Employee experience benefits
Organizations that take a holistic approach to employees’ digital experience gain more than just operational benefits. A modern digital experience gives employees self-service access to the apps, resources, and support they need, when they need it.
This approach helps reduce disruptions and prevents issues before they can impact employee productivity. With secure access from anywhere, employees can stay focused and in control of how they work.
The result is stronger collaboration, higher employee satisfaction, and a significant advantage in attracting and retaining top talent in a growing hybrid work environment.
3. Streamlined resources
Think about the traditional approach to endpoint management. Security teams set protocols. IT operations teams install management tools to ensure compliance. And user experience teams try to minimize the performance impact. The result? Conflicting priorities, duplicated efforts, and frustrated users. Autonomous workspaces break down silos and integrate these different functions into a single, intelligent platform, streamlining IT resources and costs while enhancing collaboration across teams.
The most successful implementations of autonomous workspaces share a common characteristic: they eliminate artificial boundaries between security, IT operations, and employee experience teams. This convergence isn't just about organizational structure—it's about creating technology ecosystems where security and IT enhance rather than complicate employee productivity and collaboration.
As the enterprise landscape continues to evolve, the organizations that thrive will be those that embrace autonomous workspaces not merely as a technology solution, but as the foundation of their digital work strategy.
We list the best IT documentation tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Generative AI is a headline act in many industries, but the data powering these AI tools plays the lead role backstage. Without clean, curated, and compliant data, even the most ambitious AI and machine learning (ML) initiatives will falter.
Today, enterprises are moving quickly to integrate AI into their operations. According to McKinsey, in 2024, 65% of organizations reported regularly using generative AI, marking a twofold increase from 2023.
However, the true potential of AI and ML in the enterprise won’t come from surface-level content generation. It will come from deeply embedding models into decision-making systems, workflows, and customer-facing processes where data quality, governance, and trust become central.
Additionally, simply incorporating AI and ML features and functionality into foundational applications won’t do an enterprise any good. Organizations must leverage all aspects of their data to create strategic advantages that help them stand out from the competition.
To do this, the data powering their applications must be clean and accurate to mitigate bias, hallucinations, and/or regulatory infractions. Otherwise, they risk issues in training and output, ultimately negating the benefits that the AI and ML projects were initially meant to create.
The importance of good, clean data
Data is the foundation of any successful AI initiative, and enterprises need to raise the bar for data quality, completeness, and ethical governance. However, this isn’t always as easy as it sounds. According to Qlik, 81% of companies still struggle with AI data quality, and 77% of companies with over $5 billion in revenue expect poor AI data quality to cause a major crisis.
In 2021, for example, Zillow shut down Zillow Offers because it failed to accurately value homes due to faulty algorithms, leading to massive losses. This case highlights a critical point: AI and ML projects must operate on good, clean data in order to produce the most accurate results.
Today, AI and ML technologies rely on data to learn patterns, make predictions and recommendations, and help enterprises drive better decision-making. Techniques like retrieval-augmented generation (RAG) pull from enterprise knowledge bases in real-time, but if those sources are incomplete or outdated, the model will generate inaccurate or irrelevant answers.
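As a toy illustration of why freshness matters in RAG, assuming a hypothetical document store with an `updated` timestamp per entry, a retrieval step can gate out stale sources before they ever reach the model:

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=90)    # illustrative freshness policy, not a standard

def retrieve(query: str, documents: list[dict], now: datetime) -> list[dict]:
    """Toy retrieval step: naive keyword match, followed by a freshness gate."""
    hits = [d for d in documents if query.lower() in d["text"].lower()]
    return [d for d in hits if now - d["updated"] <= MAX_AGE]

docs = [
    {"text": "Refund policy: 30 days.", "updated": datetime(2025, 6, 1)},
    {"text": "Refund policy: 14 days.", "updated": datetime(2022, 1, 1)},  # outdated
]

fresh = retrieve("refund", docs, now=datetime(2025, 7, 15))
print(fresh)  # only the current policy survives; the stale answer never reaches the prompt
```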
Agentic AI’s ability to act reliably hinges on consuming accurate, timely data in real time. For example, an autonomous trading algorithm reacting to faulty market data could trigger millions in losses within seconds.
Establishing and maintaining an environment of good data
In order for enterprises to establish and maintain an environment of good data that can be leveraged for AI and ML usage, there are three key elements to consider:
1. Build a comprehensive data collection engine
Effective data collection is essential for successful AI and ML projects, and enterprises need modern data platforms and tools, such as those for integration, transformation, quality monitoring, cataloging, and observability, to support the demands of their AI development and output. These ensure the organization is getting the right data.
Whether the data is structured, semi-structured, or unstructured, it should come from a variety of sources and methods to support robust model training and testing, encapsulating the different user scenarios a model may encounter upon deployment. Additionally, companies must ensure they follow ethical data collection standards. Whether the data is first-, second-, or third-party, it must be sourced correctly and with consent given for its collection and use.
2. Ensure high data quality
High-quality, fit-for-purpose data is imperative for the performance, accuracy, and reliability of AI and ML models. Given that these technologies introduce new dimensions, the data used must be specifically aligned with the requirements of the intended use case. However, 67% of data and analytics professionals say they don’t have complete trust in their organizations’ data for decision-making.
To address this, it's essential that enterprises have data that is representative of real-world scenarios, monitor for missing data, eliminate duplicate data, and maintain consistency across data sources. Furthermore, recognizing and addressing biases in training data is critical, as biased data can compromise outcomes and fairness and negatively impact customer experience and credibility.
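As a small sketch of what those hygiene checks can look like in practice, assuming a toy customer table, the snippet below flags missing values, duplicate IDs, out-of-range values, and inconsistent encodings:

```python
import pandas as pd

# Toy dataset with the defects described above baked in.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "age":         [34, None, 29, 29, 212],        # a gap and an implausible value
    "country":     ["US", "US", "us", "DE", "DE"], # inconsistent encoding
})

report = {
    "missing_values":   df.isna().sum().to_dict(),
    "duplicate_ids":    int(df["customer_id"].duplicated().sum()),
    "out_of_range_age": int(((df["age"] < 0) | (df["age"] > 120)).sum()),
}

df["country"] = df["country"].str.upper()   # enforce consistency across sources
print(report)
```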
3. Implement trust and data governance frameworks
The push for responsible AI has placed a spotlight on data governance. With 42% of data and analytics professionals saying their organization is unprepared to handle the governance of legal, privacy, and security policies for AI initiatives, it’s critical that there is a shift from traditional data governance frameworks to more dynamic ones.
In particular, with Agentic AI coming into significant prominence, it’s crucial to understand why agents make specific decisions or take specific actions. Enterprises must have a sharp focus on Explainable AI techniques to build trust, assign accountability, and ensure compliance. Trust in AI outputs begins with trust in the data behind them.
In summary
AI and ML projects will fail without good data because data is the foundation that enables these technologies to learn. Data strategies and AI and ML strategies are intertwined. Enterprises must make an operational shift that puts data at the core of everything they do – from technology infrastructure investment all the way to governance.
Those that take the time to put data first will see projects flourish. Those that don’t will be faced with ongoing struggles and competition biting at their heels.
We list the best data visualization tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Ryne Sandberg, a Hall of Fame second baseman who became one of baseball's best all-around players while starring for the Chicago Cubs, has died. He was 65.
(Image credit: John Swart)
A gunman opened fire Monday outside the largest casino in Reno, Nevada, killing three people and wounding three others before police shot the suspect and arrested him, officials said.
(Image credit: Andy Barron)
Heavy rains and flooding killed 30 people in Beijing, bringing the death toll from the storms in the region to at least 34. More than 80,000 people have been relocated in Beijing.
(Image credit: Mahesh Kumar A.)