
Technology


Intel is giving PC gamers Battlefield 6 for free with some desktop CPUs to try and stoke sales

TechRadar News - Tue, 08/26/2025 - 06:59
  • Intel has a new 'Gamer Days 2025' promotion running until September 7th
  • You get Battlefield 6 for free with certain CPUs or prebuilt PCs
  • There are also some chunky discounts to be had with some CPUs

Intel's latest angle to try and shift more of its desktop processors is a promotion tied in with Battlefield 6.

Wccftech reports that Intel has kicked off its 'Gamer Days 2025' campaign – running until September 7th – which involves some hefty discounts on a number of its CPUs (as well as prebuilt PCs containing those processors). However, the big draw for many will be the free copy of Battlefield 6 that's bundled with this offer.

To pick out some examples from the current generation of Arrow Lake CPUs, the flagship Intel Core Ultra 9 285K has been reduced by 12% on Amazon (so in the US it's $530 instead of $600 now), plus that Battlefield 6 freebie. The more mainstream Core Ultra 7 265K processor is the real attraction here, though, with a discount of 36% at Amazon currently (meaning a reduction from an MSRP of $399 down to $259).

This isn't just about Arrow Lake, though, as some previous-generation chips are also reduced. These include the Intel Core i5-14600K, which is currently out of stock at Amazon in the US (but is down to $150 at Newegg after a discount code is applied – a seriously tempting proposition at that price level).

In total, there are almost 20 processors involved in this promotion on Amazon, and PC builders such as CyberPowerPC and Origin are in the mix when it comes to the prebuilt rigs included in the offer in the US (or there's the likes of Scan and Overclockers in the UK).

Analysis: heavy hitter of a freebie is needed


Battlefield 6 represents $70 of value in the US, so if you were intending to buy the game anyway, grabbing one of these CPUs effectively knocks that outlay off the price – which leaves some of them looking very cheap indeed.

The price cuts in themselves are nice, but it's the game offer that's doing a lot of the heavy lifting here, as we've already seen these kinds of discounts for Intel chips – even the current-gen models.

Or I should say especially the current-gen Arrow Lake CPUs, as these are rather lackluster in terms of their gaming performance, so Intel needs help getting some sales momentum behind them. That goes for past-gen chips, too, which have the shadow of previous stability issues still hanging over them – consumers aren’t going to forget that episode in a hurry.

Categories: Technology

Fitbit's new Dark Mode app makes it feel more like Garmin Connect – here's how to turn it on

TechRadar News - Tue, 08/26/2025 - 06:50
  • Last week, the Fitbit app got a redesign with Dark Mode
  • The much-requested feature has been a long time coming, as many competitor apps have been using this design for years
  • Here's how to switch Dark Mode on and off using Settings

The Fitbit app is undergoing some big changes. To coincide with the launch of the Google Pixel Watch 4 (you can read our early impressions in our Google Pixel Watch 4 hands-on review), it's getting a personal AI health coach in the US and, as far as we're aware, UI changes as well.

However, before those changes come into effect, Google has given the Fitbit app a significant facelift already, with the launch of Dark Mode.

The Fitbit app, since its inception, has always been set against a bright off-white backdrop, regardless of whether the rest of your phone is in Dark Mode or not.

It has resisted change even though competitors for the crown of best fitness app, such as Apple Health and Garmin Connect, have offered dark backgrounds for years to make parsing complex graphs and planning workouts easier on the eyes.


In my opinion, it's crazy that despite the popularity of the best Fitbits, it's taken so long for the companion app to get a Dark Mode. It's a simple inversion that makes the experience of using the app so much better for most people.

However, if I were being completely cynical, I'd say it might have taken so long because Google simply didn't know what to do with Fitbit.

I've written plenty about Google's neglect of the brand while it folds the best hardware features into its Pixel Watch series. But between last year's app redesign, Dark Mode, and this year's heavy investment in the AI health coach, it seems Google is finally seeing a way for Fitbit to exist within its complex ecosystem going forward.

When I opened my Fitbit app this morning, Dark Mode was already enabled. However, in case yours hasn't switched over automatically or you're looking for manual adjustment, here's how to toggle Dark Mode on and off.

  • Ensure your Fitbit app is updated to version 4.50. If not, navigate to Software Updates in your phone's settings
  • In the Fitbit app, tap your profile image and go to Fitbit Settings
  • In Settings, tap the new Theme option
  • You can choose between System Default, Light or Dark options
  • System Default will match Fitbit to your phone's theme, so if you use Dark Mode on your phone's operating system, Fitbit will switch automatically
Categories: Technology

A critical Docker Desktop security flaw puts Windows hosts at risk of attack, so patch now

TechRadar News - Tue, 08/26/2025 - 06:31
  • Researchers find 9.3/10 flaw in Docker Desktop for Windows and macOS
  • The bug allows threat actors to compromise underlying hosts and tamper with data
  • A fix was quickly released, so users should patch now

Docker has patched a critical severity vulnerability in its Desktop app for Windows and macOS which could have allowed threat actors to fully take over vulnerable hosts, exfiltrate sensitive data, and more.

The vulnerability is described as a server-side request forgery (SSRF) and, according to the NVD, it “allows local running Linux containers to access the Docker Engine API via the configured Docker subnet.”

“A malicious container running on Docker Desktop could access the Docker Engine and launch additional containers without requiring the Docker socket to be mounted,” Docker said in a follow-up security advisory. “This could allow unauthorized access to user files on the host system. Enhanced Container Isolation (ECI) does not mitigate this vulnerability.”

Not all systems are affected in the same way

The bug was discovered and reported by security researcher Felix Boulet. It is now tracked as CVE-2025-9074 and was given a severity rating of 9.3/10 (critical).

However, a separate researcher, Philippe Dugre, stressed that the risk is not the same on all platforms, noting it’s actually somewhat greater on Windows, compared to macOS.

This is due to the safeguards baked into the macOS operating system. Dugre managed to create a file in the user’s home directory on Windows, but not on macOS:

"On Windows, since the Docker Engine runs via WSL2, the attacker can mount as an administrator the entire filesystem, read any sensitive file, and ultimately overwrite a system DLL to escalate the attacker to administrator of the host system," Dugre explained.

"On MacOS, however, the Docker Desktop application still has a layer of isolation and trying to mount a user directory prompts the user for permission. By default, the docker application does not have access to the rest of the filesystem and does not run with administrative privileges, so the host is a lot safer than in the Windows case," he added.

Docker fixed it in Desktop version 4.44.3, so users are advised to upgrade as soon as possible.
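To make the class of flaw more concrete, here is a minimal, hypothetical Python sketch of what an exposed, unauthenticated Docker Engine API looks like from inside a container. The internal address and port are assumptions for illustration only (they are not taken from the advisory), and a patched Docker Desktop (4.44.3 or later) should not answer requests like these.

```python
# Minimal sketch (hypothetical) of the class of issue described above: a process
# inside a container probing for an unauthenticated Docker Engine API on the
# Desktop subnet. The address and port below are illustrative assumptions only;
# a patched Docker Desktop (4.44.3 or later) should not respond to this.
import requests

ENGINE = "http://192.168.65.7:2375"  # assumed internal engine address (hypothetical)

def engine_reachable() -> bool:
    """Return True if the Docker Engine API answers without any authentication."""
    try:
        return requests.get(f"{ENGINE}/version", timeout=3).ok
    except requests.RequestException:
        return False

if __name__ == "__main__":
    if engine_reachable():
        # With unauthenticated API access, an attacker could list and create
        # containers - including ones that bind-mount the host filesystem,
        # which is the root of the Windows risk Dugre describes.
        containers = requests.get(f"{ENGINE}/containers/json", timeout=3).json()
        print(f"Engine API exposed; {len(containers)} container(s) visible")
    else:
        print("Engine API not reachable (expected on a patched install)")
```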

Via BleepingComputer

Categories: Technology

A shocking lawsuit against Amazon makes me want to cancel my Prime Video subscription immediately

TechRadar News - Tue, 08/26/2025 - 06:00

In a new proposed class action lawsuit against Amazon [via The Hollywood Reporter], the company has been accused of "bait and switch" tactics (a type of fraudulent activity) on Prime Video. The suit alleges that Amazon misleads consumers into thinking they've purchased content when they're only getting a license to watch it – a license that can be revoked at any time if Amazon loses the rights to a title.

The proposed lawsuit claims "violations of California unfair competition, false advertising and consumer legal remedies laws. It seeks unspecified damages, including disgorgement of profits and punitive damages for allegedly intentionally malicious conduct." Filed in Washington federal court on August 22, 2025, the complaint says that Amazon is "misrepresenting the nature of movie and TV transactions during the purchase process".

In case that's not clear, take as an example the unwatched digital version of Conclave I 'bought' on Prime Video so my parents didn't miss out on this year's Oscar hype. I have a digital copy, but if Prime Video's licensing agreements were to change, so too could my access to it. If Amazon no longer had the rights to the title, my parents would lose the movie.

As the complaint points out, “you receive a license to the video and you agree to our terms," meaning that what you actually get for parting with your money is written in the small print. But should Prime Video be allowed to tell subscribers that they've "bought" a movie, and what does this mean for us users in the long run?

Prime Video’s new class action lawsuit proves we need to invest in more physical media

Prime Video has a huge back catalog, but are we really buying it? (Image credit: Amazon)

Before we go any further, let's not forget that this isn't the first lawsuit of this kind Prime Video has faced. In 2020, a separate lawsuit alleged "unfair competition and false advertising over the practice". While Amazon has not yet publicly commented on the new class action, it argued in 2020 that using the word "buy" isn't deceptive to subscribers because consumers already understand that their purchases are subject to license agreements. Five years later, I'd say that likely isn't the case.

Back in 2023, the problem was brought to the forefront again when gamers found that their access to The Crew would be cut off after Ubisoft shut down the game's servers, inspiring the 'Stop Killing Games' movement that took aim at publishers destroying previously bought titles.

However, it's changes to California legislation this year that work to the new lawsuit's advantage. Essentially, a state law has barred the use of the word 'purchase' in a transaction unless "it offers unrestricted ownership of the product." Obviously, the Prime Video small print doesn't fit into this, and Amazon can hardly afford to lose a market as huge as California (if it were its own country, California would be the fourth-largest economy in the world).

We don't yet know what any of this means for streamers with a Prime Video subscription on a wider level, but to me, it's an incredibly stark reminder that we need to keep investing in physical media as much as possible. Yes, it's more expensive than paying a flat fee every month for all the content you can possibly want. But it's like dating: if you become more intentional in what you invest in, the results are lifelong.

If you have physical copies of movies and TV shows that you love, you can never be parted from them, and it's the only way we can now guarantee the security of what we buy. Maybe it's time for the best streaming services to revert to the good old days of sending us discs in the post to watch and return when we're done with them, just like Netflix did in the late 2000s.

Categories: Technology

Spending too much on SaaS could cost companies more than just money

TechRadar News - Tue, 08/26/2025 - 06:00

Walk into most organizations today and ask what they're spending on SaaS. Odds are, no one can give you a confident answer. Not because they don't want to, but because no one actually knows.

Ask a different question: who owns SaaS spend in your company? You'll likely hear three things: "Finance handles it," "That's IT's job," or "Honestly, it depends.”

And therein lies the real problem. While companies are dropping anywhere from $9,000 to $17,000 per employee annually on software, most organizations have zero clue what they're actually buying.

The explosion of software tools across every function, only exacerbated by AI, has quietly created a gap between what companies think they're managing and what they're actually managing. And that gap is getting more expensive by the month.

SaaS sprawl is worse than you think

Here's how it happens: your marketing team signs up for Canva Pro, your sales team gets Calendly, design jumps on Figma, and engineering grabs another GitHub license. Meanwhile, IT is already paying for Adobe Creative Suite, Microsoft has calendar functionality, you've got design tools in your existing stack, and there's a company-wide GitHub Enterprise account sitting unused.

This isn't just wasteful spending. It's what we call SaaS sprawl, and it's quietly bleeding companies dry. Recent data shows organizations use an average of 112 SaaS applications, with large enterprises using up to 447 different tools – and I think even those figures are an undercount. When every department acts like its own startup, you end up with a technology Frankenstein that nobody can control or understand.

When you factor in that companies waste 30-50% of their SaaS budgets on unused licenses, and missed renewal dates can cost upwards of $200,000 per instance, it’s hard to understand why so many are not addressing this problem head on. When there's no centralized intake or contract visibility, things slip through. You renew tools no one's using. You pay above market rates because you don't benchmark. You get hit with surprise auto-renewals.

The AI acceleration problem

And, just when some companies thought they had SaaS sprawl under control, AI came along and hit the gas pedal. We're seeing the late 2010s SaaS explosion all over again, but this time it's powered by artificial intelligence.

We’re in the middle of a perfect storm. Leadership wants teams to be AI-enabled, to experiment, to learn. They're actively encouraging employees to test new tools and find ways to work more efficiently. Meanwhile, IT teams are desperately trying to control the sprawl that's already spiraling out of control.

Guess who wins? The credit card.

Employees are swiping corporate cards to try the latest AI writing tool, testing out OpenAI subscriptions, or spinning up Zapier automations without any security review or budget coordination. Each purchase seems small and reasonable. A $20 monthly subscription here, a $50 annual plan there. But multiply that across every department, every team, every curious employee, and you've got a massive problem.

The conflicting stories are everywhere. Leaders preach innovation and experimentation while finance teams watch budgets explode. IT departments create approval processes while employees find workarounds. Everyone wants to be AI-first, but nobody wants to be the one who says no to the next breakthrough tool.

Shadow IT: The innovation myth

Here's where things get interesting. Some people claim Shadow IT and now Shadow AI drives innovation. They're wrong. Anyone claiming Shadow IT drives innovation isn't actually fostering an innovative environment.

When 40% of IT spending happens outside formal oversight, that's not innovation. That's broken processes. Your procurement workflows are failing to meet company needs quickly enough, so people are going rogue.

Sure, it looks like innovation on the surface. Employees find new tools, solve problems quickly, and move fast. But here's what's really happening: you're diverting time, money, and focus from actual innovation and R&D investments that could drive the company forward.

Real innovation happens when teams can explore new ideas without bypassing controls. If the only way to get work done is to go around IT or procurement, that's not agility, it's dysfunction. And it's expensive.

The security nightmare we’re all ignoring

It's not just the budget that's the problem: Shadow IT, Shadow AI, and SaaS sprawl are all creating security holes that many are simply not addressing. Every unauthorized app is a potential entry point for bad actors. IBM found that one in three data breaches involved Shadow IT, with the average breach costing around $4.9 million.

When someone in engineering or marketing signs up for a random productivity tool using their work email, they're potentially exposing company data. No security review, no IT approval, no encryption standards. Just click, sign up, and hope for the best.

The compliance risks are equally terrifying. Use a non-GDPR-compliant tool for EU customer data? That's a potential fine. Healthcare company using a random file-sharing app? Hello, HIPAA violations. These types of risks are happening right now at companies that think they have things under control.

Where sprawl lives

Interestingly, SaaS sprawl doesn't always come from obscure tools. It often comes from the biggest names in tech. At Tropic, we’ve found that some of the most common drivers of tool overlap and Shadow IT include:

  • Zoom, Microsoft, Slack, Google – Multiple collaboration tools per organization
  • Figma, Canva, Adobe – Design tool overlap with no license governance
  • Salesforce, Calendly, DocuSign – Sales tools stacked on top of each other
  • GitHub, JetBrains, Atlassian – Dev tools used inconsistently across teams
  • Dropbox, Apple, Amazon, OpenAI – Personal subscriptions tied to work email

No one sets out to buy the same tool twice. But without visibility, it happens all the time. Every new vendor means more contracts to track, more renewals to manage, more security reviews to conduct, and more relationships to maintain. The administrative overhead alone can eat up significant resources.

When spreadsheets become expensive

A lot of finance and IT teams are still trying to manage all this complexity with spreadsheets. That's like trying to navigate a modern city with a paper map from 1995. Even a 1% error rate on $50 million of spend can waste $500,000 annually.

Dig deeper and this isn't just a tooling issue; it's an ownership issue. Procurement or finance thinks IT is managing it. IT assumes finance has the numbers. Finance is tracking spend, but not usage. Legal might only get involved post-signature. So, things fall through the cracks.

Let's talk ROI

Here's something most people don't talk about enough: every dollar saved on procurement and purchasing has an immediate impact on the bottom line. Unlike new sales revenue, a dollar saved can be pure profit.

Reducing SaaS spend by just 6% delivers the same profit lift as a 20% increase in top-line revenue. And that's before you factor in the benefits of reduced risk, stronger compliance, and faster purchasing cycles.
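As a rough back-of-the-envelope illustration of that leverage, here is a short Python sketch comparing the profit impact of a SaaS cut with the new revenue needed to deliver the same profit. Every figure in it (spend, revenue, incremental margin) is an assumption for illustration; the exact equivalence for any given company depends on its own spend-to-revenue ratio and margins.

```python
# Illustrative sketch with assumed figures; substitute your own numbers.
saas_spend = 5_000_000       # assumed annual SaaS spend
revenue = 100_000_000        # assumed annual revenue
incremental_margin = 0.10    # assumed profit margin on incremental revenue

savings = 0.06 * saas_spend                     # cut SaaS spend by 6%
profit_from_savings = savings                   # a saved dollar is (roughly) pure profit
revenue_needed = savings / incremental_margin   # new revenue with the same profit impact

print(f"6% SaaS reduction: ${profit_from_savings:,.0f} straight to the bottom line")
print(f"Equivalent new revenue required: ${revenue_needed:,.0f} "
      f"({revenue_needed / revenue:.0%} of current revenue)")
```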

We've seen companies recover hundreds of thousands—sometimes millions—just by tackling renewals earlier, consolidating tools, and validating usage.

What smart companies are doing instead

The fix isn't shutting down software purchases. Not only is that impossible, but you'd have a disgruntled workforce on your hands. It is, however, about enabling teams with structure. The companies that are winning aren't locking down every software request. They're treating software spend like the strategic lever it is.

Here's what best-in-class companies are doing:

  • Centralizing intake. Giving teams one place to request or renew software.
  • Building a software inventory. Not just contracts, but owners, usage, and cost.
  • Reviewing renewals 90–180 days out. Not two weeks before expiration. Get ahead of things to determine if you need other tools and create savings.
  • Using benchmarking data. So, you don't overpay for tools that should cost less.
  • Measuring utilization. If you bought 500 seats and only used 320, ask why.

None of this slows people down. In fact, it makes it easier for teams to get what they need, faster, because the path is clear, the data is ready, and approvals don't sit in a black hole.
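As a deliberately simplified sketch of the inventory, renewal, and utilization checks described above, something like the following Python script could run over a basic software inventory. The field names, figures, and thresholds are illustrative assumptions, not a reference to any particular tool.

```python
# Simplified sketch: flag upcoming renewals and low seat utilization from a
# basic software inventory. Field names, figures, and thresholds are assumptions.
from datetime import date

inventory = [
    {"tool": "Design suite", "owner": "Marketing", "annual_cost": 48_000,
     "seats_bought": 500, "seats_active": 320, "renewal": date(2025, 12, 1)},
    {"tool": "Dev platform", "owner": "Engineering", "annual_cost": 90_000,
     "seats_bought": 200, "seats_active": 185, "renewal": date(2026, 3, 15)},
]

RENEWAL_WINDOW_DAYS = 180   # start the review 90-180 days out, not two weeks before
MIN_UTILIZATION = 0.80      # flag anything under 80% seat usage

today = date.today()
for item in inventory:
    days_to_renewal = (item["renewal"] - today).days
    utilization = item["seats_active"] / item["seats_bought"]

    if 0 <= days_to_renewal <= RENEWAL_WINDOW_DAYS:
        print(f"[renewal] {item['tool']} ({item['owner']}) renews in "
              f"{days_to_renewal} days - start the review now")

    if utilization < MIN_UTILIZATION:
        unused_cost = item["annual_cost"] * (1 - utilization)
        print(f"[usage] {item['tool']}: {utilization:.0%} of seats active, "
              f"roughly ${unused_cost:,.0f}/year paying for empty seats")
```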

The time to act

Every month you wait is money walking out the door. Those auto-renewals are happening whether you're paying attention or not. The unused licenses are accumulating. The security risks are multiplying.

But don’t fear. You don't need to solve everything at once. Start with visibility. Figure out what you're actually buying. Identify the obvious waste. Cancel the subscriptions nobody is using.

Software isn't slowing down. And with AI in the mix, things are only getting more complex. This is your moment to get control, not by over-regulating, but by creating the visibility and structure your teams need to move fast, spend wisely, and innovate securely.

Your choice is simple: act now, or pay later. The meter is running either way. You don't need 200 tools to move fast. You need the right 20 and a way to manage them well.

We've featured the best business plan software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

How AI and the age of hyper-personalization are reshaping business strategies

TechRadar News - Tue, 08/26/2025 - 05:45

Data is a more valuable business asset than ever, informing decision-making, enhancing operational efficiency, and enabling businesses to gain a competitive edge. Against that backdrop, hyper-personalization is transforming how businesses interact with customers. By deploying AI at scale to identify hidden consumer patterns and preferences, it takes data analysis to a new frontier, and it is fast becoming the new standard for businesses seeking to attract and retain customers.

However, with increased use of data come more challenges and responsibilities. As people become more digitally savvy, they also become more conscious of their data and how it is used. This presents a necessary challenge for businesses: how to offer high-quality yet ethical personalization.

How hyper-personalization is delivering breakthrough value amid rapidly shifting market demands

Tailored offerings boost engagement and foster customer loyalty, and as a result, hyper-personalization strategies are growing rapidly because of their high-value returns.

According to IBM, effective personalization programs can reduce customer acquisition costs by up to 50%, as machine learning and advanced analytics predict customer preferences and automate decision-making, turning raw information into meaningful insights that drive measurable performance.

Advanced AI systems further enhance these capabilities, supporting sophisticated personalization at scale and enabling continuous adaptation as new data becomes available.

These enhancements increase agility and business dynamism – a necessity for businesses operating in today's economic and geopolitical landscape. Hyper-personalization enables companies to make faster, smarter business decisions that align with rapidly changing market conditions.

By leveraging real-time data and advanced analytics, organizations can quickly identify and respond to emerging trends, customer preferences, and competitive threats, allowing them to adapt their strategies and operations in near real-time.

How businesses are approaching hyper-personalization

Now widely adopted, personalization strategies give businesses the ability to reshape everything from customer experience to product development, proving personalization to be one of the most strategic use cases for artificial intelligence.

Across industries, leading organizations are harnessing AI, advanced analytics, and automation to deliver tailored experiences and remain competitive. Examples include product recommendations on online stores and streaming services based on previous search history. AI is also being used to humanize customer interactions in insurance and to tailor treatment plans to individual patient biology in healthcare.

Tech native platforms, which have long leveraged the advantages of personalization, are accustomed to these strategies, but businesses across traditional industries – healthcare, manufacturing, retail, automotive – are increasingly investing in personalization to keep pace.

The legacy systems that defined these industries for decades are being succeeded by AI-powered, data-integrated solutions – a testament to the growing recognition that these technologies provide actionable insights.

How businesses can navigate the complexities and opportunities of AI adoption at scale

Given the increasing circulation and use of data, as well as the commercial imperative to leverage detailed customer preference data, the scale-up of hyper-personalization strategies is a complex process, calling for robust AI regulation and data privacy frameworks.

Strong data governance is inherent to a sustainable hyper-personalization scale-up, especially when attempting to elevate the AI profile within a business. This includes establishing clear policies on data collection, usage, and retention, ensuring compliance with evolving privacy regulations such as GDPR, and implementing robust cybersecurity systems that mitigate data breaches.

Workforce transformation is also a critical consideration. There is a need to upskill and reframe workforce training to foster a culture of innovation that works in tandem with AI additions. Traditional sales and marketing roles are evolving and fundamentally changing at a rapid pace compared to operations that were commonplace only a decade ago. Now, there is greater emphasis placed on data analysis, ethical AI model development, and AI literacy.

Managing risk in AI adoption is also intrinsic to a hyper-personalization scale-up. Adopting AI systems is only a start; the next focus is managing risk to mitigate algorithmic bias and false results. This involves regular auditing of AI models, monitoring for unintended consequences, and embedding ethical considerations into the AI lifecycle.

Businesses are operating on new terrain. Hyper-personalization has made reaching customers easier than ever – the real challenge now is how best to employ these tools to anticipate the needs of customers before they know them themselves. There is a fine balance to strike here: companies need to invest money and manpower in the ethical growth of their AI and data strategies, or they risk eroding consumer trust.

We've featured the best AI chatbot for business.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

I am an AI expert – here's how you can separate real AI innovation from marketing hype

TechRadar News - Tue, 08/26/2025 - 05:38

The rise of AI is reshaping business technology at an unprecedented pace. From IT to HR, finance to customer service, few departments remain untouched by the wave of automation and intelligence sweeping companies and industries today. However, with this surge in interest comes a growing challenge: distinguishing between truly transformative AI tools and those merely dressed up in buzzwords.

For CIOs and business leaders, the mandate has shifted from exploration to execution. Deploying the wrong AI solution doesn’t just stall progress; it burns time, budget, and internal credibility. The challenge now is clear: cut through the noise, ensure enterprise-grade security, and back only the AI that drives measurable impact.

Perception problems around AI

At a surface level, many AI solutions look the same: slick interfaces, automated responses, bold claims. But there is a distinct difference between basic AI bots and true agentic AI. Some AI products automate tasks only within rigid boundaries, while agentic AI is designed to think, act, and adapt with no intervention required.

The confusion often stems from how AI is marketed. Some platforms tout predictive insights but rely on limited or shallow data, resulting in misleading outputs. Others claim "full autonomy," yet still depend heavily on human input. Most are wrappers for outdated automation; only a few are truly built to drive action, solve real problems, and evolve with your environment.

Similarly, many products only scratch the surface by simply passing user prompts to large language models (LLMs) through an API - what you might call a very thin layer of AI. They look impressive at first, but lack any meaningful depth.

This creates a perception problem. AI is either seen as a cure-all or dismissed as hype. In reality, the value lies somewhere in between. Real improvements in productivity and efficiency come from using the right tools, not just any tools.

The shift from automation to autonomy

Although hype still surrounds AI, we’re also seeing real progress as it evolves from basic automation to true autonomy. In IT specifically, autonomous AI is starting to take on entire workflows from start to finish, including resolving low-level support tickets without any intervention from IT personnel, even though end users may still interact with the AI.

The depth of these solutions is critical. When AI systems layer orchestration, coordinate multiple processes, or use specialized agents for different tasks, they become much more than a simple interface to a language model. And when they can take informed action on real business systems, drawing on an organization’s unique data and historical context rather than merely offering recommendations, that’s when you see what can truly be considered a deep AI product.

The effect on an organization is threefold. For end users, it delivers a zero-time SLA experience: instant support, self-service resolution, and frictionless access to help anytime. This shift dramatically improves the digital employee experience (DEX), which is now a key driver of productivity and satisfaction in mature IT environments. For IT teams, it frees up hours each week, reduces backlog, and improves response times. For the organization, it cuts costs without compromising quality and enables scalable IT support without additional hiring.

However, with this power comes responsibility. IT leaders must ensure these systems operate within clear guardrails, especially when interacting with sensitive data, employee devices, or live environments.

A central concept here is closed-loop AI. These systems are designed to ensure that inputs remain within the organization’s control. Unlike open models that may use your data to enhance results elsewhere, closed-loop systems are built with enterprise-grade governance in mind. This approach gives IT leaders greater confidence to adopt AI without compromising security or compliance.

Three warning signs of hype

To effectively evaluate AI tools, it’s important to look past the branding and focus on the core mechanics. Here are three common red flags:

Lack of specificity: If a product claims to “revolutionize business” but cannot point to a specific workflow or use case it improves, that is a concern.

No explainability: If you can’t trace how a decision was made, or what data was used to make it, that’s a sign of a black-box system. Trustworthy AI should be auditable and understandable, especially in high-stakes enterprise settings.

No real learning or depth: If the AI lacks any meaningful learning mechanism or only relies on a small, shallow set of data points, it’s unlikely to improve over time. True AI products get smarter by processing large, relevant datasets, whether through training robust models or continuously absorbing business context. Without this depth, you’re often looking at a thin layer that may impress in a demo but quickly fall short in the real world.

As more tools claim to offer autonomy, it’s more important than ever to understand what to look for in a reliable AI solution and what to avoid.

What to look for instead

Instead of getting distracted by flashy demos or inflated claims, decision-makers should evaluate AI tools based on three key pillars:

Relevance and integration: Is it trained on data that reflects your business context, and can it be customized to fit your company’s workflows, policies, and operational guidelines? Just as important, will it integrate with your existing tech stack or require major reengineering? AI works best when it adapts to how your organization already operates, not the other way around.

Transparency: Can you understand and control how it works?

Impact: Does it save time, reduce errors, or improve outcomes in measurable ways? Does it actually do the work? Are there any stats or data points that can show proven impact?

Ultimately, the strongest AI solutions build layers of capability, from orchestration to specialized agents to learning engines that can take real action, creating something far more valuable than tools that simply pass prompts to a language model. They don’t just mimic intelligence; they deliver tangible value by empowering teams to focus on strategic work, improving efficiency, and generating a clearly demonstrable return on investment.

The future is functional, not flashy

The future of AI in enterprise technology will not be defined by the tools with the boldest announcements or the most dramatic demos. Instead, it will be shaped by smart, adaptable systems that take ownership of tasks from start to finish and operate independently within clearly defined parameters. These tools quietly improve everyday operations and deliver consistent results with minimal oversight.

AI on its own is no longer enough. To truly deliver value, it needs to be connected to real-time systems, historical data, and the operational context where it’s deployed. That’s what unlocks its full potential. When AI is paired with an on-the-ground agent and backed by rich historical insights, it can go beyond recommendations and solve problems autonomously. It’s the combination of real-time visibility, institutional memory, and intelligent execution that makes for a truly transformative solution.

For IT leaders, the goal is not to chase hype, but to make informed decisions by asking tough questions, demanding clarity from vendors, and staying focused on business outcomes.

We've featured the best AI website builder.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology
