Amazfit Bip 6 Hands-On: At $80, This Watch Could Be a Steal

CNET News - Mon, 03/31/2025 - 08:00
For less than $100, the Amazfit Bip 6 checks all the boxes you'd expect out of a smartwatch, plus it works on both iPhone and Android phones.
Categories: Technology

The Oura Ring’s AI-powered wellness advisor just got a major upgrade, and I can’t wait to use it more

TechRadar News - Mon, 03/31/2025 - 08:00
  • Oura Ring is moving 'Oura Advisor' out of beta
  • The AI-powered wellness advisor launched in beta in July of 2024
  • It works alongside the classic Oura tracking experience by letting you ask about data points and get more context

I’ve worn an Oura Ring daily for well over three years at this point, and one of the best parts of the experience that keeps me coming back is excellent, accurate data that’s always contextualized.

Thanks to my sleep score, I know how well I slept the night before and how ready I am for the day ahead, and I can keep tabs on my activity throughout the day. It’s quite handy and helps to make the whole health and activity tracking experience a bit more actionable.

That’s why I was so intrigued when Oura announced a beta AI-powered wellness chatbot, Oura Advisor, in July 2024. I’ve used it plenty since it launched in beta, and evidently, many other Oura users have too. It’s now ready for primetime, as the wellness brand is making it a full-fledged feature in the app for paying members.

I’ve used it quite a bit during its beta testing period – the company has dubbed its beta program Oura Labs – and I like how it can complement the regular Oura experience. Do you have a question about your readiness score, or want to provide more context for why you were a little less active? The AI Advisor can tell you.

You can chat with Oura Advisor by typing things out, and it can even make recommendations for activities to help. Though, of course, as with any software or AI-powered health feature, it’s not a doctor.

It is designed, however, to be a conversation about your health, controlled at your direction – combining the data tracked within Oura and the company's scientific models with generative AI.

(Image credit: Oura)

In the full release, Oura is upping the experience, which makes me more excited to give it even more of a try. For one, rather than just referring to a trend it might have picked up on – being more active on a given day, say – it can also pull up visuals and provide them as answers. Beyond just pulling up a chart, Oura says that Advisor now has ‘Trend Detection,’ allowing it to quickly pull data and access metric baselines or detected trends.

Oura is also expanding Advisor's memory function: beyond remembering something you expressly tell it – say, that you’re recovering from an injury – it will better weave information from previous chats into the current one. Echoing a film or a Disney theme park ride, Oura promises a more coherent storyline or conversation.

When you set up Oura Advisor, you will still have three styles to choose from – supportive, mentoring, or goal-oriented – but it’s now dynamic, allowing the generated responses to be more empathetic, joyous, or even determined on the fly. It will be interesting to see how much improvement this offers to the service I tested in beta.

Just like during the Oura Labs testing period, you can set notifications to remind you to interact with Advisor or pull it up on-demand when in the Oura app for iOS or Android. In its full-launch mode, Advisor is available globally in English.

While the feature was in beta under Oura Labs, the company says that 60% of folks who enrolled in Advisor used it several times a week, and 20% used it daily. I’m in the former group, but these changes could have me calling on the Advisor for a conversation more often.

You might also like
Categories: Technology

YouTube is testing a fix for the most annoying thing about subscriber notifications – and I'm fully on board

TechRadar News - Mon, 03/31/2025 - 08:00
  • YouTube is testing a new feature that stops sending you notifications from channels you no longer watch
  • The aim of the feature is to stop users from turning off push notifications altogether, instead adjusting certain alert settings for them.
  • It could be a blessing for subscribers who are bombarded with unwanted notifications but not so much for YouTube creators

Clearing your YouTube notifications is a chore, especially when you’re subscribed to channels that upload constantly - but YouTube is working on fixing that. In a new test, YouTube is turning off notifications from channels you no longer engage with, which could finally put the days of overwhelming push notifications behind us.

YouTube made the announcement a few days ago, and it's aimed specifically at subscribers who have their notifications set to ‘All’ but don’t open these alerts. Notifications will still appear in your notification box in the YouTube app, but the platform will turn off push alerts so that you’re not bombarded with unwanted updates. YouTube has gone into detail about how this will pan out, stating the following in its announcement:

“Viewers who haven’t recently engaged with a channel despite having been sent recent push notifications will not receive push notifications in the experiment. Notifications will still be available via the notification inbox in the YouTube app. Channels that upload infrequently will not have their notifications affected.” It’s not certain whether users will be told that they’re missing these alerts, nor how long this experiment will run.
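Described in software terms, the rule YouTube outlines boils down to: suppress push alerts for subscribers who haven't engaged with a channel's recent pushed uploads, leave the in-app inbox untouched, and exempt infrequent uploaders. The sketch below is purely illustrative – the thresholds, field names and structure are my own assumptions, not anything YouTube has published:

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    notifications_setting: str   # 'all', 'personalized', or 'none'
    recent_pushes_received: int  # pushes sent for this channel recently
    recent_pushes_opened: int    # how many of those pushes the user opened

def should_send_push(sub: Subscriber, channel_uploads_per_month: int,
                     min_uploads: int = 4, min_pushes: int = 5) -> bool:
    """Illustrative only: mirrors YouTube's described experiment, not its real code."""
    if sub.notifications_setting != 'all':
        return False  # the experiment targets users who opted into 'All' alerts
    if channel_uploads_per_month < min_uploads:
        return True   # infrequent uploaders are unaffected
    if sub.recent_pushes_received >= min_pushes and sub.recent_pushes_opened == 0:
        return False  # disengaged despite recent pushes: push suppressed, inbox still gets it
    return True

# Example: a subscriber who ignored the last 8 pushes from a daily uploader
print(should_send_push(Subscriber('all', 8, 0), channel_uploads_per_month=30))  # False
```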

When creators upload content to YouTube, one of the main ways for them to boost views and subscriber count is to encourage viewers to turn on notifications so that they can be informed when a new video has been uploaded. If you’re a frequent YouTube user and serial video watcher like me, then you’ve probably found yourself turning on notifications for every channel possible, which, in retrospect, results in an overwhelming wave of alerts - but the aim of YouTube’s test is more than simply diluting excessive notifications.

YouTube is going through many changes right now, the biggest being the addition of its new YouTube Premium Lite subscription plan. (Image credit: Future)

Another time-saving perk, but one that could cost creators

There’s no arguing that having a platform take control of your notification settings is unorthodox, and it arguably crosses the line into meddling with personal settings. But this test could result in another time-saving perk from YouTube, following its recent playback queue experiments.

Although it’s easy to amend your notification settings to avoid an avalanche of alerts, it’s common for YouTube subscribers to disable them altogether instead of adjusting the settings per channel - I'd know, I’m guilty of this. With this latest experiment, YouTube aims to dissuade viewers from disabling notifications entirely just because their notification inboxes are stacking up. That's a helpful change, and one I’m certain to be thankful for in the long run, but I can see how it could be damaging to creators.

Push notifications are one of the main things YouTube creators rely on to get their views up and keep their audience engaged as they directly alert their subscribers when videos go live, so YouTube making the executive decision for viewers to no longer receive alerts from channels is a bold move.

The platform is going through a lot of changes right now, having just launched its YouTube Premium Lite subscription tier, but let's hope it pays equal attention to the needs of those who rely on their YouTube channel to make a living.

You might also like
Categories: Technology

Forget Samsung's new modular OLED panels – if this tech works on TVs we could get giant OLED TVs at half the price

TechRadar News - Mon, 03/31/2025 - 07:46
  • Samsung Display unveils modular OLED wall-mounted screen
  • The key tech change is a 40% reduction in bezels on OLED displays
  • It's very similar to Samsung's The Wall micro-LED product, but OLED

Samsung Display has unveiled a new modular OLED screen concept, in which a screen of a particular size or shape can be built using a series of smaller OLED displays tiled together (via Tom's Guide).

If the idea sounds familiar, that's because it's a very similar idea to Samsung's The Wall micro-LED screen – but this is the first time we've seen the idea applied to OLED.

The key tech change that's made this possible is that Samsung says it can reduce the bezel space needed on OLED and QD-OLED panels by 40%, bringing them down to 0.6mm. Each panel has that bezel, though, so in practice there's a 1.2mm border between the screens – enough of a black line to be noticeable, which is probably part of why this remains a concept for now.

But let's assume that Samsung can keep improving the tech and can make the bezel even smaller, in which case I think that makes this tech extremely interesting for creating giant projector-matching OLED TV screens. But I'm not necessarily interested in the square modular concept shown above – the same tech could be applied in other, similar ways.

I immediately started thinking about how the prices of the best OLED TVs rise exponentially as the sets get larger, because of particular quirks of OLED production, and how combining smaller screens could make them much more cost effective.

It's a 97-inch OLED! And it's real expensive. (Image credit: LG)

Four 55-inch TVs in a trenchcoat

To illustrate what I mean, I'm going to use LG's OLED TVs rather than Samsung's, because of the sizes involved. The LG G5 (the company's flagship) range includes a 97-inch model, and it costs $24,999 / £24,999.

That's literally 10 times the price of the 55-inch LG G5 model, which costs $2,499 / £2,399. And the 97-inch model actually has an inferior panel – it's a couple of generations behind, and will be nowhere near as bright as the 55-inch model's Primary RGB Tandem four-stack panel.

The reason for this is that it's incredibly hard to make large-scale OLED TV panels in a cost-effective way. OLED screens are produced on huge sheets called 'mother glass' that are then cut down to smaller sizes; so you can produce nearly four times as many 55-inch screens per mother-glass sheet as you can 97-inch screens.

But also, OLED production still has yield problems, meaning that a lot of screens are produced imperfectly, and this wastage is factored in to the cost of the displays. If you're producing a lot of panels per mother glass (if you're making phone screens for example), then the wastage doesn't matter too much – losing a panel only wastes a tiny amount of your material and time.

But if you're making 97-inch panels and there's a problem, you've lost a huge amount of material and time, and those costs are factored into the pricing of the good panels, effectively.
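A toy model illustrates why defects hit big panels so much harder. All of the numbers below are invented purely to show the mechanism – they are not real production figures:

```python
def cost_per_good_panel(sheet_cost: float, panels_per_sheet: int, panel_yield: float) -> float:
    """Spread the cost of one mother-glass sheet over the panels that come out defect-free."""
    return sheet_cost / (panels_per_sheet * panel_yield)

SHEET_COST = 10_000.0  # hypothetical cost of producing one mother-glass sheet

# Hypothetical: eight 55-inch panels per sheet vs two 97-inch panels per sheet.
# Bigger panels are also more likely to contain a defect, so they get a lower yield here.
print(round(cost_per_good_panel(SHEET_COST, panels_per_sheet=8, panel_yield=0.90)))  # ~1389
print(round(cost_per_good_panel(SHEET_COST, panels_per_sheet=2, panel_yield=0.80)))  # ~6250
```

In this toy example, the good 97-inch panel costs more than four times as much to produce as the good 55-inch one, even though its area is only about three times larger – which is the wastage effect described above.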

So the modular concept is immediately interesting because it solves that issue: combine smaller screens into one larger display and reduce the wastage problem massively.

Could Samsung combine multiple Samsung S95F QD-OLED TVs, like the one here, into one giant screen? (Image credit: Future)

So what I'm thinking is this: if the bezels can be reduced further, could we have a 110-inch OLED TV in the future that's actually four 55-inch TVs combined in one unit? That's even larger than the 97-inch model, and yet could cost a (relatively) mere $10,000, based on the cost of four 55-inch LG G5 TVs.

And I've used LG as my example because of the easy size comparison to existing TVs, but this new tech is coming from Samsung to potentially use in its QD-OLED TVs – and that's even better, because these panels don't go any larger than 77 inches currently, so doing this would enable it to offer a giant OLED for the first time.

It's not as simple as all that, of course. Combining four 4K OLED panels means we're talking about an 8K TV, though Samsung has plenty of experience with 8K processing.
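For what it's worth, the arithmetic behind that idea is easy to check – a quick sketch using the prices quoted above, and assuming standard 16:9 panels tiled in a 2x2 grid:

```python
import math

def diagonal_of_grid(panel_diagonal_in: float, cols: int = 2, rows: int = 2,
                     aspect=(16, 9)) -> float:
    """Diagonal of a seamless grid of identical 16:9 panels."""
    ax, ay = aspect
    unit = panel_diagonal_in / math.hypot(ax, ay)   # inches per aspect-ratio unit
    width, height = ax * unit * cols, ay * unit * rows
    return math.hypot(width, height)

print(round(diagonal_of_grid(55)))   # 110 – a 2x2 grid of 55-inch panels
print(4 * 2499)                      # 9996 – four 55-inch LG G5s at $2,499 each
print((3840 * 2, 2160 * 2))          # (7680, 4320) – four 4K panels tile to 8K
```

A 2x2 grid doubles both the width and the height, so the diagonal doubles too – which is why four 55-inch panels land at exactly 110 inches, and four 3840 x 2160 panels tile to 7680 x 4320, i.e. 8K.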

And additionally, a lot of purists – i.e., the people most likely to want this TV – would reject any sign of a seam between the panels, so unless the bezels can be totally removed I'm probably talking about something of a pipe dream.

But it wouldn't be the first time that a secret dual-screen setup has been used to make cutting-edge tech more realistically priced – the first 5K display on an Apple iMac was literally two 2560 x 2880 displays powered by a custom display processor to treat them as one unit, with no seam down the middle.

Obviously, building a TV out of small squares also solves the screen size problem, but there's a simplicity to fixing four 4K TVs together that solves problems such as dealing with a non-standard resolution.

I don't think we're going to see a 110-inch OLED built in this way competing with the best TVs any time soon – but if this bezel-reducing tech keeps improving, it could make possible the home theater screen of your dreams, especially with micro-LED not looking like it'll become affordable in the near future.

You might also like…
Categories: Technology

Data centers are becoming an increasing emissions concern

TechRadar News - Mon, 03/31/2025 - 07:23
  • Report claims data centers and the aviation industry each account for 3% of global carbon emissions
  • By 2026, the world’s data centers will use as much electricity as Japan
  • Germany is mandating highly efficient data centers from 2026

Increased artificial intelligence activity has led to skyrocketing demand for data centers, with new SPhotonix research claiming the facilities now account for the same amount of emissions as the global aviation industry.

With data centers now accounting for 3% of global carbon emissions, the concern is that AI and IoT will continue to drive their environmental impact up – by 2030, they could consume as much as 13% of the world’s electricity.

SPhotonix says 149 zettabytes (ZB) of data was created in 2024 – after just four years, this annual figure could stand at 394ZB, around 2.6x more.

Data centers linked with growing emissions

Quantifying the concerns, SPhotonix revealed data centers currently consume 460TWh per year, but by 2026 this could more than double to 1,000TWh – roughly the same as Japan’s total electricity usage. By the end of the decade, these sites could account for 2.5 billion metric tonnes of CO2.
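For context, the growth multiples behind those figures are straightforward to check (assuming the 394ZB projection refers to 2028, four years on from 2024):

```python
data_2024_zb, data_2028_zb = 149, 394          # SPhotonix figures quoted above
energy_now_twh, energy_2026_twh = 460, 1_000   # current vs projected data center demand

print(round(data_2028_zb / data_2024_zb, 1))        # 2.6 – around 2.6x more data per year
print(round(energy_2026_twh / energy_now_twh, 1))   # 2.2 – more than double today's usage
```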

The research delves into different types of storage, and reveals that HDD storage used for long-term cold data storage and archiving is surprisingly energy-hungry – keeping these HDDs at low temperatures for data preservation and drive health requires large amounts of energy. Cold storage and archiving account for around three-fifths of all data stored today.

HDDs also tend to have a shorter lifespan than SSDs, meaning that they must be copied every seven to 10 years, which comes at the expense of high energy consumption and CO2 emissions.

“In an increasingly digital world, the environmental impact of data storage is quickly becoming a pressing concern with respective Governments and Regulatory bodies stepping in to enforce sustainability standards,” said SPhotonix Chief Science Officer Peter Kazansky.

Kazansky added that Germany will require new data centers to achieve a power usage effectiveness (PUE) of 1.2 or less from next year.

“Reliable data management plays a vital role in addressing energy challenges, enabling efficient resource allocation and long-term planning,” Kazansky concluded.

You might also like
Categories: Technology

Web hosting vs WordPress hosting: What's the difference?

TechRadar News - Mon, 03/31/2025 - 06:42

Strictly speaking, WordPress hosting isn’t something separate from web hosting. The question is a bit like asking ‘transport vs cars: what’s the difference?’ Transport is a system or means that carries goods or people from one point to another, and a car is one type of transport. Likewise, web hosting is a service that stores a website and makes it accessible over the internet, and WordPress hosting is simply hosting optimized for WordPress-based websites.

WordPress hosting is not a type of hosting in its own right. The types of hosting are shared, VPS, dedicated and cloud hosting, and hosting can be optimized for WordPress on any of them (read what the best WordPress hosts have to say about this). Below, I’ll go into the details about hosting in general to help you understand more about WordPress hosting and web hosting.

Advantages of WordPress hosting

As mentioned above, WordPress hosting is not a type of hosting like shared or VPS hosting, but because WordPress powers over 40% of websites, many hosts offer hosting packages optimized for WordPress. On top of WordPress being pre-installed (and possibly WooCommerce too), these optimizations include tailored server environments that boost performance through PHP, caching, and database configurations tuned specifically for WordPress. This results in faster loading times and a better user experience.

You also get WordPress-specific support through dedicated teams that offer expert assistance with platform-related issues, simplifying troubleshooting. Plus, for advanced users, WP-CLI integration gives you the WordPress Command-Line Interface, which lets WordPress sites be managed directly from the command line for efficient bulk actions, updates, and troubleshooting.
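To give a flavour of the bulk actions WP-CLI enables, here's a minimal Python sketch that shells out to the wp binary. It assumes WP-CLI is installed on the server, and the install path below is purely hypothetical:

```python
import subprocess

WP_PATH = "/var/www/example-site"  # hypothetical WordPress install path

def wp(*args: str) -> str:
    """Run a WP-CLI command against the site and return its output."""
    result = subprocess.run(
        ["wp", *args, f"--path={WP_PATH}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(wp("core", "version"))             # report the WordPress core version
    print(wp("plugin", "update", "--all"))   # bulk-update every installed plugin
    print(wp("core", "verify-checksums"))    # confirm core files haven't been tampered with
```

The same commands can of course be run directly in a shell; wrapping them in a script only becomes useful once you're maintaining several sites or scheduling the updates.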

There is often enhanced security too. Pre-installed security features, such as malware scanning and a web application firewall (WAF) tailored to WordPress environments, protect websites against WordPress-specific threats.

If you already have a WordPress site, WordPress automigration tools simplify the migration process, saving you time and reducing the risk of errors.

The main types of hosting

Any type of hosting can be optimized for WordPress, and each has its own pros and cons for a WordPress site.

Shared hosting

On shared hosting, multiple sites share the same server resources, including CPU, memory, storage, and bandwidth. It’s very cost-effective and beginner-friendly, and ideal for small businesses and personal websites with low traffic and basic needs. In most cases, web hosts optimize all shared servers for WordPress because WordPress is so popular. You’ll notice this if you switch between the shared and WordPress plans on a site: often the only difference is that you get directed straight to WordPress-dedicated support if you submit a ticket.

On a shared server, performance may suffer at times due to high traffic from other websites on the same server, or because your own high traffic is being throttled. There is also the theoretical potential for unauthorized access to data, as you share the same server with multiple websites. Successful attacks of this kind are rare in practice, but if you hold sensitive data there are regulations governing whether you can use a shared server, so this potential security risk is taken seriously.

VPS hosting

WordPress hosting on a VPS gives you your own server environment with its own resources. This means greater control and customization, with root access meaning you can upload whatever you like (within reason) to your server. It is suitable for websites that have higher resource and security requirements.

However, managing a VPS requires more technical knowledge than shared hosting. Unless you opt for managed VPS hosting, you'll be responsible for server maintenance, software updates, and security configurations.

Dedicated hosting

On a dedicated server you get the entire physical server with hardware that’s not shared or accessible by anyone else. It’s one of the most expensive types of hosting but offers full control and the highest uptime. It’s best for high-traffic, complex websites, or those with strict security needs.

Dedicated WordPress hosting is the most powerful option – you have exclusive access to all server resources, giving you complete control over the server environment, fastest loading times, and optimal performance. It also allows you to implement advanced security protocols to protect your website from threats.

However, all that comes with a high price tag. It also requires significant technical expertise, and you’ll also be responsible for server maintenance – unless you go with a managed dedicated server.

Cloud hosting

In cloud hosting, resources are drawn from a shared pool of servers rather than a single machine. WordPress hosting on the cloud provides a blend of scalability, flexibility, and reliability. Multiple copies of your website can be stored in different places, so if disaster strikes, another copy of your website is ready to go.

Sometimes hosts simply call their products ‘cloud’ without any real cloud benefits. At other times their infrastructure is based on cloud architecture, but servers are packaged as traditional hosting solutions such as shared and VPS plans. WordPress-optimized hosting on cloud packages works the same way as on shared or VPS hosting, but with the added benefit of that reliability.

WordPress hosting vs web hosting summary

In summary, WordPress hosting is not a type of hosting, but rather hosting optimized for WordPress. Hosting for WordPress can be optimized on all types of web hosting, and each has its own pros and cons. For most people, hosting a WordPress site on a shared server is fine, but for sites that require more resources and reliability, such as online stores, WordPress hosting on a VPS or cloud VPS server might be more suitable.

Categories: Technology

The Pixel Weather app's radar map has mysteriously disappeared, but Google is now rolling out a fix

TechRadar News - Mon, 03/31/2025 - 05:16
  • The radar map has gone missing from Pixel Weather
  • All phone models seem to be affected
  • Google hasn't yet said anything about the change

Update April 4, 2025: Google has been in touch with TechRadar, and though it hasn't said what went wrong, it has said everything is now sorted. "We’re rolling out a fix that restores the weather map in the Pixel Weather App," is the statement.

Our original story follows below...

If you've fired up the Weather app on your Pixel phone in recent days and found the radar map strangely absent, you're not alone: it seems to have disappeared across all Pixel phones for all users, and no one is sure why.

As noted by Android Authority, this seems to be a server-side change, which means Google has apparently tweaked something on its end – there hasn't been any update to the app that's taken the weather map away.

If you're unfamiliar with the Weather app on the Pixel, the radar map shows rainfall across any area for the next few hours – it's a lot like Dark Sky, if you remember the popular iPhone weather app that Apple bought in 2020 and later shut down.

The map view was part of a major revamp for the Weather app on Pixels, a revamp that first appeared on the Google Pixel 9 series before making its way to older devices. There's a lot of other information in the app, but the real-time map is a big part of its appeal.

We've contacted Google to find out if it has an official answer for the disappearance and will update this story if we hear back.

Where's it gone? Google Weather app radar is gone. Anyone else? from r/GooglePixel

You can find numerous complaints about the vanishing of the radar map over on Reddit and the official Pixel Phone Help forums. It looks to have disappeared across every Pixel phone, and it seems many users were big fans of its functionality.

I've loaded up the Weather app on my own Pixel phone, and can confirm the map widget is nowhere to be found. Everything else seems to be working as normal, but there's no map panel on the interface – and no indication that it was ever there.

Google hasn't said anything officially about this, and until it does, we're in the dark about what's gone on here. Is there a bug Google is fixing? Has it decided to remove the radar map for good? Was it pulled by mistake? Right now, we just don't know.

Unless this was a genuine error, it seems like bad form for Google to remove such a popular feature in a core Pixel app without any warning. Let's hope Google gets back to us with some official comment in the near future – and the return of the map.

You might also like
Categories: Technology

Phishing Emails Aren't as Obvious Anymore. Here's How to Spot Them

CNET News - Mon, 03/31/2025 - 05:00
New research shows that instead of attention-grabbing subject lines, scammers are going with more subtle pitches to get you to click.
Categories: Technology

An Apple a day? Your iPhone could soon have an AI Doctor thanks to a new iOS 19 Health app

TechRadar News - Mon, 03/31/2025 - 04:49
  • Apple is reportedly prepping a big Health app revamp for iOS 19
  • It will feature a new health coach feature
  • A new rumor says it will be powered by AI and could replicate a real doctor 'at least to some extent'

Following reports that Apple is planning a major overhaul of its Health app in iOS 19, fresh and more detailed information has revealed that it might feature an AI agent that would act like a virtual doctor.

Earlier this year, Bloomberg's Mark Gurman reported that Apple "is planning a revamped Health app – as well as an AI-based coaching service."

Now, the same source says Apple is working on a health coach that could replicate your doctor.

Writing in his latest Power On Newsletter, Gurman notes that some of the company's grander health plans – notably, blood glucose monitoring – are still a way off. As such, he says Apple has turned to something that could arrive much sooner.

"The initiative is called Project Mulberry, and it involves a completely revamped Health app plus a health coach," he writes. Most notably, Gurman says, "The service would be powered by a new AI agent that would replicate – at least to some extent – a real doctor."

Apple's AI Doctor

Apple Watch is sure to play a key role in the Apple Health revamp (Image credit: Future)

According to Gurman's report, we can expect this major health revamp "as early as iOS 19.4". Sadly, that means it's unlikely to feature on the best iPhones until next year.

Gurman says the Health app will collate data from your iPhone, Apple Watch, earbuds (such as future AirPods with heart rate monitoring), and other third-party products.

Then, the AI coach "will use that information to offer tailor-made recommendations about ways to improve health."

Apple is reportedly training its AI agent with data from physicians who work at Apple, and the company wants to bring in other doctors with expertise in sleep, nutrition, mental health, and more. Gurman says Apple will create videos to serve as explainers about certain conditions, along with pointers to make lifestyle improvements.

The videos are being filmed at a facility in Oakland, California, according to the report, and Apple is "also seeking to find a major doctor personality to serve as a host of sorts for the new service, which some within Apple have tentatively dubbed 'Health+.'"

Another big part of the app will be food tracking, with Apple taking on the best fitness apps like MyFitnessPal. With Apple Watch integration a certainty, it's likely that some of these Health upgrades will find their way into watchOS 12, too.

Gurman says the app is the top priority of Apple's health team, and it may even lean on data from cameras on devices in the future.

While a revamp of the Apple Health app is an exciting prospect, it's one that probably won't arrive in time for the iPhone 17 in September, although it's possible that some features could debut earlier, with the AI-powered agent following later down the line.

With Apple delaying other Apple Intelligence features like its big Siri upgrade, the company needs a big AI win and fast. Is an AI-powered doctor the answer?

You may also like
Categories: Technology

Data needed for GenAI is putting businesses at risk

TechRadar News - Mon, 03/31/2025 - 04:45
  • Report claims enterprises share 7.7GB/month with GenAI apps, up from 250MB 12 months ago
  • Netskope finds three-quarters of employees use personal accounts for AI tools
  • A stark rise in shadow AI has been observed

Enterprises have seen a staggering 30x increase in the amount of data they share with generative AI apps in the past year alone, highlighting the huge potential for vulnerabilities without the right amount of protection, new research has declared.

Findings from Netskope claim the average organization now shares 7.7GB per month with such apps, up from 250MB just 12 months ago.
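As a quick sanity check on that multiple (treating 1GB as 1,024MB):

```python
before_mb = 250
after_mb = 7.7 * 1024                  # 7.7GB expressed in MB
print(round(after_mb / before_mb, 1))  # 31.5 – in line with the reported ~30x jump
```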

Among the data shared is sensitive information, like source code, IP, regulated data, passwords and keys - with Netskope now urging businesses to consider how they share their data with third parties.

Sharing sensitive data with GenAI apps

Although many instances relate to the proper use of AI tools, Netskope’s research highlights the alarming rise of shadow AI, defined as artificial intelligence tools that employees use without authorization or approval from their companies. Nearly three in four (72%) enterprise users use GenAI apps with personal accounts for work.

On the whole, enterprises are struggling to keep up with the pace of AI tool adoption, and a great example of this is DeepSeek AI. Nine in 10 (91%) enterprise users had used DeepSeek within weeks of launch, but most companies lacked robust security policies.

Netskope also criticized enterprises for adopting a block-by-default approach – by restricting user access to AI tools, workers are more likely to engage with shadow AI, putting businesses at an even higher risk. Instead, companies should consider safely enabling them.

Interestingly, the report reveals a notable trend toward local GenAI infrastructure, with 54% of organizations running local versions compared with fewer than 1% a year ago.

“[GenAI is] becoming increasingly integrated into everything from dedicated apps to backend integrations,” said Ray Canzanese, Director of Netskope Threat Labs.

“This ubiquity presents a growing cybersecurity challenge, demanding organizations adopt a comprehensive approach to risk management or risk having their sensitive data exposed to third parties who may use it to train new AI models, creating opportunities for even more widespread data exposures,” Canzanese added.

Looking ahead, Netskope is urging enterprises to assess their tools, users and use cases to provide safer and more personalized solutions.

You might also like
Categories: Technology

New Superman cast rumor links Guardians of the Galaxy star with big cameo role in James Gunn's DCU film

TechRadar News - Mon, 03/31/2025 - 04:43
  • Superman will reportedly feature a cameo from a Guardians of the Galaxy star
  • The actor is supposed to be playing Jor-El, aka Superman's Kryptonian father
  • Director James Gunn hasn't debunked the rumor yet

A Guardians of the Galaxy (GotG) alumnus is set to have a small but important role in this year's Superman movie, according to a new rumor.

Taking to Reddit last Friday (March 28), industry insider ViewerAnon suggested that one Marvel Cinematic Universe star is crossing over into the DC Universe (DCU). The actor in question is reportedly going to play Jor-El, aka the Kryptonian father of Kal-El/Clark Kent, in a cameo capacity.

Potentially big spoilers immediately follow for James Gunn's Superman movie. Do not proceed if you don't want to know who's supposedly playing Jor-El!

Has Bradley Cooper joined the cast of James Gunn's Superman film? Comment from r/DCULeaks

As you'll have seen in the above Reddit post, ViewerAnon claims Bradley Cooper will play one of Superman's biological parents in the DCU Chapter One movie. The leaker revealed the cameo appearance after getting into an online spat with a fellow Reddit user.

It's unclear if Cooper, who voiced Rocket Raccoon in Gunn's GotG movie trilogy, is actually part of Superman's extensive ensemble. Other notable insiders haven't backed up or debunked ViewerAnon's leak, so it's difficult to discern if there's any truth to this particular piece of gossip.

There are, though, some signs that there may be more than a shred of credibility to this. For one, Gunn hasn't quashed the rumor on social media. The DC Studios co-chief is usually quick to disprove inaccurate information about DCU projects that gets leaked online. As of the time of publication, though, Gunn hasn't commented on Cooper's apparent involvement.

Superman had a test screening and from what I hear people loved it. March 27, 2025

Then, there's the fact that Superman reportedly had a test screening late last week. According to another leaker, Daniel RPK, one of the DCU Chapter One projects we're most excited for was shown to some lucky people on March 28. If that's true, it's no coincidence – to me, anyway – that the Cooper casting rumor appeared online mere hours after a secret screening for one of 2025's most anticipated new movies was held.

With CinemaCon 2025 set to take place this week (March 31 to April 3), there are bound to be some new details – maybe we'll be treated to a new trailer or sizzle reel? – about Superman in the days ahead. Indeed, the DCU film will have a sizeable presence at CinemaCon and should feature in some capacity during Warner Bros' 'Big Picture' presentation tomorrow (April 1). Just don't expect to hear anything about Cooper's apparent Superman involvement during said panel.

You might also like
Categories: Technology

CDs Offer Guaranteed Earnings, Even When the Economy Is Uncertain. Today's CD Rates, March 31, 2025

CNET News - Mon, 03/31/2025 - 04:30
Anxious about the economic headlines? A CD can provide much-needed peace of mind.
Categories: Technology

The Samsung Galaxy S26 could go back to Exynos in some regions – but is that such a bad thing?

TechRadar News - Mon, 03/31/2025 - 04:18
  • Some models of the Samsung Galaxy S26 will reportedly use an Exynos 2600 chipset
  • However, it sounds like others will still have a Snapdragon chipset
  • Snapdragon chipsets are generally more powerful, but the gap might be closing

The Samsung Galaxy S25 series has been a bit of an anomaly, as while Samsung usually uses two different chipsets for its Galaxy S phones (depending on the model and region), this year it used the Snapdragon 8 Elite across the board. Next year, though, we might see a chipset split again.

This is according to reputable leaker @Jukanlosreve, who claimed on X that “the Exynos 2600 is definitely back and it will be used in the S26.” However, it doesn’t sound like this will be a complete switch from Snapdragon, as they added that chip volume is apparently limited.

So, in other words, Samsung might not be able to produce enough Exynos 2600 chipsets to equip every Samsung Galaxy S26 model with one, meaning that some models and/or some regions will probably get the Snapdragon 8 Elite Gen 2, or whatever Qualcomm ends up calling its next-generation chipset.

The Exynos 2600 is definitely back and it will be used in the S26. But the chip volume is so limited that it’ll likely be similar to the Exynos 990 situation. I’m not sure if SF2 is actually any good. March 30, 2025

Typically, when Samsung uses two different chipsets, it equips US versions with a Snapdragon one, while elsewhere the base and Plus models get Exynos, and the Ultra still gets Snapdragon. So, based on past form, the Samsung Galaxy S26 and Samsung Galaxy S26 Plus might use the Snapdragon 8 Elite Gen 2 in the US and the Exynos 2600 elsewhere, while the Samsung Galaxy S26 Ultra might use the Snapdragon 8 Elite Gen 2 globally.

Differences in power and efficiency

What does that mean for you? Well, Qualcomm’s Snapdragon chipsets tend to outperform Samsung’s Exynos ones, so we were quite happy to see the Snapdragon 8 Elite used globally with the Galaxy S25 series. But in fact, the Exynos 2400 used by the Samsung Galaxy S24 series in some regions wasn’t drastically far behind its Snapdragon rival.

In our own tests, comparing an Exynos-powered Samsung Galaxy S24 with a Snapdragon-powered Samsung Galaxy S24 Ultra, we found that “you won’t notice this performance difference in general use, as both of these phones are way faster than they need to be.”

And there are some areas in which the Exynos 2400 actually appears to outperform that generation’s Snapdragon chipset. For example, Android Authority found that battery life was better with the Exynos.

So, if you live in a region where the Samsung Galaxy S26 gets an Exynos 2600 chipset, that might not actually be such a bad thing.

In any case, it’s very early days for Samsung Galaxy S26 leaks, as these phones probably won’t launch until 2026. So, although this isn’t the first time we’ve heard that the Exynos 2600 might be used in next year's lineup, we can’t be at all confident of which chipset or chipsets will be used just yet anyway. For now, then, we wouldn’t worry too much about this possibility.

You might also like
Categories: Technology

Over 2500 TechRadar readers took our survey - and there was one clear favorite online backup service

TechRadar News - Mon, 03/31/2025 - 04:02
  • We used our WhatsApp channel to ask which online backup service our readers use
  • 2,664 readers voted, and a staggering 1,300 chose Google Drive
  • Let us know in the comments which service you use

To mark World Backup Day, we asked our TechRadar Pro readers which services they use through our WhatsApp channel (which you can join here!).

Most of us choose to back up our files online. It saves you from buying physical hard drives every time you run out of storage, and your storage is just one click away, making it super accessible. You’ve plenty of options too - with Apple Cloud, Microsoft OneDrive, Google Photos, and IDrive all amongst the most popular choices.

From our readers' responses, one cloud provider is by far the most popular, and that's Google Drive, with over 1,300 respondents choosing the service - roughly half of all votes. We’ve reviewed Google Drive, so this is not too much of a surprise, as the service is quick and simple to use.

Worryingly, the survey revealed 10% of our readers still don’t back up their files, so if you're one of those who needs convincing, we’ve listed 5 reasons why you should back up your data to the cloud - be sure to keep your data safe!

Backup options

Apple Cloud also ranks very well, with 21% of our readers using this service. It's very Apple-centric, with only limited support for Windows and Android users, but it's a great option for anyone invested in the ever more popular Apple ecosystem! Apple offers a tiered subscription service and keeps your files safe with encryption - check out our full review here to see just why this service is so popular.

Another popular choice is OneDrive, with 377 respondents choosing this service (14%). From our testing, OneDrive has improved a great deal recently, and is particularly useful for anyone who already spends a lot of time using Windows. OneDrive has a fantastic mobile app experience and is integrated closely with Windows and Microsoft 365 - so check it out if you use these regularly.

Surprisingly, Dropbox received fewer than 100 votes, despite being one of our best-rated backup services. Although not the cheapest option, Dropbox has plenty of useful file sharing options and an exceptionally smooth user experience.

Similarly, IDrive received only 16 votes from our readers, but it is rated very highly by our experts. IDrive offers end-to-end encryption and a host of backup methods and device options, so take a look at our rating if you're looking for something new.
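Putting the reported counts side by side makes the gap clearer. A quick tally in Python, using the figures quoted above (the Apple Cloud count is back-calculated from its 21% share, and Dropbox is only given as 'fewer than 100', so it's omitted):

```python
total_votes = 2_664

votes = {
    "Google Drive": 1_300,               # "over 1,300" respondents
    "Apple Cloud": round(0.21 * 2_664),  # reported as 21% of votes
    "OneDrive": 377,
    "IDrive": 16,
}

for service, count in votes.items():
    print(f"{service}: {count} votes ({count / total_votes:.0%})")
```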

Whichever service you choose, making sure your data is protected should be a priority for both personal and business users.

"Data is the lifeblood of today's businesses, which means data loss can lead to catastrophic business failure," notes Alexander Huang, Director of Product and Customer Support at Laserfiche. "World Backup Day is a reminder to act now - before it’s too late. Organizations should evaluate their backup strategy today and safeguard your business against data disasters.”

You might also like
Categories: Technology

I tried Samsung's pricey new AI vacuum, and I now know there's no need for an AI-powered vacuum to exist

TechRadar News - Mon, 03/31/2025 - 03:59

Samsung has launched a new vacuum that uses AI to identify the exact kind of floor it's on, and it has convinced me – if there could be any doubt – that not every appliance or gadget needs AI.

The Samsung Bespoke AI Jet Ultra is designed to be able to sense its exact environment, so it can determine whether it's on a hard floor, a regular carpet, a deep pile carpet, or a rug. It can also sense when it's close to the corner of a room, and if it's been lifted up. It uses all this information to adjust its suction and brushroll speed for the most efficient clean.

It's more complicated than simply increasing suction on thicker floor types. The adjustments are designed to deliver the equivalent of medium power mode, while ensuring the vacuum is still easy to push forwards – super suction is one thing, but it's no good if it's so strong that you can't move the floorhead. The idea is that it puts the rest of the best vacuums on the market to shame by delivering the smartest, most battery-efficient, effective clean possible.
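Described in software terms, the behaviour Samsung is claiming amounts to a lookup from detected surface to a power profile, tuned so the floorhead stays easy to push. The sketch below is purely illustrative – the categories, power values and logic are my assumptions, not Samsung's actual firmware:

```python
# Illustrative floor-type -> power profile mapping (all values invented)
POWER_PROFILES = {
    "hard_floor": {"suction": 40, "brushroll_rpm": 2000},
    "carpet":     {"suction": 60, "brushroll_rpm": 2600},
    "deep_pile":  {"suction": 75, "brushroll_rpm": 2200},  # more suction, slower brush to stay pushable
    "rug":        {"suction": 55, "brushroll_rpm": 1800},
}

def select_profile(detected_surface: str, near_corner: bool, lifted: bool) -> dict:
    """Pick a power profile from the detected context (illustrative logic only)."""
    if lifted:
        return {"suction": 10, "brushroll_rpm": 0}  # drop power when the head leaves the floor
    profile = dict(POWER_PROFILES.get(detected_surface, POWER_PROFILES["carpet"]))
    if near_corner:
        profile["suction"] = min(100, profile["suction"] + 15)  # brief boost where dust collects
    return profile

print(select_profile("deep_pile", near_corner=False, lifted=False))
```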

The Samsung Bespoke AI Jet Ultra is designed to be able to tell between many different floor types (Image credit: Future)

In theory, it makes sense. Deep-pile carpet requires greater suction to lift away dirt than hard flooring, where the dust and debris will just be resting on top. However, I'm not convinced that it's much more granular than that, or that a vacuum changing its suction when moving from a carpet to a thicker carpet will lead to a noticeable improvement in cleaning efficiency.

That's assuming it all works as it's meant to – and, based on my tests at least, it doesn't. You can get the full low-down by reading my Samsung Bespoke AI Jet Ultra review, but the short version is that such adjustments were hit-and-miss.

While I could hear the suction changing when moving from my lino-floored kitchen to my carpeted lounge, I couldn't hear a change when moving from my carpeted floor to my bath mat (placed in my lounge for testing purposes). Nor was there a message on the vacuum's screen to reassure me that it was adjusting based on its environment. The vacuum also failed to recognize when it was in a corner.

Easy cleaning

Samsung isn't the only vacuum maker to offer automatic suction adjustment. Shark has gone all-in in this area, with its Shark PowerDetect Cordless and Shark Detect Pro Cordless vacuums both using sensors to determine whether they're on hard floor or carpet, when they're cleaning at the edge of a room (where dust can collect), and if the floor they're on is especially dirty, increasing or decreasing power in response.

Using these vacuums, I could hear power adjusting reliably in all of these situations. You can see the PowerDetect in action in the video clip below.

Dyson uses automatic adjustment, too. Its Gen5detect and V15 Detect vacuums use sensors to measure resistance (as an indicator of floor type) and the size and number of particles being sucked up (as an indicator of how dirty said floor is) and raise or lower suction power in response.

Further to that, they'll report on exactly what's going up the wand in real time, via an ever-changing graph on the vacuum's LCD screen. This isn't strictly necessary, but it does reassure me that the vacuum is doing what Dyson says it's meant to. And it's kind of fun. (If you want to see exactly how the new Samsung compares to Dyson's priciest vacuum, you can find out in my Dyson Gen5detect vs Samsung Bespoke AI Jet Ultra article.)

The Dyson Gen5detect reports back on what it's sucking up (Image credit: Future)

These are some of the best cordless vacuums we've tested. Automatic adjustment isn't just a gimmick – done well, it means less effort is required on your part (no messing around changing settings), the floors are reliably cleaned, and you're not wasting valuable battery life.

But I don't think Samsung has it quite right yet. Focusing on adjustment based on dirt levels, as Shark and Dyson do, makes more sense to me than being so fixated on the specifics of floor type. Not least because it opens up more possibilities for failure… which, unfortunately, appears to be the case with Samsung's latest offering.

A case for AI?

Samsung doesn't even have the excuse of this being brand-new tech – this is the second manual vacuum in its lineup to use what the brand credits as AI, following the Samsung Bespoke Jet AI from 2023. That model was designed to sense carpet, hard floor and mats, and when it had been lifted up. Samsung chose to add long-pile/dense carpet detection and corner detection to its newer release.

Dyson's and Shark's marketing material says their automation features are based on sensors, while Samsung credits AI. In reality, I suspect all three are based on similar combinations of clever sensors and software.

That's why I say "what the brand credits as AI" above: I'm not entirely convinced it's anything more than very clever sensors and software, like those employed by Dyson and Shark. But I guess 'smart' and 'AI' are pretty much interchangeable these days, and few brands will pass up the chance to add ‘AI’ to their product names and marketing blurb. It also ties into Samsung's current 'AI for all' initiative. Even giving Samsung the benefit of the doubt here, the Bespoke AI Jet Ultra has done little to convince me that AI is the future of vacuum cleaners.

You might also like...
Categories: Technology

The unsung heroes of the AI revolution: a quiet force shaping the future

TechRadar News - Mon, 03/31/2025 - 03:54

AI is poised to change our world at a breakneck pace, with enterprises and governments pouring billions into AI tools, assistants, and agents. Whether it’s Trump’s bold $500 billion Stargate plan or the UK’s AI Opportunities Action Plan, AI investment is the new arms race. And if the scale of investment isn’t visible yet, tech giants plan to invest $320 billion in AI in 2025 alone. The technology is dominating headlines, driving economic strategies and topping boardroom agendas.

However, most discussions focus largely on the dazzling potential of AI models. Putting these exciting AI innovations to work requires an army of unsung individual heroes largely toiling in the dark - data engineers, integration specialists, and automation experts.

These professionals are the critical silent enablers of the AI wave we are witnessing. Their expertise is critical to ensure machine learning models have the clean and structured data they require to function effectively and that their outputs can fold into complex enterprise architectures in a seamless and effective manner.

Looking beyond algorithms: behind the scenes of AI success

AI may be the headline act, but its algorithms and models are simply the tip of the iceberg. Beneath the surface, vast work goes into data preparation, developing IT infrastructure, and integration to the wider enterprise landscape. Data engineers have a crucial role in cleaning and structuring data so that AI models get accurate, unbiased, and high-quality inputs. Without high-quality data, even the most advanced AI models cannot deliver meaningful outputs.

Likewise, integration specialists weave AI models into the enterprise systems they interact with, ensuring data flows seamlessly across different environments - whether in the cloud, on-premise, or hybrid. These experts help companies leverage AI’s full potential by connecting disparate data sources, application endpoints and critical user experiences, allowing for real-time analysis and decision-making.

Finally, automation experts design smart workflows that enable AI agents to both be carefully orchestrated, and where possible, operate independently. From data to application integration, their work eliminates bottlenecks, boosts productivity, and allows businesses to deploy AI-driven solutions at scale.

Redefining traditional tech roles: AI changing data professions

AI is not just shifting industries; it’s also reimagining the very jobs that build and support this growth. Traditional tech paths are already evolving into future-oriented AI careers. Data engineers are stepping up as ‘AI trainers’, working with subject matter experts to curate datasets for improved model accuracy. Integration specialists now take on the role of AI infrastructure architects, harmonizing the latest API-centric AI technology with previous generations of application frameworks.

Automation experts are moving toward total AI orchestration, overseeing the large-scale deployment of AI agents across various business operations. What’s more, these roles themselves are converging into new hybrid positions like AI workflow engineers, prompt engineers, and agent developers. This means enterprises are now looking for multi-dimensional professionals capable of managing entire AI ecosystems instead of just performing isolated tasks.

AI has made a pivotal shift in involving tech professionals in critical business decisions, which means future AI roles will require a blend of strategic vision and technical know-how. Professionals who can connect AI capabilities with enterprise needs will be highly sought after.

Humanizing your AI journey: do not leave people behind

AI, for all its promise and potential, is not infallible. Challenges like bias in training data, privacy requirements, and ethical concerns surrounding AI deployment require careful human oversight. Data, integration, and automation professionals are indispensable in mitigating these risks, ensuring that AI applications are transparent, fair, and reliable.

Our own research recently revealed that fewer than 40% of IT leaders in the UK trust AI agents more than a human to do an effective job, which makes it apparent that there's no straightforward human-to-AI switch for the tech sector anytime soon. And despite the vast troves of information embedded in the latest models, enterprise data has not been available for AI model training, meaning AI alone cannot interpret business context – making humans all the more essential in guiding AI’s applications.

The most effective enterprises are finding that automation enhances decision-making, and though there are clear opportunities for AI-centric process automation, AI doesn’t replace human critical thinking altogether. The best AI implementations are those where humans and machines collaborate, harnessing AI’s efficiency whilst maintaining human judgement.

Fate of data professionals: displacement or evolution?

A pressing question lingers: will AI make data-centric roles obsolete, or will it redefine them? The answer is nuanced.

As AI automates repetitive tasks and lower-level data operations, it simultaneously opens new doors for higher-order problem-solving and frees up time to be spent on strategic innovation and value-driven work.

For example, with old data cleaning tasks increasingly becoming automated, data engineers are free to focus on optimizing data architectures and ensuring AI models work with precision across heterogeneous system landscapes. Automation experts will go from basic workflow automation to developing self-learning AI systems that adapt to changing business needs in real-time.

AI’s true power isn’t in the technology alone. It’s in the people who are making it happen and laying the building blocks. It is not a threat to these roles, but a catalyst for their evolution. By acknowledging these contributions, investing in upskilling, and building a culture of AI literacy, we can ensure that AI serves as a force for progress, innovation, and human-AI collaboration.

We list the best IT management tools.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

How UK businesses can prepare for tariff whiplash

TechRadar News - Mon, 03/31/2025 - 02:29

It’s hard to believe Donald Trump has only been back in office since midway through January. Whether it’s additional tariffs on the import of materials like steel and aluminium or plans to target countries with ‘reciprocal tariffs’, the rate of change and potential impact on supply chains is almost too fast and far-reaching for any one person to keep pace with.

With fresh tariffs emerging from the US on an almost daily basis, it’s not possible for any business – especially those reliant on global trade – to predict every new announcement, or have a bespoke response plan ready for each possible scenario.

Instead, businesses with the potential to be hit by tariff whiplash must be able to respond swiftly to developments as they happen, or better still rehearse for potential changes ahead of time, many of which may be entirely unprecedented.

To do this, they need technological tools that enable them to see beyond the horizon – identifying plausible scenarios and their potential impacts at every level of the supply chain, from the network down to the individual warehouse shop floor – before their rivals do, thus turning seismic change into competitive advantage.

Ultimately, when a storm of unexpected tariffs or unprecedented disruption strikes, the businesses that can adapt flexibly and with speed will be best placed to ride it out.

Which industries are particularly vulnerable?

In the UK, many businesses are already feeling whiplash from a month of tariff changes under the new US administration. The US is Britain’s largest single export market, with more than £60bn worth of goods exported there in 2023 – 15.3% of the UK’s global total.

Industries heavily reliant on exports – most notably, machinery and transport – are those facing the most risk.

Take the machinery and transport sector, which is worth more than £200bn across the UK and the EU. Car manufacturers – particularly in Germany, Europe’s dominant manufacturing force and leading exporter to the US and Mexico – are already facing a substantial hit.

And if the ‘reciprocal tariffs’ mooted last week by the US President take effect – imposing across-the-board minimum tariffs on top of each nation’s VAT rate – the UK would be the fourth most impacted country.

Simply put, many businesses in the UK remain underprepared to deal with the impact of such a scenario. Too many still rely on outdated impact-assessment methods that limit agility, in a trading environment that only seems to be growing more volatile.

Barriers to supply chain agility

Many supply chains remain vulnerable to sudden tariff changes or trade policies because their approach to operations tends to be reactive rather than proactive.

In recent times, Artificial Intelligence (AI) has significantly improved forecasting capabilities. But when used alone, it is not enough; AI fundamentally learns from past events, meaning it can overlook entirely plausible but unprecedented scenarios.

Indeed, one of the key barriers to true agility is an over-reliance on historical data to drive decision-making. For example, it is certainly useful context that in trade battles in Trump’s last term in office, the US targeted famous consumer goods including French wines and cheeses, Italian luxury goods and Scottish and Irish whiskies.

But reliance on this historical context alone fails to account for the new, more aggressive trade policy of a second Trump presidency, and the possibility of an economic policy with tariffs as its cornerstone targeting new industries. To achieve truly agile, flexible response capabilities, businesses must have access to insights which go beyond simple derivations of past events.

Another major challenge is speed of response, especially with so much uncertainty about when tariff changes will hit or who will be impacted. On average, it takes two weeks for a business to react to supply chain disruption – delays that, over a decade, can erode nearly six months’ worth of profits. Without scenario modelling and strategic foresight, companies will remain on the back foot, forced into crisis-mode decision-making rather than pre-emptive adaptation.

A tariff-proof supply chain combines AI with simulation technology

To effectively anticipate and plan for tariff change in today’s volatile geopolitical context, businesses need to be capable of using AI tools in combination with simulation technology - intelligent simulation.

Doing so gets the best out of both technologies. With intelligent simulation, AI can model and prioritize countless ‘what if’ scenarios, providing supply chain and logistics teams with actionable insights before disruptions occur.

Whether it's understanding the impact of potential tariff changes, identifying alternative suppliers, or assessing new market opportunities, AI in combination with simulation allows businesses to remain agile and act quickly before the unprecedented strikes.
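To make the idea concrete, here is a minimal, purely illustrative sketch of the ‘what if’ approach: it enumerates hypothetical tariff rates for a handful of invented suppliers and ranks the resulting landed costs under each scenario. All suppliers, costs, and rates are assumptions made up for illustration; a real intelligent-simulation platform would model the full network and feed results back into its AI forecasts.

```python
# A minimal 'what if' tariff simulation; every supplier, cost and rate is hypothetical.
import itertools

suppliers = {                       # unit cost before tariffs, keyed by (supplier, origin)
    ("Supplier A", "US"): 100.0,
    ("Supplier B", "EU"): 95.0,
    ("Supplier C", "UK"): 105.0,
}

tariff_scenarios = {                # possible additional import duty per origin
    "US": [0.00, 0.10, 0.25],
    "EU": [0.00, 0.10],
    "UK": [0.00, 0.20],
}

def landed_cost(base: float, tariff: float, freight: float = 5.0) -> float:
    """Unit cost after applying a tariff rate and a flat freight charge."""
    return base * (1 + tariff) + freight

# Enumerate every combination of tariff rates and find the cheapest supplier in each.
for combo in itertools.product(*tariff_scenarios.values()):
    rates = dict(zip(tariff_scenarios.keys(), combo))
    costs = {name: landed_cost(cost, rates[origin])
             for (name, origin), cost in suppliers.items()}
    best = min(costs, key=costs.get)
    print(rates, "->", best, round(costs[best], 2))
```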

Businesses that integrate AI in combination with simulation technology into their supply chain strategies will not only navigate tariff whiplash with greater ease but will also establish a competitive edge in global trade.

This is because combining simulation with AI allows companies to explore the impact of complex, tailored counterfactuals about the future at both the network level and the level of an individual warehouse or distribution center, and to plan against those futures better than ever.

Ultimately, operations leaders need to understand the impact of a tariff change on a granular level as well as the network level. With this approach, they can overcome the limitations of sparse real-world information and generate new training data for AI technology so that it can deliver comprehensive, reliable forward-looking insights during periods of trade unpredictability.

With this level of insight, businesses can focus on and plan for different tariff scenarios and rehearse their responses accordingly – so that when the time comes in real life, they can respond with flexibility and agility.

The reality is that economic unpredictability is here to stay. The only question that remains is whether businesses will continue with a reactive approach – or choose a prepared, pre-emptive approach instead.


We feature the best Enterprise Resource Planning (ERP) software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

The paradox of AI: problem vs. opportunity in web innovation

TechRadar News - Mon, 03/31/2025 - 01:39

AI has dominated headlines, product strategies, and investment for the past two years, but as businesses reflect heading into 2025, an uncomfortable question lingers: where is AI’s financial impact, really?

Amidst the AI hype climate, businesses have been eager to invest in emerging technologies that promise the world. So eager, in fact, that the market is now saturated with hastily developed products designed more to showcase adoption than to deliver measurable impact.

While AI tools have existed for some time, the rise of generative AI — starting with the release of ChatGPT just over two years ago — has captured broader attention and rekindled a frenzy of innovation, akin to the dot-com boom of the late 1990s. Generative AI's accessibility is lowering barriers to entry, sparking both a rush to investment and concern across industries.

When applied strategically, AI can clearly revolutionize user experiences, particularly on websites, where the potential for customer experience enhancement is unparalleled. But to keep up with lofty AI predictions and heavy investor demands, many businesses today are investing first and looking for ways to measure return on that investment later, an approach that has led to over-promised and under-delivered initiatives, followed by disappointment among customers and teams alike.

AI has the potential to deliver transformative outcomes when businesses align it with strategic goals, such as improving website functionality and user satisfaction. Rather than integrating AI first and looking for a problem to solve later, it’s time to return to the tried and true formula for innovation: find a problem, figure out how to solve it.

The AI Paradox: ROI vs. FOMO

ROI must be the central factor in AI investment decision-making.

While the number of senior business leaders investing $10 million or more in AI is set to double next year, a Gartner report found that at least 30% of AI projects will be abandoned by the end of 2025. These circumstances — high costs and low success rates — make prioritizing business needs and ROI critical.

AI is an expensive, time-consuming endeavor, so for an AI product to be worth it, especially in consumer-facing applications, it must add real value for customers. Rushing to bring high-potential technology to market can often hinder, rather than enhance, the user experience, particularly in the case of websites, where users increasingly demand seamless interaction.

The early surge of businesses racing to adopt AI chatbots is a prime example. In the push to get the latest feature onto their websites, a critical question was often overlooked: will this actually improve our customer service? Despite their high potential, chatbots were introduced widely before the technology had developed to the point of adding real value, often resulting in frustrating user experiences and failing to provide accessible support.

Consider Watsonville Chevy’s viral failure, where a chatbot offered to sell a customer a brand new Chevy vehicle for only $1. Rather than helping customers buy cars, the under-developed technology, despite its high potential, caused an embarrassing headache for the dealership. More than the technology itself, this failure underscores the critical importance of putting appropriate guardrails in place. Effective AI implementation requires not only understanding the potential of off-the-shelf solutions, but also ensuring they are adapted to the specific needs and limitations of the business environment.
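As a sketch of what such guardrails can look like in practice, the snippet below validates a generated offer against simple business rules before anything reaches the customer. The data structures, floor prices, and function names are assumptions made for illustration, not a description of any vendor's actual system.

```python
# Illustrative guardrail: never let a model-generated offer bypass business rules.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    vehicle: str
    price: float

MIN_PRICE = {"Chevy Tahoe": 55_000.0}   # hypothetical floor prices

def validate_offer(offer: Offer) -> Optional[Offer]:
    """Reject any generated offer for an unknown vehicle or below the floor price."""
    floor = MIN_PRICE.get(offer.vehicle)
    if floor is None or offer.price < floor:
        return None                      # escalate to a human instead of replying
    return offer

if __name__ == "__main__":
    generated = Offer(vehicle="Chevy Tahoe", price=1.0)   # what the chatbot proposed
    approved = validate_offer(generated)
    print("send to customer" if approved else "blocked: route to a human agent")
```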

In another recent counterproductive AI use case, Spotify whiffed on its perennially popular Wrapped feature by going all-in on AI. Even as it removed features like top genres, the music streaming giant opted to add experiences like an AI-generated podcast. Listeners, predictably, were critical, highlighting the importance of using AI to enhance the user experience rather than diminishing the features that made a product popular in the first place.

Businesses must ask themselves: is this AI use case truly adding to the customer experience? Investments must prioritize functionality and customer needs over hype. By focusing on thoughtful, ROI-driven AI adoption, businesses can avoid costly mistakes and improve outcomes.

Solving Problems, Driving Results

Of course, not every AI investment is destined to fail or to hinder customer experience. There are many examples of AI delivering clear value when implemented strategically, especially in ecommerce and content.

In a crowded online environment, frustrated users are a sales killer — customers have access to limitless products and content online, so when their search fails to draw results, they leave.

Netflix, Google and Amazon have all dominated their respective verticals for several years in no small part because of their use of natural language processing. In 2017, more than 80% of the TV shows users watched on Netflix were discovered through its recommendation system. In 2012, 35% of purchases on Amazon came from product recommendations. Google has utilized AI since 2015 to process and provide more relevant search results.

All of these use cases have had a tangible impact on customer experiences – an impact that has long differentiated these companies from competitors with fewer resources.

More recently, the rise of generative AI is amplifying this trend — as the current AI frenzy has led to not only the development of cutting-edge technology by major AI players, but also made it more accessible for businesses of all sizes. These advancements enable even mid-market players to leverage tools once exclusive to industry giants, creating new opportunities for differentiation and growth.

In short, these technologies are no longer limited to just big tech like FAANG — AI democratization is unlocking more cost effective tools for SMBs and mid-market companies to optimize their websites. The potential of AI is immense when businesses choose the right product, and legacy examples in e-commerce and user experience, like Netflix’s recommendation engine or Amazon’s personalized shopping, offer a blueprint for websites leveraging AI to create transformative, ROI-driven outcomes. AI-driven smart search and recommendation technologies already exist; businesses just need to evaluate problems and responsibly implement solutions to unlock AI's transformative potential.
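As a rough sketch of the kind of smart search and recommendation technology referenced here, the snippet below uses TF-IDF and cosine similarity from scikit-learn to rank a tiny, made-up product catalogue against a shopper's query. It is a toy example built on assumed data; production systems layer far more signals on top of this basic idea.

```python
# A minimal content-based product search/recommendation sketch (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = [
    "waterproof trail running shoes",
    "lightweight road running shoes",
    "leather office shoes",
    "insulated winter hiking boots",
]

def recommend(query: str, top_k: int = 2) -> list[str]:
    """Return the catalogue items most similar to the query text."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(catalogue + [query])   # last row is the query
    scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [catalogue[i] for i in ranked]

if __name__ == "__main__":
    print(recommend("running shoes for wet trails"))
```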

Starting with the pain point, such as low conversion rates, and answering with AI empowers better business outcomes. Implemented this way, AI can help bridge the gap between mid-market companies and enterprises, drive higher conversion rates, and deliver ROI that justifies its cost.

Thoughtful AI integration has the potential to revolutionize industries, and in some cases, already has. These use cases underscore AI’s potential to add real value, making a tangible impact on conversion rates and customer experience.

We've listed the best sales pipeline software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Today's NYT Mini Crossword Answers for Monday, March 31

CNET News - Sun, 03/30/2025 - 21:34
Here are the answers for The New York Times Mini Crossword for March 31.
Categories: Technology

This is the world's first Thunderbolt 5 LTO tape drive and I can't understand why it exists in the first place

TechRadar News - Sun, 03/30/2025 - 15:26
  • MagStor's Thunderbolt 5 LTO drive builds on its 2020 Thunderbolt 3 model
  • TB5 certainly adds speed but what's the real-world benefit for tape?
  • There's no word on pricing, but it's unlikely to be cheap

MagStor introduced the world’s first Thunderbolt 3 LTO tape drive back in 2020, blending traditional tape-based storage with modern connectivity, and now, the company has announced the world’s first Thunderbolt 5 LTO tape drive.

The company describes its latest product as the next step in offering flexible, high-speed backup and archival solutions for professionals working with large volumes of data.

Tape storage continues to be a standard for long-term archival needs due to its durability and capacity, and the Thunderbolt 5 LTO drive is designed for use in data-heavy environments such as media production and enterprise IT. By integrating Thunderbolt 5, MagStor hopes to offer a faster, more streamlined connection between tape hardware and modern computing systems.

Increased speed

The new drive works with both macOS and Windows, and while Thunderbolt 5 offers higher bandwidth than previous versions, tape speeds remain limited by the format itself.

Although MagStor hasn’t provided many technical specifications, it’s a given the new product will support LTO-9 tapes (18TB native / 45TB compressed capacity), as its predecessor does.

There's no confirmation yet on compatibility with next-generation LTO-10 tapes, expected to arrive in the second quarter of 2025, which offer up to 36TB native and 90TB compressed capacity, but it would be a missed opportunity if that support isn’t included.

Thunderbolt 5 achieves data transfer rates of up to 80Gbps (10GB/s) bi-directionally in standard mode, and up to 120Gbps in one direction when using Bandwidth Boost mode.

LTO-10 is expected to deliver read speeds of around 472MB/s, which is a step up from LTO-8 at 360MB/s and LTO-9 at 400MB/s.
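A quick back-of-the-envelope calculation using the figures above shows why the interface is unlikely to be the bottleneck: even LTO-10's expected throughput would use only a few percent of Thunderbolt 5's standard-mode bandwidth, and a full tape still takes hours to read regardless of the connector.

```python
# Back-of-the-envelope check using the throughput figures quoted above.
TB5_STANDARD_GBPS = 80                              # Gbps, standard bi-directional mode
TB5_BYTES_PER_SEC = TB5_STANDARD_GBPS * 1e9 / 8     # roughly 10 GB/s

drive_speeds_mb_s = {                               # native read speeds in MB/s
    "LTO-8": 360,
    "LTO-9": 400,
    "LTO-10 (expected)": 472,
}

LTO9_NATIVE_TB = 18                                 # native capacity of an LTO-9 tape

for name, mb_s in drive_speeds_mb_s.items():
    share = (mb_s * 1e6) / TB5_BYTES_PER_SEC
    print(f"{name}: {mb_s} MB/s uses {share:.1%} of Thunderbolt 5's standard bandwidth")

hours = (LTO9_NATIVE_TB * 1e12) / (drive_speeds_mb_s["LTO-9"] * 1e6) / 3600
print(f"Reading a full 18TB LTO-9 tape at 400 MB/s takes about {hours:.1f} hours")
```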

MagStor says the Thunderbolt 5 LTO drive will be released by the end of 2025. Pricing has not yet been announced, but it's unlikely to be cheap.

The company’s LTO-9 Thunderbolt 3 drive retails for $6,299 and whether the added speed of Thunderbolt 5 will justify the inevitable price hike remains to be seen.

“At MagStor, we are committed to pushing the boundaries of what’s possible in data storage,” said Tim Gerhard, VP of Product at MagStor. “After revolutionizing the market with the first-ever Thunderbolt 3 LTO drive, we’re excited to raise the bar again with Thunderbolt 5, ensuring our customers have access to the most powerful and flexible storage solutions available.”

Categories: Technology
