
Technology

Apple's VisionOS 26 Hands-On: Virtual Me and 3D Memories Are Stunning

CNET News - Wed, 06/11/2025 - 12:02
Apple didn't add more AI to its $3,500 mixed reality headset yet, but the collaborative and visual upgrades are better than I expected.
Categories: Technology

First look – Apple's visionOS 26 fixes my biggest Persona problem and takes the mixed reality headset to unexpected places

TechRadar News - Wed, 06/11/2025 - 12:00

Apple Vision Pro is unquestionably one of the most powerful pieces of consumer hardware Apple has ever built, but the pricey gadget is still struggling to connect with consumers. And that's a shame because the generational-leaping visionOS 26 adds even more eye-popping features to the $3,500 headset, which I think you'd struggle to find with any other mixed reality gear.

Apple unveiled the latest Vision Pro platform this week as part of its wide-ranging WWDC 2025 keynote, which also introduced a year-based OS naming system. For some platforms like iOS, the leap from, say, 18 to 26 wasn't huge, but for the toddler visionOS 2, it was instantly thrust into adulthood and rechristened visionOS 26.

This is not a reimagining of visionOS, and that's probably because its glassiness has been amply spread across all other Apple platforms in the form of Liquid Glass. It is, though, a deepening of its core attributes, especially around spatial computing and imagery.

I had a chance to get an early hands-on experience with the platform, which is notable because Vision Pro owners will not be seeing a visionOS 26 public beta. That means that while iPhone, iPad, Apple Watch, and Apple TV owners are test-driving OS 26 platform updates on their favorite hardware, Vision Pro owners will have a longer wait, perhaps not seeing these enhancements until the fall. In the interim, developers will, of course, have access for testing.

Since much of the Vision Pro visionOS 26 interface has not changed from the current public OS, I'll focus on the most interesting and impactful updates.

See "me"

(Image credit: Apple)

During the keynote, Apple showed off how visionOS 26 Personas radically moves the state of the art forward by visually comparing a current Persona with a new one. A Vision Pro Persona is a virtual, live, 3D rendering of your head that tracks your movements, facial expressions, and voice. It can be used for communicating with other people wearing the headgear, and it's useful for calls and group activities.

Apple has been gradually improving Personas, but visionOS 26 is a noticeable leap, and in more ways than one.

You still capture your Persona using the front-facing 3D camera system. I removed my eyeglasses and held the headset in front of my face. The system still guides you, but now the process seems more precise. I followed the audio guidance and looked slowly up, down, left, and right. I smiled and raised my eyebrows. I could see a version of my face faintly on the Vision Pro front display. It's still a bit creepy.

(Image credit: Future)

I then put the headset back on and waited less than a minute for it to generate my new Persona. What I saw both distressed me and blew me away.

I was distressed because I hate how I look without my glasses. I was blown away because it looked almost exactly like me, almost entirely removing the disturbing "uncanny valley" look of the previous iterations. If you ever wonder what it would be like to talk to yourself (aside from staring at a mirror and having a twin), this is it.

There was a bit of stiffness and, yes, it fixed my teeth even though part of my setup process included a big smile.

It was easy enough to fix the glasses. The Personas interface lets you choose glasses, and now the selection is far wider and with more shades. I quickly found a pair that looked very much like mine.

With that, I had my digital doppelganger that tracked my expressions and voice. I turned my head from side to side and was impressed to see just how far the illusion went.

Facing the wall

(Image credit: Apple)

One of the most intriguing moments of the WWDC keynote was when Apple demonstrated visionOS 26's new widget capabilities.

Widgets are a familiar feature on iPhones, iPads, and Macs, and, to an extent, they work similarly on Vision Pro, but the spatial environment takes them to, or at least puts them in, new and unexpected places.

In my visionOS 26 demo experience, I turned toward a blank wall and then used the new widget setup to pin a clock widget to the wall. It looked like an actual clock hanging on the wall, and with a flip of one setting, I made it look like it was inset into the wall. It looked real.

On another wall, I found a music widget with Lady Gaga on it. As I stepped closer, a play button appeared in the virtual poster. Naturally, I played a little Abracadabra.

Another wall had multiple widgets, including one that looked like a window to Mount Fuji; it was actually an immersive photo. I instinctively moved forward to "look out" the window. As the vista spread out before me, the Vision Pro warned me I was getting too close to an object (the wall).

I like Widgets, but temper the excitement with the realization that it's unlikely I will be walking from room to room while wearing Vision Pro. On the other hand, it would be nice to virtually redecorate my home office.

An extra dimension

(Image credit: Apple)

The key to Vision Pro's utility is making its spatial capabilities useful across all aspects of information and interaction.

visionOS 26 does that for the Web with spatial browsing, which basically can turn any page into a floating wall of text and spatially-enhanced photos called Spatial Scenes.

(Image credit: Apple)

visionOS 26 handles the last bit on the fly, and it's tied to what the platform can do for any 2D photo. It uses AI to create computational depth out of information it can glean from your flat image. It'll work with virtually any photo from any source, with the only limitation being the source image's original resolution. If the resolution is too low, it won't work.

I marveled at how, when staring at one of these converted photos, you could see detail behind a subject or, say, an outcropping of rock that was not captured in the original image but is inexplicably there.

It's such a cool effect, and I'm sure Vision Pro owners will want to show friends how they can turn almost all their photos into stereoscopic images.

Space time

I love Vision Pro's excellent mixed reality capabilities, but there's nothing quite like the fully immersive experience. One of the best examples of that is the Environments you enable by rotating the Digital Crown until the real world is replaced by a 360-degree vista.

visionOS 26 adds what may be the best environment yet: a view of Jupiter from one of its moons, Amalthea. It's beautiful, but the best part of the new environment is the control that lets you scroll back and forth through time to watch sunrises and sunsets, the planet's rotation, and Jupiter's dramatic storms.

This is a place I'd like to hang out.

Of course, this is still a developer beta and subject to significant change before the final version arrives later this year. It's also another great showcase for a powerful mixed reality headset that many consumers have yet to try. Perhaps visionOS 26 will be the game changer.

You might also like
Categories: Technology

ChatGPT's 10-hour outage has given me a new perspective on AI – it's genuinely helping millions of people get through life

TechRadar News - Wed, 06/11/2025 - 11:35

OpenAI servers experienced mass downtime yesterday, causing chaos among its most loyal users for well over 10 hours.

For six hours straight, I sat at my desk live-blogging the fiasco here on TechRadar, trying to give as many updates as possible to an outage that felt, for many, as if they had lost a piece of themselves.

You see, I write about consumer AI, highlighting all the best ways to use AI tools like ChatGPT, Gemini, and Apple Intelligence, yet outside of work, these incredibly impressive platforms have yet to truly make an impact on my life.

As someone who’s constantly surrounded by AI news, whether that’s the launch of new Large Language Models or the latest all-encompassing artificial intelligence hardware, the last thing I want to do outside of work is use AI. The thing is, the more AI develops at this rapid pace, the more impossible it becomes to turn a blind eye to the abilities that it unlocks.

In the creative world, you’ll stumble across more AI skeptics than people who shout from the rooftops about how great it is. And that’s understandable: there’s a fear of how AI will impact the jobs of journalists like me, and there’s also a disdain for the sanitized world it’s creating via AI slop and robotically written copy.

But the same skepticism often overlooks the positives of this ever-evolving technology that gives humans new ways to work, collect their thoughts, and create.

After six hours of live blogging and thousands of readers reaching out with their worries surrounding the ChatGPT server chaos, as well as discussing what they use the chatbot for, I’ve come away with a completely new perspective on AI.

Yes, there are scary elements (the unknown is always scary), but there are people who are truly benefiting from AI, some in ways that had never even crossed my mind.

More than a chatbot

An hour into live-blogging the ChatGPT outage, I was getting bored of repeating “It’s still down” in multiple different ways. That was when I had an idea: if so many people were reading the article, they must care enough to share their own reasons for doing so.

Within minutes of asking readers for their opinions on the ChatGPT outage, my inbox was inundated with people from around the globe telling me how hard it was to cope without access to their trusty OpenAI-powered chatbot.

From Canada to New Zealand, Malaysia to the Netherlands, ChatGPT users shared their worries and explained why AI means so much to them.

Some relied on ChatGPT to study, finding it almost impossible to get homework done without access to the chatbot. Others used ChatGPT to help them with online dating, discussing conversations from apps like Tinder or Hinge to ensure the perfect match. And a lot of people reached out to say that they spent hours a day speaking with ChatGPT, filling a void, getting help with rationalizing thoughts, and even helping them to sleep at night.

One reader wrote me a long email, which they prefaced by saying, “I haven’t written an email without AI in months, so I’m sorry if what I’m trying to say is a bit all over the place.”

Those of us who don’t interact with AI on a regular basis have a basic understanding of what it can do, often simplifying its ability down to answering questions (often wrongly), searching the web, creating images, or writing like a robot.

But that’s such an unfair assessment of AI and the way that people use it in the real world. From using ChatGPT to help with coding, allowing people who have never been able to build a program an opportunity to do so, to giving those who can’t afford a professional an outlet for their thoughts and a place to speak, ChatGPT is more capable than many want to accept.

(Image credit: Shutterstock/Rokas Tenys)

ChatGPT and other AI tools are giving people all around the world access to something that, when used correctly, can completely change their lives, whether that’s by unlocking their productivity or by bringing them comfort.

There’s a deeply rooted fear of AI in the world, and rightfully so. After all, we hear on a regular basis how artificial intelligence will replace us in our jobs, take away human creativity, and mark the beginning of the robot uprising.

But would we collectively accept it more if those fears were answered? If the billionaires at the top were to focus on highlighting how AI will improve the lives of the billions of people struggling to cope in this hectic world?

AI should be viewed as the key to unlocking human creativity, freeing up our time, and letting us do less of the mundane and more of enjoying our short time on this planet. Instead, the AI renaissance feels like a way to make us work harder, not smarter, and with that comes an intense amount of skepticism.

After seeing just how much ChatGPT has impacted the lives of so many, I can’t help but feel that AI not only deserves less criticism, but also more understanding. It’s not all black and white: AI has its flaws, of course it does, but it’s also providing real, practical help to millions of people like nothing I've seen before.

You might also like
Categories: Technology

Tariff Rates Against China Still Historically High as Trump Touts New Trade Deal

CNET News - Wed, 06/11/2025 - 11:20
A tentative agreement between the White House and Beijing would, among other things, restore US access to valuable materials from China.
Categories: Technology

Meow Wolf's Weird Physical Universe Is Planning to Extend Into Augmented Reality

CNET News - Wed, 06/11/2025 - 11:15
Exclusive: The creators of Pokemon Go and the artists behind a series of hallucinatory installation exhibits are combining to create an augmented-reality portal universe. They talked to me about what's coming.
Categories: Technology

You’ll Earn This Much Interest if You Deposit $5,000 Into a CD Today

CNET News - Wed, 06/11/2025 - 11:00
A CD is a safe, easy way to grow your money at a predictable rate.
Categories: Technology

Windows 10 might be at death’s door, but Microsoft hasn’t finished trying to force Bing and Edge on its users

TechRadar News - Wed, 06/11/2025 - 10:59
  • Windows 10 has a new update that adds a couple of features
  • Unfortunately, one of these is focused on promoting Bing and Edge
  • Microsoft is pushing its search engine and browser via the calendar panel off the taskbar

Windows 10 has a new update and it actually introduces a new feature – although you might wish it didn’t when you discover what this latest addition is.

That said, the freshly-released update for June (which is KB5060533 for Windows 10 22H2) does come with a tweak that could raise a smile, namely that the clock in the taskbar now displays the seconds when you click to view the time in the calendar panel.

Quite why Microsoft ditched that in the first place is beyond me, but anyway, while that might be a pleasing return of a feature for some, there’s a sting in the tail further down in said calendar flyout – namely that Bing has crept into the mix here.

Not overtly, mind, but as Windows Latest explains, there’s been a change to the bottom section of the calendar panel where normally you’ll see your own events or reminders – if you have any, that is. If you don’t, this used to be blank, but as of the June update you’ll see popular public events and their dates.

Of course, pretty much every day is now dedicated to something – for example, today, June 11, is ‘National Corn on the Cob Day’ (apparently) – and reminders for these events will now appear in the calendar panel.

How does Bing figure in this? Well, if you click on said event, you’ll get information on it fired up in… wait for it… yes, the Bing search engine. And what web browser will that appear in? Microsoft Edge, of course. Why promote one service, when you can promote two, after all?

(Image credit: Marjan Apostolovic / Shutterstock)

Analysis: Why risk the besmirchment?

This is a bit sneaky as it’s far from clear that you’re invoking Bing and Edge when you click something on the calendar flyout out of curiosity. Moreover, this happens despite the Windows 10 preferences you’ve chosen for your default search engine or browser, which again is an unwelcome twist.

This is the kind of behavior that impacts negatively on Microsoft’s reputation and it doesn’t help that the tweak isn’t mentioned in the update notes. We’re only told that the June patch provides a “rich calendar experience” (well, it’s making someone rich, or at least a little richer, possibly – but not you).

The kicker here is that Windows 10 is only four months from being declared a dead operating system, with its life support removed (unless you pay for additional security patches for an extra year). So, why even bother making changes like this when Windows 10 is facing its final curtain? Why take any risks at all that could cause reputational damage?

Well, one thought occurs: maybe Microsoft isn’t convinced that floods of people are going to be leaving Windows 10 when the End of Life deadline rolls around in October 2025. After all, an alarmingly hefty number of diehards are still clinging on to the older operating system. In which case, perhaps Microsoft sees value in still bugging Windows 10 users for the foreseeable future, while they stick around either paying for support, or risking their unpatched PC being compromised while refusing (or being unable) to upgrade to Windows 11.

Oh well. At least we’ve got the seconds back on the calendar clock display, hurray.

You might also like...
Categories: Technology

The Alexa+ rollout is finally happening – here's what early testers love and hate about it

TechRadar News - Wed, 06/11/2025 - 10:54
  • Alexa+ is still rolling out to test users, reaching 1 million users after hitting 100,000 back in May
  • Those with access have been taking to Reddit to share their experiences with Amazon's new AI-enhanced voice assistant
  • So far it's garnered mixed reactions, but most users seem to be satisfied

Amazon has been taking its time with the rollout of Alexa+, the company’s new voice assistant with a big AI revamp, but after hitting the 100,000 user milestone in May, Alexa+ has now reached 1 million test users – a huge jump in just a few weeks.

When the company announced Alexa’s first major upgrade in February, it said that Alexa+ would be US-only for now before being rolled out widely, though a wider release date is still unknown.

It’s been difficult to find anyone with early access, but now more users are reporting that Alexa+ has been activated on selected Echo devices. They are also sharing their first impressions of the new AI features – which have garnered mixed reactions.

Alexa+ activated from r/alexa

Alexa's new voice causes a divide

After deep diving through different Reddit threads, the main feature that has divided early access users is the new Alexa+ voice, whose major redesign aims to offer a less robotic inflection (similar to ChatGPT) with improved recognition capabilities.

Some users have been impressed by its ability to carry out a straight conversation without having to pre-prompt it, with one user stating that it provides a natural back-and-forth flow.

However, the user also highlighted its similarities with other voice assistants, adding: "It's early days, but it feels a tiny bit closer to what I have with ChatGPT." By the sounds of it, Alexa+ will have to offer something a little different if it wants users to stick around.

So I am actually liking Alexa+ from r/alexa

Another user went even further, describing the new Alexa+ voice as "obnoxious", but they also highlighted the fact that the new assistant has the ability to change its tone: "I asked if she could soften her voice, and she offered to make it more 'feminine' (her words not mine)."

I’ve spent a few days with Alexa Plus from r/alexa

Testers have been generally pleased with how Alexa+ stands above other LLMs with its detailed explanations and clear understanding of voice prompts. One Reddit user was impressed that "[it] understands everything regardless of how you stumble on words."

Comment from r/alexa

Another user shared a similar positive experience, but went on to explain that Alexa+ would fall into the trap of contradicting itself, admitting to it and apologizing when called out.

Comment from r/alexa

Despite a few hiccups with the new Alexa+, the response from testers has been generally positive, which makes us intrigued to see what it will be like once it’s widely available. So far it’s not enough for users to fully subscribe to, but time will tell.

You might also like
Categories: Technology

Hackers are now pretending to be jobseekers to spread malware

TechRadar News - Wed, 06/11/2025 - 10:27
  • DomainTools spots hackers creating fake job seeker personas
  • They target recruiters and HR managers with the More Eggs backdoor
  • The backdoor can steal credentials and execute commands

Hackers are now pretending to be jobseekers, targeting recruiters and organizations with dangerous backdoor malware, experts have warned.

Cybersecurity researchers DomainTools recently spotted a threat actor known as FIN6 using this method in the wild, noting the hackers would first create fake personas on LinkedIn, and create fake resume websites to go along with them.

The website domains are bought anonymously via GoDaddy, and are hosted on Amazon Web Services (AWS), to avoid being flagged or quickly taken down.

More Eggs

The hackers would then reach out to recruiters, HR managers, and business owners on LinkedIn, building a rapport before moving the conversation to email. Then, they would share the resume website which filters visitors based on their operating system and other parameters. For example, people coming through VPN or cloud connections, as well as those running macOS or Linux, are served benign content.

Those that are deemed a good fit are first served a fake CAPTCHA, after which they are offered a .ZIP archive for download. This archive, which the recruiters believe contains the resume, actually drops a disguised Windows shortcut file (LNK) that runs a script to download the "More Eggs" backdoor.

More Eggs is a modular backdoor that can execute commands, steal login credentials, deliver additional payloads, and execute PowerShell in a simple yet effective attack relying on social engineering and advanced evasion.

AWS has since come forward to thank the security community for the findings, and to stress that campaigns like this one violate its terms of service and are frequently removed from the platform.

“AWS has clear terms that require our customers to use our services in compliance with applicable laws," an AWS spokesperson said.

"When we receive reports of potential violations of our terms, we act quickly to review and take steps to disable prohibited content. We value collaboration with the security research community and encourage researchers to report suspected abuse to AWS Trust & Safety through our dedicated abuse reporting process."

Via BleepingComputer

You might also like
Categories: Technology

Adobe launches new Express tool for small businesses - and I spoke exclusively to its chief to find out the top 5 things you need to know

TechRadar News - Wed, 06/11/2025 - 10:17
  • Adobe Express for Ads is now live - I spoke to Express SVP to find out more
  • Content integration with Google Ads, LinkedIn, TikTok
  • Includes Social Safe Zone to refine ads for each platform

Adobe Express has introduced a new tool designed to help small businesses create and monitor online ads across popular social media channels.

Since Adobe Max London, the design platform has seen a host of new AI updates like Clip Maker and Generate Similar for spinning out new content based on existing images. Now, Express for Ads brings even more options for marketers and small businesses to scale up content production and track performance.

In an exclusive TechRadar Pro interview, I spoke to Adobe Express SVP Govind Balakrishnan to find out what users can expect from the new ad platform - and what else we can look forward to in the coming months.

What’s new in Adobe Express and what is Express for Ads?

This isn’t the first foray into social media content creation for Express, which has long offered the ability to create ad templates, and schedule and publish directly to platforms.

But the platform is giving users a jump-start on scaling up ad creation across core advertising platforms. You can check out the new Express tools by clicking here - but here’s what you can expect.

(Image credit: Adobe)
  • 1. Ad platform support

What this new update adds is the ability to create content workflows specifically for Google, LinkedIn, Meta, TikTok, and later down the line, Amazon, too.

As Balakrishnan told me, “What we have now done though is also bring in the tools and capabilities to make it incredibly easy for you to create content that performs well for the critical or prominent ad platforms like Google Ads, LinkedIn, Meta, and more coming in the not too distant future. We've essentially made it easy for you to start with the template or even generate a template, and create content using the best in class tools that we have available in Express.”

  • 2. Better best practices

Alongside expanded ad platform support, users can now also use what Adobe’s calling a Social Safe Zone.

This is effectively a set of best practices to prevent the dreaded rejection of ads - and it’s currently supported for Facebook Stories, Instagram Reels, and LinkedIn Videos. There are plans to support additional formats soon.

“We've added a capability called Social Safe Zone,” said Balakrishnan. “It’s essentially a set of guidelines or guard rails that are incorporated as you're creating your content to ensure that the key visual elements that you have in your content are not obstructed by the various social media platforms. So, it helps you essentially create content to ensure that the visual elements that you care most about are front and centre, and are optimized to be best-performing for each of the social media platforms that you're targeting.”

  • 3. A one-stop shop for ad creation

In a bid to improve the creative workflow, Adobe is now letting users play in the Express sandbox without having to move out to other apps.

Balakrishnan calls it a one-stop shop, adding: “We have made it incredibly easy to publish straight to the ad platforms, so we have made it. Express can establish a connection with Google Ads, LinkedIn Ads, and Tiktok. You can go from Express directly to each of these ad platforms.”

Of course, Adobe Express has long offered the option to resize templates, but in this latest update, the company has gone further.

“We have now ensured that [Resize] works for these ad platforms,” Balakrishnan told me. “Essentially, you start with the template, you have Safe Zones to ensure that your content looks great for each, and now you have the ability to publish straight into these ad platforms. So, Express becomes this one-stop shop where you start with an intent, you create your content, you publish to various platforms, and you get your insights back right there. You don't have to jump between various tools, various platforms.”

It’s an area that Balakrishnan is most excited for, telling me, “I am most excited about the fact that you can create for a specific ad platform and resize seamlessly for other platforms. As we all know, most marketing marketers are trying to reach multiple platforms and struggle to do that because they have to recreate a lot of their content over and over again for multiple app platforms. The fact that they can fairly quickly and easily create the best possible content for each of those ad platforms, I think, is incredibly exciting.”

  • 4. Improved metrics and tracking

One of the best updates, I think, coming with Express for Ads is the ability to monitor ad performance across supported platforms, delivering much-needed feedback to refine future ideation and creation. With that in mind, Express now includes Metricool and Bitly add-ons.

Expanding on this, Balakrishnan said, “We've added the ability to get metrics and analytics on the content and how the content is performing through integrations with Metricool and Bitly. These are two recent integrations that we have launched where, once you post your content to these platforms, you now have the ability to get feedback on how your content is performing, in addition to obviously seeing how it maps to current trends and current fads that may be in play.”

And it turns out Adobe might’ve underestimated just how many users are welcoming this update.

Balakrishnan said, “I'm finding that a number of our users are excited about the Metricool integration. I don't know if we had fully realised how compelling this could be, but as we have gotten deeper into the integration and as we have engaged with more of our user base, it has become clear that it is an integration that a large number of our users are incredibly excited about because they then get the insights from how their content is performing right there in the tool without having to leave the tool and go somewhere else.”

  • 5. The future of Adobe Express

As Express continues to evolve, I couldn’t resist finding out what users can expect later down the line. Here, Balakrishnan teased a couple of future updates.

“The next stage that we are incredibly excited about, and I know it's not necessarily related to the ads creation scenario today, but it will be relevant in the not too distant future is the ability to completely reimagine creativity or the opportunity to completely reimagine creativity through agentic AI. The idea there would be that you just enter a prompt and you get you start with a blank screen, you enter a prompt, and you interact through a prompt to essentially generate full-fledged designs from scratch. We are now making it even easier for anyone to come in and describe what's in their mind's eye and have that show up on a digital screen in seconds.”

That will come as little surprise for followers of Adobe, where agentic AI is fast becoming de rigueur across the company’s apps. But it’s not the only area where Balakrishnan envisions AI advancements. He confirmed he’d like to see “more advancements in the realm of generative AI” for Express users who don’t want to see a lowering of the barrier to entry via agentic AI.

And, as you’d expect from a platform that integrates across the Creative Cloud suite, the team is looking at further integrations with Adobe Acrobat.

Balakrishnan explained: “We are seeing an increasing trend, so to speak, where creativity and productivity are coming closer together and we see some incredible opportunities to leverage the very broad base of Acrobat users and give them the tools and capabilities to add more richness to PDF and Acrobat documents. And we're doing that by building seamless integrations and workflows from Acrobat into Express, where you're if you're in Acrobat, if you're in [Acrobat] Reader, if you're viewing a regular PDF document, we are now giving you the ability to edit images, generate images to stylize your document all from within Acrobat.”

You can find out more in Adobe's latest blog.

Want to start creating your next ad campaign now? Check out the new Adobe Express for Ads right now. It's free to use with plans for teams and business users, and you'll also find it included as part of an add-on alongside other Adobe apps like Photoshop.

Click here to find out more.

You might also like
Categories: Technology

Pragmata Is One of the Most Exciting Games at Summer Game Fest

CNET News - Wed, 06/11/2025 - 10:00
Capcom had a short but sweet demo of its upcoming sci-fi shooter.
Categories: Technology

Apple’s new Liquid Glass UI design unveiled at WWDC 2025 is nothing new - I can see right through it

TechRadar News - Wed, 06/11/2025 - 10:00

The old saying, “if you wait long enough, everything comes back into style eventually,” is usually attributed to the fashion industry, but it seems to apply to pretty much anything, especially mobile phone interface design.

So, while my younger colleagues are getting all hot and bothered about Apple’s new Liquid Glass design for its operating systems, like iOS, macOS, iPadOS, and tvOS, forgive me if I can’t help but be a little less enthusiastic, because I’ve seen all this before.

The crux of the new Liquid Glass design is that the “material” (an odd choice of words from Apple to describe something that’s purely digital) used for the background to menus, and out of which icons are “crafted”, behaves like glass would in the real world, if it also flowed like a liquid.

That obviously means you can see through it, which is what people are getting very excited about.

Those of us who have been using tech for a while now will realize that we’ve been here before. Back in 2007, Microsoft introduced the Aero design in Windows Vista, which contained menu borders that had a level of transparency to them and icons with rounded edges.

This transparent look and feel persisted into Windows 7, which had a transparent taskbar, but it was eventually dropped in favour of the more 2D and square-looking Windows 8 interface.

Microsoft has recently brought back transparency in Windows 11.

Windows Vista introduced transparency to the borders of its windows.

The fundamentals of design

It all comes back to the fundamentals of design and what companies are trying to achieve with a mobile phone interface.

When iOS first came out in 2007, skeuomorphism was the order of the day, which means the icons and interface elements tried to resemble real-world objects as much as possible.

This had the advantage of making them look accessible, but it also felt unnecessarily fussy, especially since we were dealing with digital images, which didn’t need to conform to the same laws as physical objects.

And so a conflict was born. A kind of design war broke out between those who thought that interface design should reflect the real world as closely as possible and those who preferred to think of design as functional first: interface design should be legible, easily accessible, and practical before all else.

Eventually, the latter group won out, but it took a long time and required the death of one of skeuomorphism's strongest advocates.

The old skeuomorphic design of iOS. (Image credit: OldOS - Zane Kleinberg)

Farewell Steve Jobs

Apple’s then CEO, Steve Jobs, was a huge fan of the skeuomorphic approach to design. That’s why the icon for the Notes app in iOS looked like actual note paper, for instance. It's also why the Calculator looked like a real calculator, and the Calendar app looked like a real calendar.

On the iPhone, there were rounded, glossy edges to all the icons, with shadowing and a slight 3D effect thrown in.

Sadly, Steve Jobs passed away in 2011, and Apple’s other leading design light, Jony Ive, was given free rein to come up with something different for iOS 7 in 2013. What Ive produced was perhaps a little too far in the other direction. It was best described as very, very flat in comparison to what had come before.

In iOS 7, the 3D skeuomorphic elements were banished in favour of, well, not quite 2D, but a very flat-looking design with very bright, colorful icons that stood out a mile from the phone.

Ive, who was responsible for Apple’s increasingly minimalist approach to product design, had a very strong design aesthetic, and it showed.

iOS 7 introduced brave new visual design elements.

All clear on the Apple front

You can view Apple's new Liquid Glass as the final rejection of Ive’s iOS 7 vision for the iPhone.

Being able to see through everything is very futuristic, and I’m sure it works great in sci-fi movies, TV shows, and in AR headsets like Apple Vision Pro, but on a small device in my hand, it doesn’t increase legibility at all. In fact, it makes text harder to read.

As somebody who already has to put on reading glasses to do most things on my iPhone, this isn’t going to help. And what about all the people who have other kinds of visual impairment?

At WWDC 2025, Apple was very keen to show off how the buttons that cover video playback in the new Liquid Glass design are now transparent, so they don’t distract from the video you’re watching. Well, that’s great, but what if you want to actually read the text that’s written on or next to the buttons?

Even worse, the new “all clear” style (shown below), which drains all color from your icons so they all look like they’re made from glass, is very stylish, but is it functional?

Will it make it easier to find the app you’re looking for or just harder? I’ll have to reserve my final judgement until I’ve tried the finished version of iOS 26, but I think I already know my answer - no, it won’t.

Apple's new "all clear" style in Liquid Glass drains the color from all your icons. (Image credit: Apple)Give it another 15 years

Jony Ive, the designer’s designer, knew what he was doing with iOS 7 when he introduced such a bold, confident new look. Perhaps it was a bit too much of a shock to the system for some people, but the fact that Google instantly copied it in Android is a tribute to how it changed mobile phone interface design for the better.

Since then, Apple has been picking away at Ive’s original vision, which has been easier to do since he left the company, and watering it down with each new iOS release, but now it has really thrown it in the trash with the new Liquid Glass.

So, in 2026, we’re back to transparency, darker tints, rounded corners, and 3D effects.

Remember, these things run in cycles. Give it another 15 years and I think we’ll be back to bold, bright colors and flatter icons. Mark my words.

You may also like
Categories: Technology

I Played Resident Evil 9 Requiem at Summer Game Fest, and It's Extremely Messed Up

CNET News - Wed, 06/11/2025 - 10:00
The upcoming horror game was the most disgusting thing I played at Summer Game Fest.
Categories: Technology

Resident Evil Requiem will feature multiple viewpoints, letting you switch between first and third-person frights

TechRadar News - Wed, 06/11/2025 - 10:00
  • Resident Evil Requiem will feature both a first-person and third-person camera
  • Players will likely be able to switch between the two
  • This was confirmed by a behind-closed-doors demo

TechRadar Gaming can confirm that the upcoming horror game Resident Evil Requiem will feature both first-person and third-person viewpoints.

This information comes from a behind-closed-doors Resident Evil Requiem demo shown to the press at Summer Game Fest 2025, which ended with footage of the player entering the menus and showing a toggle button that changes the perspective between first and third-person gameplay, followed by a glimpse of the latter. As a result, it seems as though players will be able to readily choose which to use via an option in the settings menu.

The main entries in the Resident Evil series have traditionally been played from a third-person perspective, though the recent Resident Evil 7: Biohazard and Resident Evil Village switched up the formula by introducing a more intimate first-person camera.

A third-person option was eventually added to Resident Evil Village as part of the post-launch Winters' Expansion, and it can be toggled via a new 'View Mode' option in the camera settings menu. It looks like it will be similarly implemented in Resident Evil Requiem, though I expect that more information on how it works will be revealed in the build-up to launch.

Resident Evil Requiem was first revealed as part of the Summer Game Fest 2025 main show, with a gripping trailer that introduced us to protagonist Grace Ashcroft. An FBI agent, Ashcroft will investigate a series of strange killings connected to a sinister hotel where her mother was murdered eight years ago.

The game is set to release on February 27, 2026, for Xbox Series X and Series S, PlayStation 5, and PC.

You might also like...
Categories: Technology

Why We Dream and What Our Dreams Mean, According to Sleep Experts

CNET News - Wed, 06/11/2025 - 09:48
You dream every night, whether you remember it or not. Wondering what your dreams mean? We found the common dream interpretations and what they might mean for you.
Categories: Technology

Home theater fans rejoice! The Apple TV 4K's next free update will give you a much-wanted option for elite audio systems

TechRadar News - Wed, 06/11/2025 - 09:29
  • Passthrough audio option is coming to tvOS 26
  • It's in the audio framework, but developers need to use it
  • tvOS 26 is expected later this year

Apple's tvOS doesn't get quite as much attention during WWDC as the bigger-selling products such as the iPhone, and that means it usually takes a few days for some of the most interesting Apple TV news to emerge, even beyond the top five tvOS 26 features you need to know about. And that's the case this year, because an important audio change is coming in tvOS that wasn't mentioned during the event.

AppleInsider reports that there's a new reference in the Apple Developer documentation for AVFAudio to a "passthrough" setting. AVFAudio is Apple's framework for playing, recording and processing audio on tvOS as well as iOS, macOS and watchOS.

AppleInsider has apparently checked in with Apple, and Apple has confirmed that yes, there will be passthrough in the tvOS upgrade. And that's a big change for serious audio equipment.

It doesn't matter what's at the other end of your HDMI: Apple TV processes the audio from apps. (Image credit: Apple)

Why passthrough was wanted by hardcore home theater fans

When you use a streaming app on Apple TV, whether it's Apple's own Apple TV+, Netflix, Prime Video or any one of the best streaming services, the audio decoding and initial processing is handled by your Apple TV before being output to your system – so even if you have really high-end home theater hardware at the end of your HDMI, you're stuck with Apple's processed audio as your input.

Apple TV's audio processing is really good: I often burst into a grin when there's a particularly impressive bit of Atmos action. But higher-spec hardware is likely to be even better, so it's good to have the option to let that hardware handle 100% of the sound processing rather than leave it to Apple.

This isn't limited to Apple TV; the same framework handles the audio on iPhones, Macs and even Apple Watches. But of course it's on the Apple TV where it's likely to matter most, so you can pass the raw audio data to a beefy AV receiver.

For now, though, passthrough is only a possibility: it may be in the framework, but it's up to developers to implement it in their apps, or for Apple to make it selectable in the Apple TV settings; so far at least, the latter option isn't in the developer beta of tvOS 26, but would maybe be the ideal end result.

You might also like
Categories: Technology

How to tackle shadow IT when AI is everywhere

TechRadar News - Wed, 06/11/2025 - 09:20

You’re no doubt familiar with shadow IT — the practice of employees using software, applications and other tech tools that aren’t sanctioned by IT. And if IT doesn’t know about something, they can’t regulate it or defend against it. Clearly, this creates a massive security risk and headaches for both IT and security.

Now, with generative AI tools flooding the workplace, that headache is turning into a full-blown migraine.

The rush to adopt AI productivity tools has opened a Pandora's box of security vulnerabilities that most organizations are completely unprepared for. These tools are expanding existing visibility gaps while simultaneously creating a constant stream of new ones.

Invisibility — in plain sight

“Patchy” (pun very much intended) visibility is only part of the problem. There’s a widespread awareness of the threat posed by unregulated tools, but there’s a startling gap in translating that awareness into concrete readiness. According to Ivanti’s 2025 State of Cybersecurity Report, 52% of IT and security professionals view API-related and software vulnerabilities as high- or critical-level threats. So, why do only 31% and 33%, respectively, consider themselves very prepared to address these risks? It’s the difference between theory and practice.

Making that shift to readiness is easier said than done, given the widespread and elusive nature of shadow IT practices. Software that employees use, including shadow IT, ranks as the number one area where IT and security leaders report insufficient data to make informed security decisions — a problem affecting 45% of organizations.

Let that sink in: nearly half of security teams are operating without visibility into the applications running within their own networks. Not good. At all.

The Gen AI multiplier effect

Generative AI has created a perfect storm for the proliferation of shadow IT. Employees eager to boost productivity are installing AI tools with little thought to security implications, while security teams struggle to keep pace.

The ubiquity and ease of access to these tools mean they can appear in your environment faster than traditional software ever could. A text summarization tool here, a code generation platform there — each creating new pathways for data leakage and potential breaches.

What makes this particularly dangerous is how these tools operate. Unlike traditional shadow IT applications, which often store data locally, generative AI solutions typically send corporate data to external cloud environments for processing. Once your sensitive information leaves your controlled environment, all bets are off.

The root of the shadow IT problem…

This isn’t a manifesto on regulating employee behavior. It’s genuinely understandable, at least to me, why employees would seek out tools that help them do their work more efficiently. That is to say, shadow IT isn’t always done out of malice, but rather because something is lacking in the organizational structure.

Specifically, data silos between security and IT teams create perfect conditions for shadow IT to flourish.

These divisions manifest in a few different ways, for example:

  • Security data and IT data are walled off from each other in 55% of organizations
  • 62% report that siloed data slows security response times
  • 53% claim these silos actively weaken their organization's security posture

When IT lacks visibility into security threats and security lacks visibility into IT operations, shadow IT thrives in the gaps between.

…and how to solve it

Addressing the shadow IT challenge, particularly in this AI-centric era, requires a totally different approach from what IT and security teams might have tried in the past. Instead of attempting to eliminate shadow IT entirely — a likely futile effort — organizations need to build frameworks that provide visibility and control.

Breaking down those data silos that separate IT and security teams is a critical first step. This means implementing unified platforms that provide comprehensive visibility across the entire attack surface, including shadow IT and the vulnerabilities it creates.

With proper integration between security and IT data, organizations can move from reactive firefighting to proactive defense. They can identify unsanctioned AI tools as they appear, assess their risk levels and implement appropriate controls — all without hampering the productivity gains these tools offer.
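
As a rough illustration of what that cross-check might look like in practice, here's a minimal sketch in Python: compare the applications observed in proxy or DNS logs against a sanctioned-software inventory and flag anything that looks like an AI tool. Every name, keyword, and log entry below is a hypothetical placeholder, not any specific vendor's data or API.

```python
# Minimal sketch: flag apparent AI tools that are not on the sanctioned list.
# All names, keywords and entries are hypothetical placeholders.
SANCTIONED = {"chatgpt enterprise", "microsoft copilot"}          # IT-approved tools
AI_KEYWORDS = ("gpt", "copilot", "gemini", "claude", "summariz")  # crude heuristic

# Pretend inventory of applications observed in proxy/DNS logs or an asset scan.
observed = [
    {"app": "ChatGPT Enterprise", "user": "alice"},
    {"app": "RandomSummarizerAI", "user": "bob"},
    {"app": "CodeGenGPT", "user": "carol"},
]

def flag_shadow_ai(events):
    """Return observed apps that look like AI tools but aren't sanctioned."""
    findings = []
    for event in events:
        name = event["app"].lower()
        looks_like_ai = any(keyword in name for keyword in AI_KEYWORDS)
        if looks_like_ai and name not in SANCTIONED:
            findings.append(event)
    return findings

for hit in flag_shadow_ai(observed):
    print(f"Unsanctioned AI tool in use: {hit['app']} (user: {hit['user']})")
```

A real deployment would pull the inventory from live telemetry and feed the findings into a risk-assessment workflow rather than printing them, but the core step is the same: shared IT and security data, checked continuously against policy.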

Of course, dismantling silos is an oversimplified directive. There needs to be an ongoing culture shift where employees no longer feel the need to engage in shadow IT practices covertly. Employers should listen to employees about what tech-related barriers they face. Employee-preferred tools should be evaluated for potential inclusion. Employees must be trained on risks and understand how their choices directly impact business outcomes.

Micromanagement is certainly not the solution, nor is AI itself the problem. AI is a reality of our current workplace, and a lot of good stems from many of the new AI tools. The problem comes when employers fail to dismantle silos, tackle visibility gaps, bring shadow IT into the open and proactively prepare for the attack vectors that come with these tools.

Ignoring the problem will not make it go away. As generative AI continues to gain prevalence and capability, the problem will only worsen.

We've featured the best online cybersecurity course.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Xreal just teased its Android XR specs, and they boast a massive upgrade over its other AR smart glasses

TechRadar News - Wed, 06/11/2025 - 09:18
  • Xreal detailed Project Aura at AWE 2025
  • It will have a massive 70-degree field of view
  • It will also be tethered to a spatial computing puck running Android XR

While we'd suspected that Android XR would be a key component of Google I/O 2025, we couldn’t have predicted some of the partners Google announced that it would be working with, which include the gadget makers at the top of our best smart glasses list: Xreal.

As promised by Xreal at I/O, it has taken to the Augmented World Expo 2025 stage in Long Beach, California, to provide us with new details on its Project Aura glasses, and it’s shaping up to be one impressive device.

For me, the most important detail is that the device will apparently boast a 70-degree field of view, which is absolutely huge.

The 50-degree field of view of the Xreal One already felt large, and the 57-degree Xreal One Pro is a noticeable step up size-wise (you’ll need to wait a little longer for our full review). 70 degrees will be massive.

The field-of-view upgrade suggests – Xreal hasn’t confirmed this yet – that the Aura specs will borrow the Xreal One Pro’s new optic engine (and perhaps even upgrade it further), including its flat-prism lenses, as one of its key advantages is that it enables a greater FOV.

This optic system comes with other upgrades as well, which could help to make the Android XR glasses much easier to use all day as you walk around.

(Image credit: Google)

Another interesting tidbit is that these specs – like Xreal’s other glasses – are tethered, meaning they’re powered by an external device which they’re connected to via a cable.

We already knew Aura wouldn’t be standalone, but Xreal has revealed that the new compute device shipping with Aura won’t just be your standard phone, or the Xreal Beam Pro.

It’s something all-new, running Android XR, and powered by a Snapdragon chip from Qualcomm – which seems to be making all of the Android XR processors.

Xreal isn’t abandoning its own chipset, however. Aura itself will sport an upgraded X1S chip that’s a “modified version of X1 with even more power under the hood.” The X1 chipset is what’s inside the Xreal One and Xreal One Pro specs.

A new X1 chipset is coming (Image credit: Future)

Xreal has yet to confirm if it will sell the puck and glasses separately, but if it does then I'll be interested to see what that decision means for its approach to the upgradability of its tech going forward.

At the moment you can pick up a pair of Xreal glasses and a Beam spatial computer as a bundle, and then upgrade either or both over time. Newer glasses offer better visuals and audio if that’s your main concern, while the new Beam Pro offers improved processing and spatial features.

This is a less wasteful and generally more affordable design philosophy, as you only need to replace the one component that’s holding you back. However, as I mentioned, Xreal has yet to confirm if it will sell the puck and glasses separately. Its current wording calls Project Aura “one solution made up of the wired XR glasses and a dedicated compute device” suggesting they might also be one complete, non-upgradable package.

As for a launch date, Xreal is still keeping us mostly in the dark, though it has said Project Aura is coming in 2026, so we hopefully won’t be waiting for too long.

Xreal One Pro dead on arrival?

The Xreal One Pro (Image credit: Future)

Following this announcement some fans are starting to wonder if their Xreal One Pro purchase was a good idea – whether, if they'd waited a year or so longer, they could have snagged an Xreal Android XR setup instead.

I’ll concede that for some Xreal One Pro purchasers waiting may indeed have been the better approach, but I think others can rest easier, as while the Aura and One Pro will likely share similarities I suspect they’ll be very different devices.

For a start, while Xreal’s glasses are often at their best with the Beam Pro add-on, it isn’t required. You can use the specs with a range of USB-C compatible devices, and even many HDMI devices with the right cables.

Based on Xreal’s descriptions so far Project Aura isn’t just a wearable display for entertainment; it’s a complete spatial computing package with all the nifty Android XR features we’ve been shown.

This won’t just mean that Aura’s purpose is different from Xreal’s other glasses; I expect its price will be very different too.

Right now an Xreal Beam Pro and Xreal One Pro would cost you $848 / £768 (before factoring in any bundle or limited-time discounts). For what sounds like it will be greatly improved hardware I imagine Project Aura will cost closer to $1,000 / £1,000, if not more.

The Xreal Beam Pro (Image credit: Xreal)

And remember, you can buy the Xreal One Pro separately for just $649 / £579.

Better tech is always on the horizon at any given time, but this (for now) doesn’t look set to be a repeat of the Meta Quest Pro / Meta Quest 3 fiasco, which saw the latter, far superior product launch at less than half the price of the former.

Instead Project Aura looks set to be more of a diagonal shift, with new hardware boasting better specs and a different purpose.

If you want to wait for Project Aura you absolutely should, as you might also be tempted by any of the various Android XR, Meta smart glasses, and new Snap spectacles set to be launching in the next year or so. But choosing not to wait won't be a bad option either – the Xreal One Pro certainly isn’t going to turn out to be dead on arrival as some might fear.

You might also like
Categories: Technology

I Can't Wait for Apple's F1 Movie. Its Haptic iPhone Trailer Has Me Even More Excited

CNET News - Wed, 06/11/2025 - 09:17
Feel the F1 cars rumble and buzz with the magic of the iPhone's Taptic Engine.
Categories: Technology

Why Process Intelligence is vital for success with Agentic AI

TechRadar News - Wed, 06/11/2025 - 09:09

The pace of change in AI has felt bewilderingly fast over the past 12 months, with new technologies emerging and seemingly being superseded on a weekly basis. For decision-makers, this can be a daunting challenge. However, the encouraging news is that AI development is largely iterative: each new tool builds on the foundations laid by its predecessors.

This has brought us to the next phase of the AI revolution, Agentic AI. This latest phase centers on the development and implementation of autonomous software agents, grounded in Generative AI, that can make decisions and take action independently of human input. According to Gartner, by 2028, 33% of enterprise software applications will include AI agents, and 15% of work decisions will be made autonomously. Forward-thinking organizations are already using AI agents to uncover business value and achieve goals such as accelerating software development.

Yet, just as Generative AI needs training data to be truly effective, AI agents need a clear understanding of business context. How can leaders ensure that AI agents comprehend how their businesses operate? The answer lies in Process Intelligence (PI). PI takes data from systems such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) software to track how events progress within an organization. It creates a dynamic, living digital twin of business operations, offering a holistic view of how work gets done. This makes it a foundational tool for implementing AI in ways that actually deliver value.
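
To make the idea tangible, here's a minimal sketch in Python of the core process-mining step that underpins PI: reconstructing which activity directly follows which within each case of a process, so that deviations from the expected flow stand out. The purchase-order event log below is a made-up placeholder for illustration, not Celonis data or any Celonis API.

```python
from collections import defaultdict

# Illustrative event log only: (case_id, activity, order within the case).
# Real PI tools extract these records from ERP/CRM systems.
event_log = [
    ("PO-1", "Create PO", 1), ("PO-1", "Approve PO", 2),
    ("PO-1", "Receive Goods", 3), ("PO-1", "Pay Invoice", 4),
    ("PO-2", "Create PO", 1), ("PO-2", "Receive Goods", 2),
    ("PO-2", "Approve PO", 3), ("PO-2", "Pay Invoice", 4),
]

def directly_follows(log):
    """Count activity-to-activity transitions within each case."""
    by_case = defaultdict(list)
    for case_id, activity, order in sorted(log, key=lambda e: (e[0], e[2])):
        by_case[case_id].append(activity)
    transitions = defaultdict(int)
    for steps in by_case.values():
        for a, b in zip(steps, steps[1:]):
            transitions[(a, b)] += 1
    return transitions

for (a, b), count in sorted(directly_follows(event_log).items()):
    print(f"{a} -> {b}: {count}")
# "Receive Goods -> Approve PO" showing up here is exactly the kind of
# deviation (goods received before approval) that an AI agent could act on.
```

Aggregated over millions of real events, this directly-follows view is one building block of the 'digital twin' described above, and it is the context an AI agent needs before deciding what to do next.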

Why AI agents?

Agentic AI refers to autonomous ‘agents’ that can handle complex tasks independently. Many agents are armed with access to Large Language Models (LLMs), along with business-specific data (for instance, knowledge base articles or order information). Employees can interact with many of them using natural language, asking them to rapidly analyze business data to work out what the next step of a process should be, and even take follow-on actions automatically.

AI agents are not, however, a one-size-fits-all technology panacea that can solve every business problem right out of the box. For AI agents to succeed, they must be built to solve specific problems and they need insight into how the business really functions.

This is where PI plays a critical role. It gathers together fragmented data from across dozens or hundreds of business processes, offering AI agents a ‘common language’ to understand events such as invoicing and shipping, and offering high-quality, timely data which can enable AI agents to make better decisions.

With a ‘digital twin’ of business operations in hand, AI agents can analyze how processes truly impact each other across the whole business, and uncover opportunities to drive efficiency.

Putting AI agents to work

Businesses are already creating AI agents built to harness the power of PI and seeing tangible results. One customer has worked with Celonis to develop an AI-driven inventory to track parts and materials. Within two months, the AI tools had identified that many purchase orders were being raised for spare parts that were already in stock, and highlighted that a significant portion of spare parts were over eight years old.

An additional AI Agent uses the inventory to optimize spare part availability for plant engineers, with users able to describe the parts they need using technical descriptions or common industry terms, eliminating the need for exact part numbers.

In another case, PI and Agentic AI helped a company double the speed of software delivery by improving predictability and cutting stage waiting times by 30–40%. AI-driven tools pinpointed bottlenecks, offered predictive alerts, and suggested mitigations ranked by potential impact. Leaders could ask simple, natural-language questions to uncover delays and risks, using an AI copilot that translated complex data into clear, actionable insights.

Why AI needs PI

Agentic AI holds the potential to revolutionize enterprise operations, but its effectiveness depends on the quality of data agents have access to. PI ‘bridges the gap’ to provide AI with the input it needs, offering oversight of the totality of the business’s processes. PI is thus a vital tool for optimizing enterprise processes.

Enterprise customers who try to improve their processes using AI without the vital insights from PI all too often fail. In fact, 89% of the business leaders we recently surveyed around the globe said that giving AI the context of how their business runs is crucial if it is going to deliver meaningful results.

That is why we believe there can be no effective enterprise AI without PI. Process Intelligence is integrated into live systems, so even when systems change, it offers AI agents real-time access to the current state of processes. Think of it like the mapping data for a GPS.

Without a map, you’re just following a line on a blank screen. You won’t know why you were turning left and it would be all too easy to take a wrong turn. Similarly, Process Intelligence gives AI agents the essential context to navigate business complexity reliably.

A smarter future

Agentic AI is set to become increasingly central to enterprise success. But its impact depends on access to timely, accurate, and contextual data. Process Intelligence provides this foundation—enabling AI agents to drive meaningful change across business functions, from software development to finance.

The message is clear: Agentic AI needs the right data, and the right context. That’s exactly what Process Intelligence delivers.

We've featured the best AI chatbot for business.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology
