
Technology


Attention SAVE Borrowers: Don't Expect Student Loan Payments to Resume This Year. Do This While You Wait

CNET News - 1 hour 2 min ago
The Department of Education's website says the SAVE payment pause will last until at least this fall. Experts think it will last longer.
Categories: Technology

The programming language that defines the internet is 30 years old today: Happy birthday, Java

TechRadar News - 1 hour 5 min ago
  • Java is 30 today, but remains one of the most widely used programming languages globally
  • Java’s design philosophy prioritizes stability and backwards compatibility over flashy language trends
  • The JVM remains Java’s secret weapon, enabling true cross-platform execution for decades

On May 23, 1995, a seemingly modest programming language called Java was released by Sun Microsystems.

At the time, it introduced the catchy promise of "write once, run anywhere" - a proposition that, despite sounding ambitious, resonated deeply with a generation of developers navigating a fragmented and rapidly evolving computing landscape.

Thirty years later, Java remains one of the most widely used programming languages in the world, embedded in everything from enterprise servers to cloud-native applications. But how did a language from the mid-'90s maintain its relevance amid relentless technological shifts?

A runtime built for endurance, not fashion

As Java turns 30, it’s worth re-examining its trajectory not just through celebratory anecdotes but also through the lens of its actual utility, structural longevity, and measured adaptability.

The occasion may call for cake and nostalgia, but the real story lies in the language’s persistent grip on serious computing tasks, and the skepticism it continues to attract from those who see it as either too slow to evolve or overly burdened by its own legacy.

Java's defining characteristic has always been platform independence. It achieved this through the Java Virtual Machine (JVM), which runs compiled bytecode on any operating system equipped with a compatible JVM.

This design helped Java flourish in the heterogeneous IT environments of the late '90s and early 2000s. Unlike many languages that depend on direct compilation for each target system, Java's intermediary form allowed for smoother portability.

Over the decades, Java's APIs and class libraries expanded, but with an unusual level of care: backward compatibility was always a priority. Developers weren’t required to rewrite code with every version upgrade.

This is a crucial advantage in enterprise systems, where uptime and reliability often outweigh syntactic novelty. Today, applications written decades ago can still run with minimal modification on modern JVMs, a level of continuity that few languages offer.

A cautious evolution of language features

Java has seen gradual enhancements, often arriving later than similar features in more agile languages. Lambda expressions, for example, only became part of Java with version 8 in 2014, long after functional programming had become mainstream elsewhere.
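A minimal sketch of that shift, using nothing beyond the standard library: the same sort written first in the anonymous-class style Java required before version 8, then as a lambda expression.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Gosling", "Ada", "Turing");

        // Pre-Java 8: an anonymous inner class just to express one comparison
        names.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8+: the same logic as a lambda expression
        names.sort((a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(names); // [Ada, Turing, Gosling]
    }
}
```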

In its early years (1995–2000s), Java established itself in enterprise and mobile development with the introduction of Java 2, which included J2SE, J2EE, and J2ME. J2EE became the standard for web and enterprise applications, while J2ME gained popularity on mobile devices.

Java 5, released in 2004, marked a turning point with the addition of generics, enhanced for-loops, and annotations, moving Java closer to modern programming practices.
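Those Java 5 additions can be sketched in a few lines; the class and method names here are illustrative, not from the article.

```java
import java.util.ArrayList;
import java.util.List;

public class Java5Demo {
    @Deprecated  // annotations arrived in Java 5
    static int legacySum(List<Integer> xs) {
        int total = 0;
        for (int x : xs) {  // enhanced for-loop, also new in Java 5
            total += x;
        }
        return total;
    }

    public static void main(String[] args) {
        // Generics (Java 5) make the element type explicit and compiler-checked
        List<Integer> xs = new ArrayList<>();
        xs.add(1);
        xs.add(2);
        xs.add(3);
        System.out.println(legacySum(xs)); // 6
    }
}
```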

From Java 9 onward, the language has evolved steadily. The module system (Java 9), local variable type inference with var (Java 10), pattern matching (Java 16), and improvements in memory management reinforced Java’s adaptability.
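Two of those later features side by side, in a hypothetical snippet: `var` inference from Java 10 and pattern matching for `instanceof` from Java 16, which folds the type test and the cast into one step.

```java
public class VarPatternDemo {
    static String describe(Object obj) {
        // Pattern matching for instanceof (standardized in Java 16):
        // the test and the cast collapse into a single binding, 's'
        if (obj instanceof String s) {
            return "string of length " + s.length();
        }
        return "something else";
    }

    public static void main(String[] args) {
        // Local variable type inference (Java 10): the compiler infers String
        var message = describe("hello");
        System.out.println(message); // string of length 5
    }
}
```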

Java 17, a long-term support release, reaffirmed the platform’s role as a robust and modern choice for software development.

Java in the cloud and beyond

Despite its age, Java has found a second wind in cloud computing. It is particularly well-suited for cloud-native applications, thanks in part to the emergence of GraalVM, a runtime that compiles Java into native machine code.

GraalVM’s native images can dramatically reduce startup times and memory usage, a key consideration for containerized environments and serverless platforms like AWS Lambda.

Java has also extended its reach into machine learning and high-performance computing through projects like Panama, which improves interoperability with native C/C++ libraries.

With tools like jextract, Java developers can generate bindings to foreign code easily, sidestepping the clunky and error-prone Java Native Interface (JNI).
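The mechanism underneath jextract is the Foreign Function & Memory API, finalized in Java 22 under Project Panama. A minimal sketch (requires Java 22 or later) calling the C standard library's strlen directly, with no generated bindings and no JNI glue:

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {
    // Look up strlen in the C library and call it through a method handle
    static long strlen(String text) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle handle = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

        try (Arena arena = Arena.ofConfined()) {
            // Copy the Java string into native memory as a NUL-terminated C string
            MemorySegment cString = arena.allocateFrom(text);
            return (long) handle.invoke(cString);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(strlen("Hello, Panama")); // 13
    }
}
```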

This technical depth is part of the reason Java continues to power complex systems. It's not flashy, but it's functional, and in enterprise environments, functionality beats fashion every time.

Projects shaping Java’s future and the evolution of syntax

The OpenJDK community has multiple projects aiming to refine Java’s performance and usability.

Project Leyden focuses on optimizing startup times and reducing memory footprints. Project Lilliput is exploring ways to shrink the object header to as little as 32 bits. Several other projects are underway, though not all have yielded immediate results.

Some, like Project Amber, show incremental but slow progress, while others, like Babylon, seem to outpace current implementations.

Nevertheless, one of the more welcome modernizations has been the addition of record types, which reduce boilerplate in data-holding classes. This improvement, previewed via JEP 359 and finalized in Java 16, came out of Project Amber's drive to streamline everyday coding.
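To illustrate the boilerplate reduction: this one-line record declaration (the `Point` type here is a made-up example) gets a constructor, accessors, equals, hashCode and toString generated for free.

```java
public class RecordDemo {
    // One line replaces the classic data-class boilerplate:
    // constructor, x()/y() accessors, equals, hashCode and toString
    record Point(int x, int y) { }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.x() + ", " + p.y());      // 3, 4
        System.out.println(p.equals(new Point(3, 4))); // true - value-based equality
    }
}
```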

Pattern matching and enhanced switch statements are also nudging Java closer to functional programming languages in expressiveness.
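A sketch of that expressiveness, assuming Java 21 or later for pattern matching in switch: combined with a sealed hierarchy, the compiler can verify the switch is exhaustive, much like algebraic data types in functional languages. The shape types are invented for illustration.

```java
public class SwitchDemo {
    sealed interface Shape permits Circle, Square { }
    record Circle(double radius) implements Shape { }
    record Square(double side) implements Shape { }

    // Switch expressions (Java 14) plus pattern matching for switch (Java 21):
    // every permitted subtype is covered, so no default branch is needed
    static double area(Shape shape) {
        return switch (shape) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square s -> s.side() * s.side();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3))); // 9.0
    }
}
```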

However, these changes are often incremental and restricted to preview status for multiple releases before becoming permanent.

James Gosling, the creator of Java

Java’s 30th anniversary also brings renewed attention to 70-year-old James Gosling, the language’s creator.

His reflections are both proud and critical. Gosling has expressed satisfaction in hearing from developers whose careers were built on Java.

Looking back on Java’s evolution, he noted that features like lambdas, introduced in JDK 8, were ones he wished had been part of the language from the start.

Still, he emphasized the importance of thoughtful language design, explaining, “I never wanted to put in something that was not right.”

On AI, he’s blunt: “It’s mostly a scam,” he said, adding, “The number of grifters and hypesters in the tech industry is mind-rotting.”

His views on AI-assisted coding tools are similarly sharp. While he finds them amusing for basic tasks, he notes that “as soon as your project gets even slightly complicated, they pretty much always blow their brains out.”

Conclusion: Longevity through caution and clarity

Java’s 30th birthday is more than a symbolic milestone: it highlights a rare achievement in software engineering, staying relevant without constant reinvention.

While newer languages come with sleek syntax and flashy tooling, Java remains a trusted workhorse in sectors where stability, security, and predictability matter most.

Whether it's running a logistics backend, a financial system, or a cloud-native microservice, Java’s design ethos - pragmatism over novelty - continues to prove itself.

Its legacy isn’t built on hype, but on solving real problems at scale. And in that regard, it may very well be just getting started.


Nintendo Switch 2 GameChat will require a mobile number

TechRadar News - 1 hour 6 min ago
  • Nintendo Switch 2 GameChat will require a phone number to use
  • This will presumably help prevent children from accessing the service without permission
  • It offers voice calls out of the box or video calls with the optional Nintendo Switch 2 camera accessory

The Nintendo Switch 2 GameChat feature will require a mobile phone number to use.

As spotted by Notebookcheck, this was disclosed in the 'Ask the Developer Vol. 17, GameChat – Chapter 1' interview on the Nintendo website.

"Mobile phone number registration required to use GameChat. Children must get approval from a parent or guardian via the Nintendo Switch Parental Controls app to use GameChat," a small notice towards the top of the page reads.

The US GameChat section of the site expands on this a little further, explaining that "as an additional security measure, text message verification is required to set up GameChat."

This is the same phone number registered to your Nintendo account. Presumably, those who are banned from using GameChat for poor behavior would be unable to use the same phone number to access it on another account.

The requirement is also likely intended to help prevent children from accessing the service without parental permission, which is required for those under the age of 16. Children that young are unlikely to have access to a mobile phone, potentially easing some parental concerns that GameChat could be used to communicate with strangers online.

Although it can be used via the console's in-built microphone, GameChat is also compatible with the Nintendo Switch 2 camera accessory. Sold separately, the Nintendo Switch 2 camera allows for video chat functionality.

You are only able to start GameChat sessions with people on your Nintendo friends list, who must be invited to the session.

The Nintendo Switch 2 launches globally on June 5. UK pre-orders and US pre-orders are now live.


These outrageously cheap dual-driver headphones promise affordable Hi-Res Audio thrills, and come from a reliable company

TechRadar News - 1 hour 36 min ago
  • The EarFun Tune Pro cost $69.99 / £59.99 (about AU$108) at launch
  • Hi-Res Audio, dual drivers and Bluetooth 5.4
  • Hybrid ANC to block 45dB of unwanted audio

EarFun is carving out an enviable reputation for its high-quality and low-priced headphones such as the 4.5-star EarFun Wave Pro. And now there's an even higher spec pair of over-ears with a refreshingly low price.

The new EarFun Tune Pro have an impressive specification and cost just $69.99 / £59.99 at launch, thanks to a coupon that knocks $20 / £20 off their official $89.99 / £79.99 price. In practice, that lower figure is effectively the 'real' price; whenever they sit at the higher one, it usually just means a deal is coming soon and retailers want to show a nice discount amount.

Earfun Tune Pro for $69.99 at Amazon US
Earfun Tune Pro for £59.99 at Amazon UK

That lower price for this kind of feature set, including a dual-driver speaker setup, is extremely tempting.

We wouldn't expect these headphones to go head to head with something like a pair of Bowers & Wilkins Px7 S3. But we've been consistently impressed by EarFun's value for money, and if these new 'phones surpass the "admirable" sound quality of the Wave Pro (as described by our review) they could be a great budget buy.

If they're really good, they might even restore EarFun's crown as maker of the best noise cancelling headphones for budget buyers, an honor that was recently passed to EarFun's arch-rival 1More.

EarFun Tune Pro: key features

The Tune Pro are dual-driver headphones with a large 40mm PET composite film driver and an additional 10mm LCP polymer driver for the higher-frequency sounds.

The idea of a driver pair like this is that the large driver can focus on low-end sounds while the smaller driver handles the upper end, which should improve the overall depth of sound compared to a single driver. But as with all audio engineering, it depends on execution.

Also included is a new Theater Mode sound profile for "enhanced 2-channel stereo and 360-degree spatial sound formats".

There's hybrid ANC promising noise reduction of up to 45dB, and a five-microphone setup with AI for clear voice calls. Battery life is up to 120 hours (presumably with ANC off, but that's a very impressive number in any case), and you can listen in cabled mode as well as wireless.

Bluetooth is 5.4 with multi-point and a low-latency mode for gaming, and the headphones are Hi-Res Audio certified, although EarFun hasn't published details of supported audio quality or wireless codecs. If you squint at the promotional images, you'll see small print stating that the Hi-Res certification applies only to wired listening.

The new EarFun Tune Pro headphones are available now from EarFun and from Amazon.



Sam Altman and Jony Ive’s ChatGPT device is probably going to look like an iPod Shuffle you can wear around your neck - report reveals more about the hyped AI hardware

TechRadar News - 1 hour 38 min ago
  • Ive and Altman announced their company, io, was purchased by OpenAI yesterday
  • The two entrepreneurs are working on creating the next generation of AI hardware
  • A new report claims the device will look like an iPod Shuffle and can be worn around your neck, like a necklace

Jony Ive and Sam Altman just announced an AI device made by their company, io, is in the works. Now we've got even more info about the mysterious product, and it's rumored to look like an iPod Shuffle.

According to industry insider and renowned leaker Ming-Chi Kuo, the current OpenAI ChatGPT hardware prototype "is slightly larger than the AI Pin, with a form factor as compact and elegant as an iPod Shuffle."

Kuo revealed multiple new insights into the product on X, detailing that io's product is expected to enter mass production at the start of 2027.

Kuo says while the design and specifications may change before the device enters mass production, it's expected to "have cameras and microphones for environmental detection, with no display functionality."

Not only is the device expected to look like an iPod Shuffle that can be worn around your neck, but it is also "expected to connect to smartphones and PCs, utilizing their computing and display capabilities."

This information gives us a much deeper insight into what the io product actually is, following OpenAI's $6.5 billion acquisition of the company.

In the announcement video, Ive, famous for designing the first iPhone, and Altman, OpenAI's CEO, talked for nine minutes in a café without really explaining what the product is, other than that this is "an extraordinary moment" and that whatever the pair are working on is going to completely revolutionize the way we interact with artificial intelligence.

"My industry research indicates the following regarding the new AI hardware device from Jony Ive's collaboration with OpenAI: 1. Mass production is expected to start in 2027. 2. Assembly and shipping will occur outside China to reduce geopolitical risks, with Vietnam currently the…" – Ming-Chi Kuo on X, May 22, 2025

So... It's basically an AI Pin

After reading Kuo's report, it's now clearer than ever that this upcoming product is essentially going to be a better version of the Humane AI Pin: ChatGPT in a small device you can throw in your pocket or wear around your neck.

While this gives us a further indication of what to expect, there's still a long time before io's first product enters mass production, and that could mean major changes over the next year.

Kuo says, "AI integrated into real-world applications, often termed 'physical AI,' is widely recognized as the next critical trend", and while we may not understand the necessity for these products yet, in two years' time everyone might be interacting with ChatGPT and other AI models in a whole new way.


Rumored Nvidia RTX 5080 Super specs disappoint some gamers, but I don’t think there’s anything to worry about with this GPU

TechRadar News - 1 hour 58 min ago
  • A leak has detailed the claimed specs of Nvidia’s RTX 5080 Super
  • Some gamers might see this refreshed GPU as underwhelming – it doesn’t add any extra cores into the mix, notably
  • However, there are robust upgrades elsewhere with the video memory and also clock speeds

Another rumor about Nvidia’s RTX 5080 Super has been aired and we’ve got a look at what are supposedly the full specs of this GPU.

As VideoCardz pointed out, leaker Kopite7kimi has posted the claimed specs for the rumored graphics card on X, and that may mean Nvidia has just provided said details to its graphics card making partners (and they leaked from there). Or, it might mean precisely nothing, because as ever, rumors, much like demons, need considerable salting.

GeForce RTX 5080 Super
PG147-SKU35
GB203-450-A1
10752 FP32
256-bit GDDR7 24G 32Gbps
400+W
(May 20, 2025)

The key parts of the specifications are that the RTX 5080 Super will supposedly use the same GPU as the RTX 5080, which is the GB203 chip. As the RTX 5080 has already maxed out the cores on that chip, the core count will be the same with the Super version of this graphics card – there’s no room to maneuver to increase it.

The big upgrade comes from the leap from 16GB to 24GB of video RAM (VRAM), and as well as that 50% uplift, the leaker believes Nvidia is going to use faster memory modules here (32Gbps rather than 30Gbps).

We’re also told that the TDP of the RTX 5080 Super is going to sit at 400W, or it might use even more power than that.

Analysis: Crunching the specs and not forgetting about clocks


Looking at those specs, you might think: how is the RTX 5080 Super going to be a tempting upgrade on the vanilla version of the GPU? It has the same CUDA core count, and somewhat faster video memory, but only around 7% more VRAM bandwidth than the RTX 5080. So, what gives?

Well, don’t forget that added to that VRAM boost, the RTX 5080 Super is expected to have considerably faster clock speeds. Pushing those clocks faster is why this incoming GPU is going to chug more than 400W (perhaps a fair bit more) compared to 360W for the plain RTX 5080.

So, if you’re worried that the RTX 5080 Super may represent an underwhelming prospect in terms of an upgrade over the RTX 5080, don’t be. (Although you may have concerns about your PC’s power supply instead). All this is in line with previous speculation that we’ll see something like a 10% performance boost with the RTX 5080 Super versus the basic version of the GPU, or maybe even slightly more (up towards 15%, even).

Plus that much bigger allocation of 24GB of VRAM is going to make a difference in some scenarios where 4K gaming coupled with very high graphics settings gets more demanding with certain games. (A situation that’s only going to get worse as time rolls on, if you’re thinking about future-proofing, which should always be something of a consideration).

On top of this is the fact that Nvidia is falling out of favor in the consumer GPU world, with AMD’s RDNA 4 graphics cards making a seriously positive impact on Team Red’s chances – and sales. The latest RX 9060 XT reveal has pretty much gone down a treat, too, so I don’t think Nvidia can risk damaging its standing with PC gamers any further, frankly, by pushing out subpar Super refreshes.

Speaking of refreshes – with the emphasis on the plural – previous rumors have also theorized an RTX 5070 Super graphics card with 18GB of VRAM is on the boil, but that’s notably absent from Kopite7kimi’s post here. That doesn’t mean it isn’t happening, but it could be read as a sign that the RTX 5080 Super is going to arrive first.

Again, previous spinning from the rumor mill indicates a very broad 2025 release timeframe for the RTX 5080 Super, but if the specs really are decided on at this stage – and it’s a huge if – that suggests Nvidia intends to deploy this GPU sooner, rather than later, this year.


Samsung's new cheaper earbuds with a tempting battery upgrade seem to be imminent, after leaking on Samsung's websites

TechRadar News - 2 hours 10 sec ago
  • Samsung's next budget earbuds have leaked again
  • Regulatory filings show much bigger batteries than the Galaxy Buds FE
  • Expect pricing around $99 / £99 / AU$149

Earlier this month we reported that Samsung's incoming pair of affordable earbuds – possibly called the Samsung Galaxy Buds Core, and the likely successor to the Galaxy Buds FE – could deliver a much-needed battery boost, with significantly enhanced battery capacity in both of the buds and in the case too. That information came via leaked regulatory filings, and now another leak adds more confirmation.

This time the leaks are from Samsung itself. As SamMobile reports, support pages for the imminent earbuds are now live on Samsung's portals, including those in Russia, Turkey and the UAE.

And in a sheer coincidence, the Samsung Galaxy Buds FE appear to be out of stock in most of those markets.

There's some speculation that the new earbuds will more closely resemble the Galaxy Buds 3 (Image credit: Future)

Samsung Galaxy Buds Core: what we know so far

It looks like the battery capacity is up from 60mAh per bud to 100mAh, and from 479mAh to 500mAh for the case. Factor in the expected chipset improvements from newer hardware, and that could mean a significant boost to the buds' playback time. The current Buds FE deliver about six hours with ANC on and nine with it off.

The new model number is SM-R410 (the Galaxy Buds FE were SM-R400), and there is speculation that we'll see a new design, possibly closer visually to the Samsung Galaxy Buds 3; that would make room for those bigger batteries.

Samsung hasn't announced these buds yet, so we don't know pricing or availability, but clearly if support pages are going up then a product launch can't be too far away.

We'd expect the new buds to be priced similarly to the Galaxy Buds FE, subject to tariff-related hikes: those launched at $99 / £99 / AU$149 in 2023.


Here's how much the Samsung tri-fold could cost – though you probably won't get the chance to buy it

TechRadar News - 2 hours 3 min ago
  • The tri-fold Samsung phone might cost $3,000-$3,500
  • However, it's likely to be limited to South Korea and China
  • A launch has been predicted for sometime in July

We've been ready and waiting for the Samsung tri-fold phone for months now – remember it was officially teased back in January – and as its launch gets closer, there's a new leak hinting at a high price for the foldable.

This comes from well-respected tipster Yogesh Brar, who says we can expect a price tag of around $3,000-$3,500. With a straight currency conversion at today's rates (which Samsung won't use), that's £2,225-£2,595 or AU$4,650-AU$5,425.

However, if you live in a country using any of those currencies, it sounds like you're not going to be able to spend your cash on this device. Brar reckons the handset is launching in "limited quantities", and only in South Korea and China (as previously rumored).

Samsung has previous form for this: last year's Samsung Galaxy Z Fold Special Edition was also limited to South Korea and China. Perhaps the company isn't sure what the demand for these very expensive foldables would be like globally.

One more fold

Galaxy Tri-fold all set to launch in Q3 this year
Samsung is only launching it in 2 markets: South Korea & China
Limited quantities with a price between $3,000 - 3,500
(May 21, 2025)

That high price isn't much of a surprise of course. As our Samsung Galaxy Z Fold 6 review will tell you, that phone launched at a starting price of $1,899.99 / £1,799 / AU$2,749, and the new model will come with a bigger screen and an extra hinge.

Then there's the Huawei Mate XT, which costs 19,999 yuan in China. That's roughly $2,775 / £2,060 / AU$4,305 at today's conversion rates. These are clearly expensive and difficult to make, and that means high prices and limited production runs.

Since rumors of a Samsung tri-fold first started swirling, we've heard that the handset could be called the Galaxy G Fold, and that it'll share the same hinge technology expected to appear in the upcoming Samsung Galaxy Z Fold 7.

The tri-fold, the Galaxy Z Fold 7, and the Galaxy Z Flip 7 are all expected to be announced at an Unpacked event sometime in July, though on-sale dates may vary. At the same launch, we should also see the new Samsung Galaxy Watch 8.


Openreach declares aim to accelerate UK high-speed broadband rollout

TechRadar News - 2 hours 3 min ago
  • BT-owned Openreach confirms plans to speed up full fibre rollout
  • 18 million properties are connected – 25 million by 2026, 30 million by 2030
  • BT also confirmed a deal to target "hard-to-reach" properties in South and West Wales

UK broadband network and infrastructure giant Openreach has committed to rolling out full fibre broadband across the UK more quickly after acknowledging that only 37% of customers are connected to the network.

The news coincides with an undisclosed "increased investment" from BT Group – Openreach's owner.

According to the company, more than 18 million homes and businesses nationwide have benefitted from new infrastructure, including four million in the past year, but with an extra cash injection from BT, it hopes to extend that footprint even faster.

Openreach wants more homes and businesses to have full fibre

Openreach "now expects to accelerate towards its target of reaching 25 million premises by December 2026," a press release reads, noting how the company's build rate is expected to increase by 20%.

The BT-owned network and infrastructure firm says it's seen record demand over the past year, connecting one customer to its full fibre network every 17 seconds.

"We’re bringing life changing connectivity to all corners of the country, and we’re determined to go further and faster, so we’re proud of the confidence being shown in us through this investment," Openreach CEO Clive Selley said.

That growth is expected to continue into the end of the decade. Openreach envisions 30 million properties being connected to its full fibre network by 2030, adding a further five million after its December 2026 target.

BT recently confirmed a £9.8 million contract to extend its full fibre network to 1,800 "hard-to-reach sites" in Pembrokeshire, Swansea, Neath Port Talbot and Carmarthenshire.

BT Group CEO Allison Kirkby added: "Our new network is helping to grow the economy, create jobs, delight customers and deliver value to our shareholders."


The Nintendo Switch 2 Pro Controller is already doing one thing better than the DualSense Edge and other premium gamepads

TechRadar News - 2 hours 6 min ago
  • The Nintendo Switch 2 Pro Controller's GL/GR buttons have some handy features
  • You can remap them without exiting your game session
  • Furthermore, the assignments will be saved on a per-game basis

The Nintendo Switch 2 Pro Controller has a bit of an ace up its sleeve, and it relates to the remappable GL/GR buttons found on the rear of the pad.

A spotlight for the Nintendo Switch 2 Pro Controller was featured on the Nintendo Today mobile app (spotted by GamesRadar), showcasing some of the functionality of these extra buttons.

It confirmed that the GL/GR buttons have a couple of fantastic quality-of-life features that are sorely missing from the likes of the DualSense Edge and Xbox Elite Wireless Controller Series 2 - two premium gamepads that also house additional remappable buttons.

With the Nintendo Switch 2 Pro Controller, the major difference is that the GL/GR remappable buttons can be assigned (and reassigned) without backing out of your current play session.

By holding down the Home button, you'll gain access to a 'quick settings' menu, within which you can assign the GL/GR buttons instantaneously. Furthermore, the controller will remember which inputs have been assigned to these buttons on a per-game basis.

This differs greatly from, for example, the DualSense Edge. While Sony's controller has a pair of exceptionally handy Function switches that let you swap button profiles on the fly, said profiles still need setting up in a separate menu on your PS5's dashboard.

For Nintendo Switch 2 games, this makes it incredibly easy not only to quickly assign a secondary input to the GL/GR buttons, but also to test it out immediately and see how it feels in-game.

Quick remappable button assignment, in and of itself, is nothing new. Plenty of the best Nintendo Switch controllers feature button combination macros that let you remap on the fly. The downside here, though, is that this can be quite cumbersome, and you'll often need to dig into a controller's instruction manual to figure out what these macros are.

We're now less than a couple of weeks away from the Nintendo Switch 2's launch on June 5. Be sure to check out TechRadar Gaming around that time, as we'll have plenty of coverage on the console, its hardware, and games in the months to come.


Marvel delays release of Avengers: Doomsday and its sequel, and now I've got two big questions about what happens next

TechRadar News - 2 hours 17 min ago
  • Marvel has delayed the release of Avengers: Doomsday and Avengers: Secret Wars
  • The movies were supposed to arrive in May 2026 and May 2027
  • Their worldwide launch dates have been pushed back seven months

Marvel has delayed the release of Avengers: Doomsday and its sequel.

In a move that won't come as a surprise to many, the comic titan has pushed back the launch dates for Doomsday and its follow-up Avengers: Secret Wars.

The pair had been slated to land in theaters on May 1, 2026 and May 7, 2027. Now, you can expect to see Doomsday release in theaters worldwide seven months later than planned, with Avengers 5 now set to arrive on December 18, 2026 and Secret Wars' launch pushed to December 17, 2027.

The next two Avengers movies are set to be the biggest undertakings in Marvel Studios' history. Per Deadline, sources close to the production of both films say they're among the most ambitious projects that parent company Disney has ever produced, too. To quote Thanos, then, it was inevitable that Marvel would need more time to make both flicks.

Why Avengers 5 and 6's release-date delays are so significant

Marvel hasn't said what impact Doomsday's delayed release will have on its other projects (Image credit: Marvel Studios)

Make no mistake, Disney and Marvel have made the right call to delay the release of Doomsday and Secret Wars. The overall response to Marvel Cinematic Universe (MCU) projects since 2019's Avengers: Endgame has been mixed. While some films and Disney+ shows have been critical and commercial successes, others haven't been greeted as enthusiastically or made as much money as Marvel would have hoped.

Disney and Marvel can't afford to fumble the proverbial bag with Doomsday and Secret Wars, especially given the amount of money it'll collectively cost to make them. Add in the talent behind and in front of the camera – Avengers: Doomsday's initial cast alone is 27-deep – and the pressure to deliver two more top-tier Avengers movies is most certainly on.

The release of Spider-Man's next MCU adventure could be pushed back, too (Image credit: Sony Pictures/Marvel Entertainment)

Their release date postponements also raise other potential issues.

For starters, Doomsday and Secret Wars' delay could have a significant impact on Spider-Man: Brand New Day. The webslinger's next big-screen adventure was set to arrive between the pair, with its initial launch date penciled in for July 24, 2026. Spider-Man 4 suffered its own release setback in February, but its launch was only delayed by a week to July 31, 2026.

The big question now is whether Brand New Day will swing into cinemas on that revised date. Depending on which online rumors you believe, Spider-Man 4 will either be a multiverse-style movie like Spider-Man: No Way Home was, or a more grounded, street-level flick.

If it's the former, and if Brand New Day's plot is dependent on events that occur in, or run parallel to, Avengers: Doomsday, the next Spider-Man movie's launch date will likely have to be pushed back again.

Should Brand New Day be moved into 2027, we could see a repeat of 2024, when only one MCU film – Deadpool & Wolverine – landed in theaters, with 2026's sole Marvel movie being Doomsday. That's on the basis that Avengers 5, aka the second Marvel Phase 6 film, doesn't suffer another release date setback.

Will Marvel decide to move some of its 2025 Disney+ offerings into early 2026? (Image credit: Marvel Television/Disney Plus)

These delays could have a huge knock-on effect for Marvel's small-screen offerings, too.

If Brand New Day keeps its mid-2026 launch date, a whole year will have passed between the final MCU film of 2025 – The Fantastic Four: First Steps, which arrives on July 25 – and Tom Holland's next outing as Peter Parker's superhero alias. That's not necessarily a bad thing, but it means MCU devotees will look to Disney+, aka one of the world's best streaming services, for their Marvel fix.

Fortunately, Marvel has plenty of TV-based MCU content in the pipeline. From Ironheart's release in late June to Daredevil: Born Again season 2's launch next March, there are currently five live-action and animated series set to debut on Disney's primary streamer.

In light of Doomsday's delay, though, will Marvel tweak its Disney+ lineup and further spread out its small-screen content to fill the void?

Right now, Born Again's second season is the only series confirmed to arrive in 2026. There are other shows in the works that are expected to debut next year, but they aren't likely to be ready until mid- to late 2026. To offset a potentially months-long barren spell in the MCU that Doomsday's delayed release has caused, Marvel might opt to push animated series Eyes of Wakanda or Wonder Man, the final live-action MCU TV show of 2025, into early 2026.

I guess we'll find out more about any further release-schedule changes when Marvel takes to the Hall H stage for its now-annual presentation at San Diego Comic-Con, the 2025 edition of which runs from July 24-27.

You might also like
Categories: Technology

It's National Don't Fry Day. Here's How to Check Your Skin for Signs of Cancer

CNET News - 2 hours 56 min ago
In honor of National Don't Fry Day, here's what you should know about checking for skin cancer at home and when to see a doctor.
Categories: Technology

Rethinking power: how AI is reshaping energy demands in data centers

TechRadar News - 3 hours 15 min ago

Today, artificial intelligence is revolutionizing virtually every industry, but its rapid adoption also comes with a significant challenge: energy consumption.

Data centers are racing to accommodate the surge in AI-driven demand and are consuming significant amounts of electricity to support High-Performance Computing, cloud computing services, and the many digital products and services we rely on every day.

Why are we seeing such a spike in energy use? One reason is heavy reliance on graphics processing unit (GPU) chips, which are much faster and more effective than CPUs at the parallel processing tasks AI requires. More than just an advantage, this efficiency has now made GPUs the new standard for training and running AI models and workloads.

Yet it also comes at a high cost: soaring energy consumption. Each GPU can require up to four times more electricity than a standard CPU, a step change that is quickly – and dramatically – reshaping energy demands in the data center.

For example, consider these recent findings:

The New York Times recently described how OpenAI hopes to build five new data centers that would consume more electricity than the three million households in Massachusetts.

According to the Center on Global Energy Policy, GPUs and their servers could make up as much as 27 percent of the planned new generation capacity for 2027 and 14 percent of total commercial energy needs that year.

A Forbes article predicted that Nvidia’s Blackwell chipset will boost power consumption even further – a 300% increase across a single generation of GPUs – with AI systems driving consumption up at an even higher rate.
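To put the "up to four times" multiplier in perspective, a rough back-of-envelope sketch shows how quickly it compounds at rack scale. All figures below are hypothetical assumptions for illustration, not numbers from the article; real power draw varies widely by chip and configuration:

```python
# Hypothetical figures for illustration only.
CPU_WATTS = 250                 # assumed draw of a typical server CPU
GPU_WATTS = CPU_WATTS * 4       # the article's "up to four times" multiplier
GPUS_PER_SERVER = 8             # common accelerator-server configuration
SERVERS_PER_RACK = 4            # assumed rack density

# Accelerator power for one rack, in kilowatts.
rack_kw = GPU_WATTS * GPUS_PER_SERVER * SERVERS_PER_RACK / 1000
print(rack_kw)  # 32.0 kW for the accelerators alone, before CPUs or cooling
```

Under these assumptions, a single rack's accelerators alone draw tens of kilowatts, which is why utility capacity planning has become a first-order concern for operators.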

These findings raise important power-related questions: Is AI growth outpacing the ability of utilities to supply the required energy? Are there other energy options data centers should consider? And maybe most importantly, what will data centers’ energy use look like in both the short and long term?

Navigating Power Supply and Demand in the AI Era

Despite growing concerns, AI has not yet surpassed the grid’s capabilities. In fact, some advancements suggest that AI energy consumption could even decrease. Many AI companies expended vast amounts of processing power to train their initial models, but newer players like DeepSeek now claim that their systems operate far more efficiently, requiring less computing power and energy.

However, AI’s sudden rise is only one factor in a perfect storm of energy demands. For example, the larger electrification movement, which has introduced millions of electric vehicles to the grid, and the reshoring of manufacturing to the U.S. are also straining resources. AI adds another layer to this complex equation, raising urgent questions about whether existing utilities can keep pace with demand.

Data centers, as commercial real estate, are also subject to the age-old adage, “location, location, location.” Many power generation sites – especially those harnessing solar and wind – are located in rural parts of the United States, but transmission bottlenecks make it difficult to move that power to urban centers where demand is highest. Thus far, geodiversity and urban demand have not yet driven data centers to these remote areas.

This could soon change. Hyperscalers have already demonstrated their willingness and agility in building data centers in the Arctic Circle to take advantage of natural cooling to reduce energy use and costs. A similar shift may take hold in the U.S., with data center operators eyeing locations in New Mexico, rural Texas, Wyoming, and other rural markets to capitalize on similar benefits.

Exploring Alternative Energy Solutions

As strain on the grid intensifies, alternative energy solutions are gaining traction as a means of ensuring a stable and sustainable power supply.

One promising development is the evolution of battery technology. Aluminum-ion batteries, for example, offer several advantages over lithium-based alternatives. Aluminum is more abundant, sourced from conflict-free regions, and free from the geopolitical challenges associated with lithium and cobalt mining. These batteries also boast a solid-state design, reducing flammability risks, and their higher energy density enables more efficient storage, which helps smooth out fluctuations in energy supply and demand – often visualized as the daily “duck curve.”

Nuclear energy is also re-emerging as a viable solution for long-term, reliable power generation. Advanced small modular reactors (SMRs) offer a scalable, low-carbon alternative that can provide consistent energy without the intermittency of renewables.

However, while test sites are under development, SMRs have yet to begin generating power and may still be five or more years away from large-scale deployment. Public perception remains a key challenge, as strict regulations often require plants to be situated far from populated areas, and the long-term management of nuclear waste continues to be a concern.

Additionally, virtual power plants (VPPs) are revolutionizing the energy landscape by connecting and coordinating thousands of decentralized batteries to function as a unified power source. By optimizing the generation, storage, and distribution of renewable energy, VPPs enhance grid stability and efficiency. Unlike traditional power plants, VPPs do not rely on a single energy source or location, making them inherently more flexible and resilient.

Securing a Sustainable Power Future for AI and Data Centers

While it’s hard to predict what lies ahead for AI and how much more demand we’ll see, the pressure is on to secure reliable, sustainable power, now and into the future.

As the adoption of AI tools accelerates, data centers must proactively seek sustainable and resilient energy solutions. Embracing alternative power sources, modernizing grid infrastructure, and leveraging cutting-edge innovations will be critical in ensuring that the power needs of AI-driven industries can be met – now and in the years to come.

We list the best web hosting services.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Data: the linchpin of lucrative SaaS exits

TechRadar News - 4 hours 16 min ago

For SaaS businesses eyeing a successful exit, particularly when engaging with sophisticated Private Equity (PE) and tech investors, the era of simply showcasing impressive top-line growth is over.

Today, data reigns supreme. It's the bedrock upon which compelling value stories are built, the lens through which operational efficiency and scalability are scrutinized, and ultimately, the key to unlocking those coveted higher valuation multiples.

A robust data strategy, coupled with the ability to extract meaningful insights, is no longer a ‘nice-to-have’ but a fundamental requirement for securing a lucrative exit in today’s competitive landscape.

What investors are looking for

So, what exactly are these discerning investors looking for in the data of a prospective SaaS acquisition? The foundation, without a doubt, remains the ARR bridge, or what can be referred to as the ‘revenue snowball’. This isn't just about presenting a static ARR figure; it’s about demonstrating how that recurring revenue has evolved over time. Investors will dissect this data from every angle – group-wide, segmented by product, customer cohort, and geography.

They want to see the trajectory, understand the drivers of growth and churn, and identify any potential vulnerabilities. Therefore, your ARR bridge needs to be more than just a spreadsheet; it needs to be a dynamic, drillable, and rigorously stress-tested tool that can withstand the intense scrutiny of due diligence.
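The mechanics of an ARR bridge are simple: it reconciles opening recurring revenue to closing recurring revenue through its growth and churn components. A minimal sketch of that arithmetic, with all figures and names hypothetical rather than drawn from any real business:

```python
def arr_bridge(opening_arr, new_business, expansion, contraction, churn):
    """Reconcile opening ARR to closing ARR for one reporting period.

    All inputs are positive amounts; contraction and churn are subtracted.
    """
    closing = opening_arr + new_business + expansion - contraction - churn
    return {
        "opening_arr": opening_arr,
        "new_business": new_business,    # ARR from new customers
        "expansion": expansion,          # upsell to existing customers
        "contraction": -contraction,     # downgrades
        "churn": -churn,                 # lost customers
        "closing_arr": closing,
    }

# Illustrative period: $10M opening ARR growing to $12.5M closing ARR.
bridge = arr_bridge(opening_arr=10_000_000, new_business=2_500_000,
                    expansion=1_200_000, contraction=400_000, churn=800_000)
print(bridge["closing_arr"])  # 12500000
```

In practice investors expect this same reconciliation to be drillable by product, cohort, and geography, which is why a static summary spreadsheet rarely survives due diligence.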

Beyond the ARR bridge, several other key insights are paramount. Sales pipeline reporting provides a crucial forward-looking perspective. Investors want to see a healthy, well-managed pipeline with clearly defined stages, realistic conversion rates, and accurate forecasting. This demonstrates the predictability and sustainability of future revenue growth. Similarly, classic FP&A reports remain essential, offering a historical view of financial performance, profitability trends, and cost management.

However, some SaaS firms are now also looking to leverage product usage insights to a greater extent than ever before. Understanding how customers are interacting with the platform, identifying power users, and tracking feature adoption provides invaluable insights into customer stickiness, potential for upselling, and overall product value.

Looking ahead

Looking ahead, the role of data in shaping SaaS valuations will only intensify. We anticipate that the level of scrutiny and the expectation for data maturity and insightful analysis will continue to rise. Gone are the days of presenting high-level metric summaries; investors will increasingly demand granular insights and a clear understanding of the ‘why’ behind the numbers. When it comes to performance and trends, just saying profitability has grown by X% year on year is no longer enough - it needs to be evidenced by granular data and solid analytics.

Investors want to know what’s working now and how your company can scale post-acquisition. By providing the context behind the metrics, it makes it easier to showcase opportunities for further growth, with potential investors being able to leverage these data “assets” to underpin their investment cases. With higher investor expectations, those who fail to do so risk undermining their valuation potential or, worse still, failing to secure the deal.

Furthermore, I believe that companies will need to start demonstrating how they are leveraging data to capitalize on the value that advanced analytics can bring. This could range from using AI-powered analytics to identify at-risk customers to employing machine learning to drive new business growth and customer expansion.

Even while there may be applications of AI tools in the SaaS space that aren’t necessarily tied to a firm’s data, most of these revenue-driving applications of advanced analytics and machine learning are only possible when the fundamentals are already firmly in place.

Building compelling value

So, how can SaaS firms proactively use data to build a compelling value story that resonates with potential acquirers? It boils down to not just making data a strategic priority but building the data policies, expertise and infrastructure you need into the fabric of your SaaS business.

Not everything has to be in place from day one; rather, you need to create a strategy that will enable you to ramp up to gathering all the critical data points you will need to answer every question an investor will ultimately ask. Doing this also lays the foundations to take advantage of the latest generative AI advances. As mentioned, AI applied to a shaky data foundation is unlikely to get you results, but applied to the right data foundations it can transform the value of your business.

Luckily, the data points that PE firms and other potential investors now really value are the same insights that will make a fundamental improvement to how effectively you make decisions as your SaaS startup scales. The important thing to remember with any data project is to start with the questions you want to answer. This means understanding modern investors. Ask yourself, what metrics, beyond simple revenue figures, will tell the story of your company’s success and potential?

Aside from the core metrics already mentioned, there may be further opportunities to demonstrate differentiation. It could be the diversity of your customer base - both geographically and by sector. Or it could be that the cost of serving an additional customer and the automation of key processes provide compelling evidence of scalability.

When you have a clear picture of where your real strength and USP exists, the next step is to develop the data collection, management and analysis systems and policies that will prove what you know to investors.

Further down the line

Further down the line, it’s likely that there will also be a strong business case for investment in upskilling and retraining staff across the board.

This should include everyone, including all senior teams. Even today, it still surprises me how few founders and business owners can understand and interpret their core business data, instead relying on a handful of experts. After all, it’s impossible to know what you don’t know – and a second-hand account of somebody else’s understanding, no matter how advanced it may be, could never substitute for your own personal analysis.

By building up your own expertise now, you and your senior team will be best positioned to demonstrate a compelling equity narrative that results in the highest possible valuation at the point of exit.

We've compiled a list of the best business intelligence platforms.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Today's NYT Connections: Sports Edition Hints and Answers for May 23, #242

CNET News - 5 hours 43 min ago
Hints and answers for the NYT Connections: Sports Edition puzzle, No. 242, for May 23.
Categories: Technology

Today's NYT Mini Crossword Answers for Friday, May 23

CNET News - 5 hours 49 min ago
Here are the answers for The New York Times Mini Crossword for May 23.
Categories: Technology

Marvel Rivals' Sharknado Team-Up Ability Cements the Game's Fun Direction

CNET News - Thu, 05/22/2025 - 21:00
This is a wacky, wild superhero game at its core -- and superheroes aren't perfectly balanced.
Categories: Technology

Enterprise Communication Evolution: Avaya's Infinity Platform Bridges the Gap Between What’s Needed Today and Expected Tomorrow

TechRadar News - Thu, 05/22/2025 - 20:50

Enterprises find themselves at a pivotal moment in communication technology, facing a difficult decision: embrace modern technology or protect their investments in existing systems. This has created a divide between all-new cloud solutions and approaches that work with the infrastructure organizations already have in place. Avaya's new Infinity platform solves this dilemma by offering a way to do both.

Bridging Technological Divides

The enterprise communication technology landscape has fragmented into distinct camps. On one side stand cloud-native solutions promising flexibility and innovation but requiring complete system replacement. On the other, traditional vendors offer incremental improvements to on-premise systems without fundamentally reimagining their architecture.

Our approach with Avaya Infinity platform targets the substantial middle ground with a hybrid solution for enterprises seeking modernization without abandoning functional infrastructure investments. This hybrid model acknowledges a fundamental reality: most large organizations operate complex technology ecosystems built over decades, making complete replacements impractical regardless of the benefits.

Differentiated Architecture

What differentiates Avaya Infinity platform is its architectural approach. It’s a secure platform that ensures compliance, deployment flexibility, and top-tier performance — a single code base across on-prem, cloud, and hybrid environments. Rather than forcing customers into two distinct choices, Avaya Infinity platform offers:

  • Modern, Secure, Ready-to-Use Architecture: The unified platform with a single code base delivers the flexibility, security, and control that large enterprises expect. This approach ensures data privacy, regulatory compliance, and unmatched scalability. The modular design enables organizations to activate specific capabilities without implementing the entire platform—essential for phased adoption strategies.
  • Layered, Intelligent Orchestration: AI capabilities function as an enhancement layer across both cloud and on-premises components, end-to-end, allowing intelligence to flow throughout the platform regardless of where components physically reside. It unifies AI, native applications, and disparate systems, whether they’re from Avaya, our partners, or enterprises’ own existing infrastructure. This empowers enterprises with a seamless, single-source approach to business agility and desired outcomes.
  • Data-Driven Customization: Enterprises can customize experiences for their customers, contact center agents, and employees by leveraging rich data insights. With intelligent engagement tools, this platform enables hyper personalization at every touchpoint, driving satisfaction and loyalty.

This architecture addresses the realities enterprises face in the contact center. The vast majority of organizations simply cannot afford operational disruption during technological transformation, yet they’re also unable to ignore competitive pressure to implement AI-powered experiences.

The Strategic Benefits

Avaya Infinity platform offers a hybrid solution that enables organizations to:

  • Extend the value of existing investments while incrementally introducing new capabilities
  • Deploy AI capabilities selectively based on specific business needs and readiness
  • Scale cloud adoption at a pace aligned with organizational change capacity
  • Maintain operational stability throughout transformation processes

For those managing customer experience strategies, this approach transforms the contact center into a connection center – connecting channels (voice and digital), connecting insights (data and behavior), connecting technologies (unifying AI, applications and disparate systems), and connecting workflows (delivering hyper personalized experiences). When customer interactions generate not just service outcomes but actionable intelligence, every conversation becomes a source of competitive advantage.

Balancing Innovation and Stability

The enterprise technology landscape has historically swung between innovation cycles and stability periods. Today's environment is unique in demanding both simultaneously—rapid innovation in customer experience alongside operational stability in core systems.

Avaya Infinity platform embraces this hybrid reality to offer a compelling vision: transformation without operational upheaval. Its architecture builds on existing investments while enabling future capabilities, reflecting that for most enterprises, technology evolution occurs on a continuum rather than through discrete revolutions.

The Path Forward

Avaya Infinity platform supports sustainable transformation strategies using on-premise investments while systematically introducing AI-powered innovations. It delivers what enterprises need today and expect tomorrow.

Watch this video to learn more about Avaya Infinity platform and contact an Avaya expert to request a demo here.

Categories: Technology

Yes, an Elden Ring Live-Action Movie Directed by Alex Garland Is Coming

CNET News - Thu, 05/22/2025 - 20:26
No details or release date, but boy will it be cool to see Malenia wipe the floor with someone else for a change.
Categories: Technology

Why Google working with Warby Parker and Gentle Monster gives me confidence about the future of smart glasses

TechRadar News - Thu, 05/22/2025 - 19:30

Google's unveiling of a new line of AI-fueled smart glasses built on the Android XR platform was only one of dozens of announcements at Google I/O this year. Even so, one facet in particular caught my eye as more important than it might have seemed to a casual viewer.

While the idea of wearing AI-powered lenses that can whisper directions into your ears while projecting your to-do list onto a mountain vista is exciting, it's how you'll look while you use them that grabbed my attention. Specifically, Google's partnership with Warby Parker and Gentle Monster to design their new smart glasses.

The spectre of Google Glass and the shadow cast by the so-called Glassholes wearing them went unmentioned, but it's not hard to see the partnerships as part of a deliberate strategy to avoid repeating the mistakes made a decade ago. Wearing Google Glass might have said, “I’m wearing the future,” but it also hinted, “I might be filming you without your consent.” No one will think that Google didn't consider the fashion aspect of smart glasses this time. Meta’s Ray-Ban collaboration is based on a similar impulse.

If you want people to wear computers on their faces, you have to make them look good. Warby Parker and Gentle Monster are known for creating glasses that appeal to millennials and Gen Z, both in look and price.

"Warby Parker is an incredible brand, and they've been really innovative not only with the designs that they have but also with their consumer retail experience. So we're thrilled to be partnered with them," said Sameer Samat, president of Google’s Android Ecosystem, in an interview with Bloomberg. "I think between Gentle Monster and Warby Parker, they're going to be great designs. First and foremost, people want to wear these and feel proud to wear them."

Smart fashion

Wearables are not mini smartphones, and treating them that way has proven to be a mistake. Just because you want to scroll through AR-enhanced dog videos doesn't mean you don't want to look good simultaneously.

Plus, smart glasses may be the best way to integrate generative AI like Google Gemini into hardware. Compared to the struggles of the Humane AI Pin, the Rabbit R1, and the Plaud.ai NotePin, smart glasses feel like a much safer bet.

We already live in a world saturated with wearable tech. Smartwatches are ubiquitous, and wireless earbuds also have microphones and biometric sensors. Glasses occupy a lot of your face's real estate, though. They're a way people identify you far more than your watch. Augmented reality devices sitting on your nose need to be appealing, no matter which side of the lenses you look at.

Combine that with what the smart glasses offer wearers, and you have a much stronger product. They don't have to do everything, just enough to justify wearing them. The better they look, the less justification you need for the tech features.

Teaming up with two companies that actually understand design shows that Google understands that. Google isn’t pretending to be a fashion house. They’re outsourcing style strategies to people who know what they're doing. Google seems to have learned that if smart glasses are going to work as a product, they need to blend in with other glasses, not proclaim to the world that someone is wearing them.

How much they cost will matter, as setting smart glasses prices to match high-end smartphones will slow adoption. But if Google leverages Warby Parker and Gentle Monster’s direct-to-consumer experience to keep prices reasonable, they might entice a lot more people, and possibly undercut their rivals. People are used to spending a few hundred dollars on prescription glasses, so a reasonably sized extra charge for AI will be just another perk, like polarized prescription sunglasses.

Success here might also ripple out to smaller but fashionable eyewear brands. Your favorite boutique frame designer might eventually offer 'smart' as a category, like they do with transition lenses today. Google is making a bet that people will choose to wear technology if it looks like something they would choose to wear anyway, and a bet on people wanting to look good is about as safe a bet as I can imagine.

You might also like...
Categories: Technology

Pages

Subscribe to The Vortex aggregator - Technology