
The hidden mathematics of AI: why your GPU bills don't add up

Mon, 08/11/2025 - 03:40

There's a calculation that every AI executive should know by heart, but most have never done: an on-premises GPU server costs roughly the same as six to nine months of renting equivalent cloud capacity.

Given that hardware typically runs for three to five years, the mathematics are stark, yet somehow this isn't common knowledge in boardrooms making million-pound infrastructure decisions.

The issue stems from a fundamental mismatch between how we think about AI costs and how they actually accumulate. The operational expenditure over capital expenditure model feels intuitive when you pay as you go, scale as needed, and avoid big upfront commitments.

But AI workloads break these assumptions in ways that make traditional cloud economics misleading.

What the cloud isn't telling you

For example, renting a single NVIDIA H100 GPU instance from a hyperscaler cloud provider can cost around $8/hour, or over $5,500 per month. Over 12 months, that's upwards of $65,000.

By contrast, purchasing equivalent hardware outright might cost around $30,000 to $35,000, with three to five years of usable life. Add power, cooling, and maintenance and you still come out ahead after just six to nine months of usage. Plus, you own the hardware, so you don’t have to return it after 12 months.

But the pricing hierarchy is more complex than it appears. While neocloud providers like Fluidstack offer H100s for around $2/hour, hyperscalers charge closer to $8/hour, making the on-premises case against hyperscalers even stronger.

The real-world comparison gets harder to ignore when you consider actual deployments: 8xH100 systems from Dell or Supermicro cost around $250,000, versus $825,000 for three years of equivalent hyperscaler capacity (even with reserved pricing). NVIDIA's own DGX systems carry a punishing 50-100% markup over these already substantial prices.
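The break-even arithmetic behind these figures is simple enough to sketch. The following calculation uses the article's round numbers as assumptions (an $8/hour hyperscaler rate, roughly 730 hours in a month, and a $30,000–$35,000 purchase price), and covers raw compute only, before power and cooling:

```python
# Break-even sketch using the article's round figures; all prices are the
# article's illustrative numbers, not quoted vendor pricing.
HOURS_PER_MONTH = 730  # average hours in a calendar month

def monthly_cloud_cost(rate_per_hour: float) -> float:
    """Cost of renting one GPU continuously for a month."""
    return rate_per_hour * HOURS_PER_MONTH

def breakeven_months(purchase_price: float, rate_per_hour: float) -> float:
    """Months of continuous cloud rental that equal buying the hardware outright."""
    return purchase_price / monthly_cloud_cost(rate_per_hour)

monthly = monthly_cloud_cost(8.0)        # ~ $5,840/month
annual = monthly * 12                    # ~ $70,000/year, versus ~$30k-$35k to buy
months_low = breakeven_months(30_000, 8.0)   # ~ 5.1 months
months_high = breakeven_months(35_000, 8.0)  # ~ 6.0 months
```

On raw compute alone, the purchase pays for itself in roughly five to six months of continuous use; folding in power, cooling, and maintenance pushes the figure toward the six-to-nine-month range cited above.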

The missing numbers in most AI budgeting conversations represent real savings, not theoretical ones. The problem compounds when you examine specific use cases.

Consider training runs. Most cloud providers only guarantee access to large GPU clusters if you reserve capacity for a year or more. If your training run only needs two weeks, you're still paying for the other 50 weeks.

Meanwhile, inference demands create their own mathematical puzzle. Token-based pricing for large language models means costs fluctuate with the unpredictability of the models themselves, making budget forecasting feel more like weather prediction than financial planning.
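To see why forecasting is hard, consider a toy billing model. The per-million-token rates below are hypothetical, chosen only to illustrate the mechanism, not any provider's actual tariff:

```python
# Illustrative token-billing sketch; the $/million-token rates are hypothetical.
def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float = 3.0, out_price: float = 15.0) -> float:
    """Dollar cost of one request, at per-million-token rates."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# The same prompt can yield a terse or a verbose completion,
# and the bill moves with it.
terse = request_cost(500, 100)     # short answer
verbose = request_cost(500, 2000)  # long answer, ~10x the cost
```

Because output length is a property of the model's behavior rather than of your code, per-request cost can swing by an order of magnitude, which is what makes monthly forecasts so slippery.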

Elasticity, but with fine print

The cloud’s promise of elastic scale feels tailor-made for AI – until you realize that scale is constrained by quota limits, GPU availability, and cost unpredictability. What’s elastic in theory often requires pre-booking in practice and cash upfront to make costs acceptable.

And once your usage grows, discounts come with multi-year commitments that mirror the CapEx models cloud was meant to replace.

It's not that the cloud isn't scalable. It's that the version of scale AI teams need (cost-efficient, high-throughput, burstable compute) isn’t always what’s on offer.

The irony runs deeper than pricing. Cloud providers market flexibility as their core value proposition, yet AI workloads, which are the most computationally demanding applications of our time, often require the least flexible arrangements.

Long-term reservations, capacity planning, and predictable baseline loads start to look suspiciously like the traditional IT procurement cycles cloud computing was supposed to eliminate. The revolution becomes circular.

Hidden costs, visible friction

The hidden complexity emerges in the details. Teams preparing for usage spikes often reserve more capacity than they use, paying for idle compute "just in case."

Data migration between providers can consume non-trivial amounts of engineering time, representing an opportunity cost that rarely appears on infrastructure budgets but significantly impacts small, time-constrained teams.

These opportunity costs compound over time. When teams switch between cloud providers – driven by pricing changes, performance issues, or compliance needs – they often face weeks of rewrites, re-optimizations, and revalidations.

It’s not just the IT infrastructure that changes: the code that manages it must be rewritten, internal expertise in that provider evaporates, and deployment pipelines need rebuilding. For lean teams, this can mean delayed product updates or missed go-to-market windows, which rarely get factored into the headline GPU bill.

Perhaps most surprisingly, the operational burden of managing on-premises infrastructure has been systematically overstated. Unless you're operating at extreme scale, the complexity is entirely manageable through in-house expertise or through managed service providers.

The difference is that this complexity is visible and planned for, rather than hidden in monthly bills that fluctuate unpredictably.

From budgeting to strategy

Smart companies are increasingly adopting hybrid approaches that play to each infrastructure model's strengths. They use owned hardware for predictable baseline loads like the steady-state inference that forms the backbone of their service.

Cloud resources handle the spikes: time-of-day variations, customer campaign surges, or experimental workloads where spot pricing can soften the blow.
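The hybrid split can be expressed as a back-of-envelope model. The figures below are assumptions for illustration – roughly $1,000/month per owned GPU amortized over its life including overheads, and $8/hour for cloud bursting – not vendor pricing:

```python
# Back-of-envelope hybrid model; all prices are illustrative assumptions.
HOURS_PER_MONTH = 730

def hybrid_monthly_cost(baseline_gpus: int, peak_gpus: int, burst_hours: float,
                        owned_monthly: float = 1_000.0,
                        cloud_hourly: float = 8.0) -> float:
    """Owned hardware carries the baseline; cloud covers only the hours above it."""
    owned = baseline_gpus * owned_monthly
    burst = (peak_gpus - baseline_gpus) * cloud_hourly * burst_hours
    return owned + burst

def all_cloud_cost(peak_gpus: int, cloud_hourly: float = 8.0) -> float:
    """Renting enough capacity for the peak, around the clock."""
    return peak_gpus * cloud_hourly * HOURS_PER_MONTH

# Eight owned GPUs for the baseline, bursting to twelve for ~100 hours a month:
hybrid = hybrid_monthly_cost(8, 12, 100)   # owned baseline + cloud burst
everything_rented = all_cloud_cost(12)     # renting for the peak all month
```

Under these assumptions the hybrid bill comes to a fraction of renting peak capacity full-time, which is the financial logic behind the baseline/burst split.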

Companies taking this approach have moved beyond anti-cloud thinking toward financially literate engineering.

The cloud remains invaluable for rapid experimentation, geographic scaling, and genuinely unpredictable workloads. But treating it as the default choice for all AI infrastructure ignores the mathematical reality of how these systems actually get used.

Companies getting this calculation right are doing more than saving money. They're building more sustainable, predictable foundations for long-term innovation.

These conversations aren’t just technical; they’re strategic. CFOs may favor cloud for its clean OpEx line, while engineers bear the brunt of month-end cost spikes, poor support, and FinOps teams chasing them to delete resources.

That disconnect can lead to infrastructure decisions driven more by accounting conventions than real performance or user experience. Organizations getting this right are the ones where finance and engineering sit at the same table, reviewing not just cost, but throughput, reliability, and long-term flexibility. In AI, aligning financial and technical truths is the real unlock.

Understanding these hidden mathematics won’t just help you budget better; it’ll make sure you’re building infrastructure that works the way AI actually does, freeing up headspace to focus on what matters most: building better, faster, and more resilient AI products.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Build an online store in seconds? One of our favorite website builders is adding ecommerce capabilities to its vibe coding platform

Mon, 08/11/2025 - 03:33
  • Hostinger Horizons integrates ecommerce directly, removing reliance on plugins or third-party connections.
  • The update uses a tested ecommerce engine from Hostinger’s existing website builder system.
  • Users can add up to 600 products with no extra transaction fees charged.

Hostinger has introduced built-in ecommerce functionality to its Horizons vibe coding platform, making it the first such tool in its category to offer a complete online store setup without relying on third-party integrations.

The website builder company claims this update removes the need for plugins, lengthy configuration, or technical expertise, which allows users to establish an online retail presence in minutes.

Users can list up to 600 products, configure over 100 payment gateway options, set up shipping methods, and apply taxes or discounts as needed.

Reducing time and effort for digital storefronts

“We’re building Horizons for people who don’t want to worry about technical setup or to have to figure out how ecommerce works," said Dainius Kavoliūnas, Head of Hostinger Horizons.

"Creating an online store was technically possible before, but it took too much time and effort - fortunately, a tested solution was right next door – our Hostinger Website Builder team already had a powerful ecommerce engine. We just needed to integrate it."

Vibe coding is a relatively new approach to web development that replaces manual coding with conversational AI prompts - all users need to do is describe their desired website or application in natural language, and the platform generates a ready-to-publish version.

Additionally, the Horizons update integrates an ecommerce platform directly into the interface, enabling seamless store management without needing to leave the system.

Hostinger states there are no additional transaction fees, and inventory management can be done manually without consuming paid AI prompts.

While this eliminates recurring costs for simple updates like price changes or stock adjustments, scaling beyond the provided capacity or customizing complex workflows may still require additional resources.

The inclusion of Hostinger’s existing ecommerce engine, previously part of its standalone website builder, suggests the company is repurposing proven infrastructure rather than introducing an untested solution.

This could offer some reliability, but whether it meets the expectations of experienced merchants remains to be seen.

Although AI can be used for storefront customization, such as rearranging products or altering visual elements, the long-term success of any online shop will still rely heavily on marketing, product quality, and customer service.

These factors are not automatically solved by a fast setup process.

Hostinger launched Horizons in March 2025 to enable non-technical users to build and publish websites or applications through simple text prompts.

Earlier updates included generative engine optimization, manual editing tools, free automatic error correction, and database integration.

“After analyzing 200,000 prompts, we learned that business websites are the most popular use case among Hostinger Horizons clients, representing around a third of all projects built with the AI tool.”

“Understanding that our clients want to sell online, we delivered an easy, intuitive ecommerce feature,” Kavoliūnas added.


Don't stop at basic protections; make ongoing training a priority

Mon, 08/11/2025 - 02:36

Fifty years ago, it was heists like the 1971 Baker Street bank robbery that had the power to shock the nation. Now, in the digital world, heists look starkly different and cybersecurity threats are constant, with banks like NatWest facing a “continuous arms race” with around 100 million cyber-attacks every month. What used to be gangs of robbers digging tunnels and smuggling deposit boxes full of cash are now groups of hackers sending phishing emails and holding some of the most notable companies to ransom for hundreds of millions of dollars.

This transition from physical to digital theft is evident. No longer confined to vaults and getaway cars, today's high-stake heists are executed remotely, by online threat actors. These modern-day criminals operate across borders, targeting vulnerabilities in systems and human behavior to extract data and money. The sheer volume and relentless nature of these digital assaults, as exemplified by financial institutions battling millions of cyber-attacks monthly, highlight a new era of crime.

The growing problem of cyber-attacks

Cyber-attacks are a growing problem across a growing number of sectors, and confronting this escalating issue is vital. It’s not just banks facing a constant barrage; cyber threats are increasing at an exponential rate while becoming more sophisticated and targeted.

Data breaches have hit a myriad of industries: from luxury brands like Dior and supermarkets like M&S, to cryptocurrency exchange Coinbase and UK government organization Legal Aid.

The dangers to personal data are being felt across all sectors, at all digital touchpoints. Amid this battleground of immediate cyber threats comes a growing demand for robust security solutions that address company concerns.

From advanced antivirus technologies to endpoint backup software, AI-powered security is evolving rapidly to stay ahead of such attacks - and it’s essential that companies invest in these defenses in order to stay more than one step ahead.

Evolution of technology

As technology evolves at a rapid pace, companies must keep up with advancements made by cyber-attackers. As businesses of all sizes continue to embrace digital transformation, the need to strengthen their cybersecurity grows increasingly critical.

The UK Government’s recently published Cyber Governance Code of Practice highlights that management of cyber risks is vital for modern businesses to function, and effective management requires collective input from across an organization. This Code of Practice and governance framework package guides boards and directors in managing digital risks and safeguarding their businesses and organizations from cyberattacks.

The framework encourages companies to take four employee-focused actions: foster a cybersecurity culture; ensure clear policies support a positive cybersecurity culture; improve their own cyber literacy through training; and use suitable metrics to verify the organization has an effective cybersecurity training, education, and awareness program.

The report is a clear reminder that the human firewall, that is, the employees who encounter an attack and respond, is just as important as technological defenses.

More than a simple fix, a culture shift is needed

It’s not enough to roll out generic training. The reality is that in today’s world, one wrong click can bring a business to a complete halt. According to the latest insights, ransom payments worldwide in 2024 totaled approximately $813.55 million.

When faced with a ransom demand, companies know that refusing to pay runs the risk of their customers’ personal information being leaked publicly – bringing financial penalties and legal payouts, not to mention reputational damage.

Addressing the threat of cyber-attacks must be embedded in a company’s culture: if threat actors succeed, the impact is felt not only company-wide but across the ecosystem within which the organization operates.

Leadership and security

Organizations can bolster their security by cultivating strong leadership, providing tailored training, and building a proactive security culture to create a ‘human firewall’ of colleagues armed with know-how.

Employees of all skillsets and seniorities should undergo comprehensive, ongoing cyber awareness training to drive defenses forward and cultivate a mindful culture.

When employees are provided with the knowledge and tools to maintain awareness of the dangers their company is facing, they can be the most effective method to keep the business secure.

Building a mindful culture

Building a mindful culture can be complemented by a Zero Trust approach, which creates a robust defense against evolving cyber threats. This strategic approach mandates rigorous verification for all access requests, irrespective of their origin or the user's location within the network, eliminating a significant portion of potential threats.
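At its core, Zero Trust reduces to a single rule: no request is trusted because of where it comes from. A minimal sketch of that rule, with hypothetical helper names (`verify_identity`, `check_policy`) standing in for a real identity provider and policy engine:

```python
# Minimal Zero Trust sketch; verify_identity and check_policy are hypothetical
# stand-ins for a real identity provider and policy engine.
def authorize(request: dict, verify_identity, check_policy) -> bool:
    """Verify every request on its own merits -- origin confers no trust."""
    if not verify_identity(request["user"], request["credentials"]):
        return False  # unauthenticated, even if the request came from 'inside'
    # Authenticated users still only get what policy explicitly allows.
    return check_policy(request["user"], request["resource"], request["action"])
```

The point is the absence of any "internal network" shortcut: an employee's laptop and an unknown external host face exactly the same checks.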

For example, when an employee receives an email requesting sensitive information or a link to a suspicious website, they should be trained to recognize it as a potential phishing attempt right away, verify the sender's identity, and report the email to the IT department for further investigation.

This proactive stance, ingrained through a Zero Trust philosophy and continuous education, significantly reduces the likelihood of successful breaches. It’s better safe than sorry, and in the realm of cybersecurity, this means being diligent about taking the extra steps to fortify an organization's digital defenses.

Don't stop at basic protections

Don't stop at basic protections; make ongoing training a priority.
Defenses can’t stop at antivirus technology and endpoint protection, and training isn’t a one-time solution. While these are necessities, they are simply not enough for the twenty-first-century heist, as businesses continue to battle millions of cyber-attacks each month.

As threats advance or teams become complacent, ongoing phishing simulations, tests and education are key in maintaining a robust human firewall. Companies must invest in technology and ongoing training to equip employees across all roles and levels with the skills and awareness to stay alert. A company’s greatest weapon can be its workforce, if leveraged.

Cybersecurity needs tech, but it's nothing without people who are well trained to understand the latest attack methods and protect against the digital transition's inherent risks.


Open-source AI is central to safe development and deployment

Mon, 08/11/2025 - 01:51

An old proverb famously states, "If you want to go fast, go alone. If you want to go far, go together."

This is especially true when it comes to artificial intelligence, where breakneck advances happen seemingly every day. And while individual companies are rapidly fielding their own AI-powered chatbots and analysis tools, real long-term improvement and innovation in this new scientific frontier often requires broad collaboration in developing open and trusted AI systems that produce accurate, reliable, and safe outputs.

Early conventional wisdom held that only so-called 'closed' AI systems controlled by one company could be safe and trusted. Some argued that open models would inevitably undermine safety or lead to misuse. But experience is quickly showing that open source models and the collaboration they bring are a powerful tool for promoting security and trust.

The power of collaboration

Collaboration is a powerful force for AI advancement because it fosters diverse perspectives and capabilities. When it comes to AI, collaboration can, in many cases, be optimized by leveraging open source to reduce bias, increase transparency, gain greater control over our data, and ultimately, accelerate time to innovation.

According to McKinsey, organizations that view AI as essential to their competitive advantage are far more likely to use open source AI models and tools than organizations that do not. Open source AI models, tools, and frameworks enable developers and researchers to build upon existing work, rather than starting from scratch, to achieve higher-quality outputs more quickly.

The open source software approach thrives on community contributions, bringing together individuals, companies, and organizations from around the globe to collaborate on shared goals. This is where organizations like the AI Alliance—which was spearheaded by IBM and others, and is comprised of technology creators, developers, and adopters collaborating to advance safe and responsible AI—play a crucial role.

By pooling resources and knowledge, the AI Alliance provides a platform for sharing and developing AI innovations. This meritocracy yields immediate value, both for the broader technology ecosystem and the world at large.

Why the AI Alliance matters today

There are many practical and ethical reasons for such broad-based AI partnerships. AI research and development require substantial resources, including data, computing power, and expertise. The availability of open source models keeps costs down, broadening choices and helping to prevent the concentration of the AI industry in the hands of a few major players.

The AI Alliance also offers a forum to hold honest conversations among like-minded organizations about AI-related legislation and its impacts on greater innovation and adoption.

In a short time, the AI Alliance has blossomed into a vibrant ecosystem, bringing together a critical mass of data, tools, and talent. Today, more than 140 organizational members from 23 countries collaborate through the alliance to address some of the most pressing challenges in AI.

Open source is particularly critical to members of the alliance, including Databricks, which has long championed the democratization of AI. We’ve open sourced many critical big data processing and analytics projects, like the Delta Lake, MLflow, and Unity Catalog tools that underpin many large data and AI deployments today.

When it comes to today’s AI ecosystem, we need to ensure that everyone, including academics, researchers, non-profits, and beyond, can access and understand the best AI tools and models. The more we all understand these models and how to utilize them, the more we can share ideas on how to safely shape the future of AI and subsequently use it to solve today’s toughest challenges.

But we can’t do it alone.

Collaborate, code, and create the future of AI

We established a policy working group within the Alliance to focus not only on advocacy but also on developing responses to government requests that could impact open-source AI development. For example, last year, we contributed to the landmark National Telecommunications and Information Administration study examining potential benefits and risks of open weight frontier AI models.

The final NTIA report strongly underscored the valuable role of open models in today’s AI ecosystem, while also highlighting the need for vigilant monitoring and ongoing evaluation of policies to manage emerging risks in the future.

Our intention is to ensure that AI regulation is thoughtfully crafted so that open source AI thrives. Organizations like the AI Alliance have laid a solid foundation for international cooperation, but it's just the beginning.

If you work at a business that prioritizes artificial intelligence, you too can be part of this important work. Start by developing educational programs, workshops, and training sessions – and joining AI-related projects and communities – to share knowledge and build tools that benefit others.

You can create and share your own open source projects, such as datasets, pre-trained models, or utilities, which build on a foundation of AI fairness, transparency, and accessibility to ensure the benefits of AI are widely distributed. Check out GitHub or Hugging Face to look for AI/ML projects that align with your skills and interests.

The advent of AI is a pivotal moment in our collective human history. Experience shows that collaboration will be key to our success in advancing AI innovation with safety and trust. We must move into this promising future with open arms and open software models and tools, adequately prepared for the challenges ahead. Let's go far—together.


Not so smart anymore - researchers hack into a Gemini-powered smart home by hijacking...Google Calendar?

Sun, 08/10/2025 - 14:51
  • Experts warn a single calendar entry can silently hijack your smart home without your knowledge
  • Researchers proved AI can be hacked to control smart homes using only words
  • Saying “thanks” triggered Gemini to switch on the lights and boil water automatically

The promise of AI-integrated homes has long included convenience, automation, and efficiency. However, a new study from researchers at Tel Aviv University has exposed a more unsettling reality.

In what may be the first known real-world example of a successful AI prompt-injection attack, the team manipulated a Gemini-powered smart home using nothing more than a compromised Google Calendar entry.

The attack exploited Gemini’s integration with the entire Google ecosystem, particularly its ability to access calendar events, interpret natural language prompts, and control connected smart devices.

From scheduling to sabotage: exploiting everyday AI access

Gemini, though limited in autonomy, has enough “agentic capabilities” to execute commands on smart home systems.

That connectivity became a liability when the researchers inserted malicious instructions into a calendar appointment, masked as a regular event.

When the user later asked Gemini to summarize their schedule, it inadvertently triggered the hidden instructions.

The embedded command included instructions for Gemini to act as a Google Home agent, lying dormant until a common phrase like “thanks” or “sure” was typed by the user.

At that point, Gemini activated smart devices such as lights, shutters, and even a boiler, none of which the user had authorized at that moment.

These delayed triggers were particularly effective in bypassing existing defenses and confusing the source of the actions.
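The delayed-trigger pattern can be illustrated with a toy simulation. This is not Gemini's code or the researchers' actual payload; it only shows why an assistant that naively folds untrusted calendar text into its working context can be armed by one event and fired later by an innocuous word:

```python
# Toy simulation (not Gemini) of the delayed-trigger "promptware" pattern:
# untrusted calendar text plants a dormant instruction, and an innocuous
# phrase from the user fires it later.
class ToyAssistant:
    def __init__(self):
        self.pending_action = None  # armed-but-dormant injected instruction

    def summarize_calendar(self, events):
        """Naively 'reads' events; a malicious one plants a dormant command."""
        for event in events:
            if "WHEN USER SAYS" in event:  # injected payload, not a real meeting
                trigger, _, action = event.partition(" DO ")
                word = trigger.split()[-1].strip("'\"")
                self.pending_action = (word, action)
        return f"You have {len(events)} events today."

    def chat(self, user_message):
        """A harmless word like 'thanks' fires the dormant instruction."""
        if self.pending_action and self.pending_action[0] in user_message.lower():
            action = self.pending_action[1]
            self.pending_action = None
            return f"[device] {action}"  # acts without fresh authorization
        return "OK."
```

Nothing in the user's "thanks" looks like a command, which is exactly what made the delayed triggers so effective at bypassing defenses and obscuring the source of the action.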

This method, dubbed “promptware,” raises serious concerns about how AI interfaces interpret user input and external data.

The researchers argue that such prompt-injection attacks represent a growing class of threats that blend social engineering with automation.

They demonstrated that this technique could go far beyond controlling devices.

It could also be used to delete appointments, send spam, or open malicious websites, steps that could lead directly to identity theft or malware infection.

The research team coordinated with Google to disclose the vulnerability, and in response, the company accelerated the rollout of new protections against prompt-injection attacks, including added scrutiny for calendar events and extra confirmations for sensitive actions.

Still, questions remain about how scalable these fixes are, especially as Gemini and other AI systems gain more control over personal data and devices.

Unfortunately, traditional security suites and firewall protection are not designed for this kind of attack vector.

To stay safe, users should limit what AI tools and assistants like Gemini can access, especially calendars and smart home controls.

Also, avoid storing sensitive or complex instructions in calendar events, and don’t allow AI to act on them without oversight.

Be alert to unusual behavior from smart devices and disconnect access if anything seems off.

Via Wired


3D printing and AI will bring in 'new era of nuclear construction' - but how safe is it?

Sun, 08/10/2025 - 12:45
  • 3D printers built complex concrete parts faster, yet long-term durability remains largely untested
  • Oak Ridge finished reactor shielding in days, raising speed-versus-safety debates across the industry
  • Advanced construction methods rely more on software, reducing labor yet increasing system dependence

In East Tennessee, a 3D printer arm has been used to build concrete shielding columns for a nuclear reactor.

The work is part of the Hermes Low-Power Demonstration Reactor project, supported by the US Department of Energy, and marks a new direction in how nuclear infrastructure is built, with both 3D printing and AI tools playing major roles.

And according to Oak Ridge National Laboratory (ORNL), large parts of the construction were completed in just 14 days – work that could have taken several weeks using conventional methods.

Efficiency gains clash with engineering caution

The new method uses 3D printers to create detailed molds for casting concrete, even in complex shapes, with the goal of making construction faster, cheaper, and more flexible while relying more on US-based materials and labor.

AI tools also played a role in the project, as ORNL used the technology to guide parts of the design and building process.

These tools may help reduce human error and speed up work, especially when creating difficult or unique parts, but depending heavily on AI also raises questions. How can builders be sure these systems won’t make unnoticed mistakes? Who checks the decisions that are automated?

The project is also a response to rising energy demands - as AI systems and data centers use more power, nuclear energy is seen as a stable source to support them.

Some experts say that future AI tools may end up running on power from reactors they helped design, a feedback loop that could be both efficient and risky.

The use of 3D printing in this project makes it possible to build precise structures faster.

Still, it’s not yet clear how well these 3D-printed parts will hold up over time.

Nuclear reactors need to last for decades, and failure in any part of the structure could be dangerous. Testing and quality checks must keep up with the speed of new building methods.

For now, 3D printing and AI seem to offer powerful tools for the nuclear industry.

But while faster construction is a major benefit, safety must remain the top concern - this “new era” may bring improvements, but it will need close attention and caution at every step.

Via Tom's Hardware


Fresh Google Pixel Watch 4 leaks may give us our best look yet at the upgraded sensors and charging system

Sun, 08/10/2025 - 10:30
  • More Pixel Watch 4 information has leaked
  • We can see new sensors and charging contacts
  • The wearable should launch on August 20

The Pixel Watch 4 is almost certainly going to be unveiled alongside the Pixel 10 series and the Pixel Buds 2a on Wednesday, August 20 – though Google has only confirmed the date, not what's being launched – and a new leak gives us more information on the wearable.

Images posted to Reddit (via 9to5Google) show what look to be official marketing slides for the Pixel Watch 4, detailing features such as improved durability, battery life, and activity tracking accuracy – courtesy of a "Gen 3 sensor hub".

That would be an upgrade on the sensors we saw with the Google Pixel Watch 3, and should mean better precision in readings such as heart rate – though we won't know for sure until we've actually had an opportunity to try it out.

We also get another look at the rather unusual side charging system that showed up in an earlier leak, with charge contacts positioned on the side of the watch casing: it would appear this is how you'll be able to charge up the Pixel Watch 4.

'Technological advancements'

The Pixel Watch 3 was launched in August 2024 (Image credit: Google)

There's plenty of positive phrasing in these marketing materials, as you would expect. The watch apparently brings "significant technological advancements" over its predecessor, together with a "premium crafted design".

The battery life is listed as reaching 30 hours between charges, which is said to be a 25% boost over the current model. Better battery life had already been mentioned in previous leaks, so we're hopeful in that particular department.

There's also mention of the two expected watch sizes, 41mm and 45mm, along with Gemini integration and "dual frequency" GPS – which suggests the wearable will be faster and more accurate in reporting its location.

Together with the rest of the leaked information that's also emerged in recent days, it looks as though the Pixel Watch 4 could be an appealing prospect when it's finally confirmed – and perhaps worth a spot on our best smartwatches list.

You might also like
Categories: Technology

OpenAI has new, smaller open models to take on DeepSeek - and they'll be available on AWS for the first time

Sun, 08/10/2025 - 05:26
  • OpenAI’s new models run efficiently on minimal hardware, but haven’t been independently tested across real-world workloads
  • The models are designed for edge use cases where full-scale infrastructure isn’t always available
  • Apache 2.0 licensing may encourage broader experimentation in regions with strict data requirements

OpenAI has released two open-weight models, gpt-oss-120B and gpt-oss-20B, positioning them as direct challengers to offerings like DeepSeek-R1 and other large language models (LLMs) currently shaping the AI ecosystem.

These models are now available on AWS through its Amazon Bedrock and Amazon SageMaker AI platforms.

This marks OpenAI’s entry into the open-weight model segment, a space that until now has been dominated by competitors such as Mistral AI and Meta.

OpenAI and AWS

The gpt-oss-120B model runs on a single 80 GB GPU, while the 20B version targets edge environments with only 16 GB of memory required.
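A rough way to see how models of this size fit those hardware footprints is to count bytes per weight. The sketch below uses illustrative assumptions – roughly 4-bit quantized weights and ~20% overhead for caches and activations – rather than OpenAI's published figures:

```python
# Back-of-envelope memory estimate for serving a quantized LLM.
# The bit-width and overhead factor are assumptions, not vendor specs.

def model_memory_gb(params_billions, bits_per_weight, overhead_factor=1.2):
    """Approximate resident memory: weights plus ~20% for KV cache/activations."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1e9

# A 20B-parameter model at ~4 bits/weight fits within 16 GB...
print(f"20B @ 4-bit: ~{model_memory_gb(20, 4):.0f} GB")   # ~12 GB
# ...while 120B at the same precision needs a single 80 GB data-center GPU.
print(f"120B @ 4-bit: ~{model_memory_gb(120, 4):.0f} GB")  # ~72 GB
```

Under these assumptions the numbers line up with the stated requirements: ~12 GB for the 20B model against a 16 GB edge device, and ~72 GB for the 120B model against an 80 GB GPU.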

OpenAI claims both models deliver strong reasoning performance, matching or exceeding its o4-mini model on key benchmarks.

However, external evaluations are not yet available, leaving actual performance across varied workloads open to scrutiny.

What distinguishes these models is not only their size, but also the license.

Released under Apache 2.0, they are intended to lower access barriers and support broader AI development, particularly in high-security or resource-limited environments.

According to OpenAI, this move aligns with its broader mission to make artificial intelligence tools more widely usable across industries and geographies.

On AWS, the models are integrated into enterprise infrastructure via Amazon Bedrock AgentCore, enabling the creation of AI agents capable of performing complex workflows.

OpenAI suggests these models are suitable for tasks like code generation, scientific reasoning, and multi-step problem-solving, especially where adjustable reasoning and chain-of-thought outputs are required.

Their 128K context window also supports longer interactions, such as document analysis or technical support tasks.

The models also integrate with developer tooling, supporting platforms like vLLM, llama.cpp, and Hugging Face.

With features like Guardrails and upcoming support for custom model import and knowledge bases, OpenAI and AWS are pitching this as a developer-ready foundation for building scalable AI applications.

Still, the release feels partly strategic, positioning OpenAI as a key player in open model infrastructure, while also tethering its technology more closely to Amazon Web Services, a dominant force in cloud computing.

You might also like
Categories: Technology

Monster season 4: everything we know so far about the hit show's return to Netflix

Sun, 08/10/2025 - 03:00
Monster season 4: key information

- Yet to be officially confirmed by Netflix
- Will follow the story of Lizzie Borden
- Whole new cast expected
- No official trailer released yet
- No news on future seasons

Monster season 4 is coming, though the news is yet to be officially confirmed. The true crime anthology series has become a record breaker for Netflix, one of the best streaming services: season 1 reached one billion hours of viewing in its first 60 days, making Monster one of only four series to have achieved this.

Unsurprisingly, all focus is currently on the upcoming season 3, reportedly dropping on the streamer in October. Season 3 will focus on Ed Gein's story, played by Charlie Hunnam. But there's still plenty to say about season 4, such as how it will turn its attention to Lizzie Borden – an entirely different tale with the show's first female lead.

So, here's what we know so far about the next (next) instalment of Monster from the potential release date, possible cast, news, rumors and more.

Monster season 4: is there a release date?

Jeffrey Dahmer was the focus of Monster season 1 (Image credit: Netflix)

No, there's not a release date for Monster season 4 just yet, but that's not surprising since season 3 is yet to stream on Netflix.

But, according to What's on Netflix, creator Ryan Murphy revealed that season 3 is slated to drop in October.

And, for Monster season 4, Variety confirmed (although Netflix hasn't yet) that it is "already in the works" and is "currently prepping for a potential fall shoot".

With seasons 1 and 2 released in September, and season 3 supposedly arriving in October, I'd predict we won't see season 4 until September or October 2026.

Monster season 4: has a trailer been released?

Season 2 was called 'Monsters' focusing on the Menendez brothers (Image credit: Netflix)

There's no Monster season 4 trailer to share just yet, and that's because filming hasn't even commenced. With production rumored to begin in fall, I'd expect we won't see a trailer until well into 2026, closer to the predicted release date.

Monster season 4: predicted cast

A new cast for each season of Monster (Image credit: Netflix)

With each season of the anthology series following a different true crime story, the cast is always entirely new. So, predicting the Monster season 4 cast is almost impossible.

What we do know is that each season of Monster so far has starred big names in the lead roles. For season 1, Evan Peters was Jeffrey Dahmer. For season 2, the Menendez brothers were played by Cooper Koch and Nicholas Alexander Chavez.

And, as confirmed by Tudum, season 3 will see Charlie Hunnam play Ed Gein with supporting cast Laurie Metcalf, Tom Hollander and Olivia Williams.

For Monster season 4 then, there will be a female lead to play Lizzie Borden. But, who that is, we'll have to wait and see. I'll be sure to update here as soon as I hear more about the casting for this season.

Monster season 4: story synopsis and rumors

It's not the first time Lizzie's tale has been told (Image credit: Lifetime)

Full spoilers for Monster seasons 1 to 3 to follow.

Netflix's Monster depicts true crime stories with each season following a different case. For season 1, it was Jeffrey Dahmer. For season 2, Lyle and Erik Menendez. And for the upcoming season 3, Ed Gein.

And it has already been revealed that Monster season 4 will tell the story of Lizzie Borden. Her life and crimes, though, are a little different from those covered in the three seasons that came before. As the subject of the show's first female lead role, Lizzie Borden was tried and acquitted of the axe murders of her father and stepmother in 1892.

Now, if you've not heard of Lizzie Borden before, a quick internet search will no doubt give you all the information you need and thus, the plot of Monster season 4. But, in the interest of not ruining the entire season, I won't delve into all the details here.

It's not the first time Lizzie's tale has been told though, which is not entirely surprising considering how notorious a case it was for its time. There's 2015's The Lizzie Borden Chronicles, which saw Christina Ricci in the titular role. Or, 2018's Lizzie with Chloë Sevigny.

For Monster season 4 being a true crime retelling of the story, I imagine it'll be as tense and thrilling as the seasons that came before it.

Will there be more seasons of Monster?

Lizzie now, but who next? (Image credit: Roadside Attractions)

There are a few reasons why it's hard to speculate on future seasons of Monster: namely that season 3's release date is yet to be confirmed, and that while season 4 is reportedly happening, there's actually been no official word from Netflix... yet.

So, with this in mind, it seems unlikely we'll hear about any future seasons of Monster anytime soon. But, as such a resounding success on the streaming platform, and with an abundance of notorious true crime stories left to tell, there's always hope that Monster will continue for many more seasons to come.

For more Netflix-based coverage, read our guides to Nobody Wants This season 2, Stranger Things season 5, The Four Seasons season 2, and One Piece season 2.

Categories: Technology

Massive leak of over 115 million US payment cards caused by Chinese "smishing" hackers - find out if you're affected

Sun, 08/10/2025 - 00:04
  • Phishing attacks now bypass multi-factor authentication using real-time digital wallet provisioning tactics
  • One-time passcodes are no longer enough to stop fraudsters with mobile-optimized phishing kits
  • Millions of victims were targeted using everyday alerts like tolls, packages, and account notices

A wave of advanced phishing campaigns, traced to Chinese-speaking cybercriminal syndicates, may have compromised up to 115 million US payment cards in just over a year, experts have warned.

Researchers at SecAlliance revealed these operations represent a growing convergence of social engineering, real-time authentication bypasses, and phishing infrastructure designed to scale.

Investigators have identified a figure referred to as “Lao Wang” as the original creator of a now widely adopted platform that facilitates mobile-based credential harvesting.

Identity theft scaled through mobile compromise

At the center of the campaigns are phishing kits distributed through a Telegram channel known as “dy-tongbu,” which has rapidly gained traction among attackers.

These kits are designed to avoid detection by researchers and platforms alike, using geofencing, IP blocks, and mobile-device targeting.

This level of technical control allows phishing pages to reach intended targets while actively excluding traffic that might flag the operation.

The phishing attacks typically begin with SMS, iMessage, or RCS messages using everyday scenarios, such as toll payment alerts or package delivery updates, to drive victims toward fake verification pages.

There, users are prompted to enter sensitive personal information, followed by payment card data.

The sites are often mobile-optimized to align with the devices that will receive one-time password (OTP) codes, allowing for immediate multi-factor authentication bypass.

These credentials are provisioned into digital wallets on devices controlled by attackers, allowing them to bypass additional verification steps normally required for card-not-present transactions.

Researchers described this shift to digital wallet abuse as a “fundamental” change in card fraud methodology.

It enables unauthorized use at physical terminals, online shops, and even ATMs without requiring the physical card.

Researchers have observed criminal networks now moving beyond smishing campaigns.

There is growing evidence of fake ecommerce sites and even fake brokerage platforms being used to collect credentials from unsuspecting users engaged in real transactions.

The operation has grown to include monetization layers, including pre-loaded devices, fake merchant accounts, and paid ad placements on platforms like Google and Meta.

As card issuers and banks look for ways to defend against these evolving threats, standard security suites, firewall protection, and SMS filters may offer limited help given the precision targeting involved.

Given the covert nature of these smishing campaigns, there is no single public database listing affected cards. However, individuals can take the following steps to assess possible exposure:

  • Review recent transactions
  • Look for unexpected digital wallet activity
  • Monitor for verification or OTP requests you didn’t initiate
  • Check if your data appears in breach notification services
  • Enable transaction alerts

Unfortunately, millions of users may remain unaware their data has been exploited for large-scale identity theft and financial fraud, facilitated not through traditional breaches but through these targeted smishing campaigns.

Via Infosecurity

You might also like
Categories: Technology

How to watch Alien: Earth online from anywhere — stream small screen spin-off of the classic movie

Sat, 08/09/2025 - 23:00
Watch Alien: Earth online

An abandoned spacecraft containing suspicious organisms; a fearless female charged with taking them on; a shady corporation overseeing the chaos – Alien: Earth looks like it will slot into the franchise canon perfectly! US viewers can tune into Alien: Earth live on FX or online via Sling TV and Hulu, while it's on Disney Plus elsewhere around the globe. Read on for how to watch Alien: Earth online from anywhere with a VPN.

Premiere date: Tuesday, August 12 at 8pm ET / PT

US broadcast: FX via Sling TV

Global streams: Hulu (US) | Disney Plus (UK, CA & AU)

Use NordVPN to watch any stream

Stepping into the big shoes of Ripley, Wendy (played by Sydney Chandler) is the central heroine of Alien: Earth. A hybrid ("a humanoid robot infused with human consciousness"), she leads a team that investigates the USCSS Maginot space vessel that has crashed to Earth in suspicious circumstances two years prior to the events of the original Alien movie. No prizes for guessing that the creatures they find on board are far from friendly or obedient.

Diehard fans of the original movies worried that Alien: Earth will be yet another disappointing spin-off probably needn't be – creator Noah Hawley has form when it comes to reimagining beloved films for the small screen. His Fargo anthology series won three Golden Globes, three Primetime Emmys and a legion of fans.

Also starring Timothy Olyphant (Justified), Alex Lawther (The End of the F***ing World), and Samuel Blenkin (Black Mirror), below we have all the information you need on where to watch Alien: Earth online and stream every episode from wherever you are.

How to watch Alien: Earth online in the US

Alien: Earth premieres in the US on FX at 8pm ET / PT on Tuesday, August 12 with a double bill. Further episodes will go out one at a time in the same slot weekly.

Cord cutters can access FX via an OTT service such as our favorite, Sling TV. Sling Blue carries FX and starts at just $45.99 a month with 50% off your first month.

Episodes will also be available to stream online at the same time they air on the Hulu streaming service. Plans start from $9.99 per month, or get loads more content for just one dollar more with the Disney Plus Bundle.

Have one of these subscriptions but find yourself away from home when Alien: Earth is on? You can still access your usual streaming services from anywhere by using a VPN.

Get 50% off your first month of Sling TV
Sling TV gives you live TV at an affordable price. The Sling Blue package includes more than 50 channels including FX, ABC, Fox and NBC (in select cities), AMC, Bravo, Food Network, HGTV, Lifetime and USA.

How to watch Alien: Earth online from outside your country

If you’re traveling abroad when Alien: Earth episodes air, you’ll be unable to watch the show like you normally would due to annoying regional restrictions. Luckily, there’s an easy solution.

Downloading a VPN will allow you to stream the show online, no matter where you are. It's a simple bit of software that changes your IP address, meaning that you can access on-demand content or live TV just as if you were at home.

Use a VPN to watch Alien: Earth from anywhere.

Editor's Choice

NordVPN – get the world's best VPN
We regularly review all the biggest and best VPN providers and NordVPN is our #1 choice. It unblocked every streaming service in testing and it's very straightforward to use. Speed, security and 24/7 support available if you need – it's got it all.

The best value plan is the two-year deal which sets the price from $2.91 per month, and includes an extra 4 months absolutely FREE. There's also an all-important 30-day no-quibble refund if you decide it's not for you.

- Try NordVPN 100% risk-free for 30 days and get an Amazon Gift Card included right now!

How to watch Alien: Earth online in Canada, UK, Australia and worldwide

Viewers outside the US can watch Alien: Earth on Disney Plus, the show’s international home. In Canada, new episodes arrive weekly every Tuesday, with the first two landing on August 12. They land on Wednesdays in the UK and Australia from August 13.

You can take a look at Disney Plus prices and plans where you are, starting for as little as £4.99 / CA$8.99 / AU$15.99 per month.

Away from home? You can still connect to your usual VOD services by downloading a VPN and pointing your location back to your home country.

Alien: Earth Need to Know

Can I watch Alien: Earth for free?

The show isn't on any free services, but US viewers can use the Hulu 7-day free trial to watch episodes of Alien: Earth for free.

Alien: Earth episode guide

Alien: Earth is set to consist of eight episodes, which will premiere in the US on the following schedule:

  • Episode 1 - "Neverland": Tuesday, August 12
  • Episode 2 - "Mr. October": Tuesday, August 12
  • Episode 3 - "Metamorphosis": Tuesday, August 19
  • Episode 4 - "Observation": Tuesday, August 26
  • Episode 5 - "Emergence": Tuesday, September 2
  • Episode 6 - "The Fly": Tuesday, September 9
  • Episode 7 - "In Space, No One": Tuesday, September 16
  • Episode 8 - "The Real Monsters": Tuesday, September 23
Alien: Earth trailer

Alien: Earth trailers began appearing in June this year. Here is the official trailer from FX:

Alien: Earth cast
  • Sydney Chandler as Wendy
  • Timothy Olyphant as Kirsh
  • Alex Lawther as CJ
  • Samuel Blenkin as Boy Kavalier
  • Essie Davis as Dame Silvia
  • Adarsh Gourav as Slightly
  • Kit Young as Tootles
  • David Rysdahl as Arthur
  • Babou Ceesay as Morrow
  • Jonathan Ajayi as Smee
  • Erana James as Curl
  • Lily Newmark as Nibs
  • Diêm Camille as Siberian
  • Adrian Edmondson as Atom Eins
Can I watch Alien: Earth on Netflix?

No, Alien: Earth isn't available on Netflix anywhere around the world.

You can catch all the action on Hulu in the US and on Disney Plus in other territories around the world.

VPN services are evaluated and tested by us in view of legal recreational use. For example:
a) Access to services from other countries (subject to the terms and conditions of that service).
b) Safeguarding your online security and making your online privacy more robust when abroad.
Future plc does not support nor condone the illegal or malicious use of VPN services. We do not endorse nor approve of consuming pirated content that is paid-for.

Categories: Technology

No, that's not a misprint - Sandisk unveils massive 256TB SSD, but it will power the next generation of AI workloads, so don't think you'll ever get one

Sat, 08/09/2025 - 17:32
  • Sandisk’s 256TB SSD skips cache entirely, raising concerns about short-burst workload performance
  • Claims of faster speeds remain unverified without public benchmarks or IOPS performance numbers
  • Direct Write QLC may sacrifice speed in exchange for higher reliability and data integrity

Sandisk has announced a 256TB SSD, the UltraQLC SN670, which is set to ship in the first half of 2026.

This model represents the largest SSD ever revealed by the company, marking a bold step toward high-density storage solutions tailored for AI and hyperscale infrastructure.

Although the company plans to release the 128TB version to testers within weeks, full commercial availability remains months away.

An architecture built for scale, not speed

At its core, the SN670 is built on a 218-layer BiCS 3D NAND architecture and features a CBA (CMOS directly Bonded to Array) 2Tb die.

It connects through a PCIe Gen5 NVMe interface and is part of Sandisk’s new UltraQLC platform.

Unlike conventional SSDs that buffer data through pseudo-SLC caches, this model uses a “Direct Write QLC” approach.

This simplifies the writing process and makes the drive more power-loss safe, but it also introduces tradeoffs, especially when it comes to performance under heavy or short-burst loads.

Without an SLC cache, the SN670 may suffer from slower short-burst writes, inconsistent performance under load, and increased controller demands, making it less responsive during intensive or unpredictable workloads.

However, Sandisk claims the SN670 delivers over 68% faster random reads and 55% faster random writes compared to a leading 128TB Gen 5 QLC SSD.

The sequential read speeds are over 7% better, while sequential write speeds improve by more than 27% in internal comparisons.

Sandisk has emphasized benefits like Dynamic Frequency Scaling, which is said to improve performance by up to 10% at the same power level.

It also claims the Data Retention profile could reduce recycling wear by as much as 33%.

Both features are intended to enhance longevity and reduce energy consumption.

However, none of these claims is backed by independently verifiable performance data such as absolute read/write speeds or endurance figures.

Internally, the UltraQLC SN670 is supported by a custom controller and firmware, which Sandisk says enables better latency and bandwidth, but without actual benchmarks or IOPS comparisons, these statements remain marketing-driven projections.

It is worth noting earlier iterations of Sandisk’s enterprise drives using QLC NAND showed limitations compared to TLC-based models.

In this case, native QLC programming latencies could reach 800–1200 microseconds, several times slower than SLC-based designs.

Sandisk may be relying on optimizations like large DRAM buffers or advanced die parallelism, but such architectural details have yet to be confirmed.
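As a rough illustration of why die parallelism matters here, the back-of-envelope sketch below (using assumed figures for page size, program latency, and die count, none of which Sandisk has disclosed) shows how many individually slow QLC dies writing at once can still add up to a fast drive:

```python
# Illustrative sketch: aggregate write throughput of a NAND array
# when many dies program pages in parallel. All numbers are assumptions.

def write_throughput_mb_s(page_kb, program_us, dies):
    """Aggregate write throughput when `dies` NAND dies program in parallel."""
    pages_per_sec_per_die = 1_000_000 / program_us  # programs completed per second
    return pages_per_sec_per_die * page_kb / 1024 * dies

# Assumed: 16 KB page, 1000 µs QLC program latency (mid-range of 800-1200 µs).
print(f"Single QLC die:       {write_throughput_mb_s(16, 1000, 1):.1f} MB/s")
print(f"256 dies in parallel: {write_throughput_mb_s(16, 1000, 256):.1f} MB/s")
```

Under these hypothetical figures, one die manages only ~15.6 MB/s, but 256 dies striped together reach ~4 GB/s – which is why a high-capacity drive with a slow per-die program time can still post competitive sequential numbers, while short bursts that touch only a few dies remain exposed to the raw latency.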

The final product will arrive in the U.2 form factor initially, with more variants expected later in 2026.

For now, Sandisk’s 256TB drive is a symbolic leap toward future data infrastructure, not a realistic option for mainstream users.

Via Blocks and Files

You might also like
Categories: Technology

Pages