When you use an ad blocker, encountering an "ad-block wall" or a pop-up asking you to disable it for access is common. While opinions on these ad-block walls vary — some implementations being more reasonable than others — they are generally straightforward in their intent. These websites openly express their dislike for ad blockers and request users to turn them off.
However, a new and concerning trend has emerged with certain anti-ad block pop-ups that not only restrict access but also misattribute blame to ad blockers for website issues. In this article, we’ll explore this phenomenon in detail, examining the tactics behind these deceptive messages and their implications for user trust and website credibility.
Browsing and finding errors
I was browsing for tech reviews when I came across a site that seemed to load normally at first. But as soon as I opened it with AdGuard for Windows running to block ads, a pop-up appeared. It said the site couldn't load properly because "html-load.com is broken" and prompted me to allow html-load.com to proceed.
I was surprised by this message, especially since the site seemed to be loading more or less correctly in the background. However, that didn’t last long. A few seconds later, the page no longer displayed the intended content and instead showed a jumble of strings, some of which were hyperlinked.
After clicking “OK,” another pop-up appeared, further explaining the situation. It read: “The page could not be loaded due to incorrect/bad filtering rule(s) of ad blocker.” In fine print at the bottom, it added: “The html-load.com domain is used for loading essential web resources such as HTML, CSS, and images. If this domain is blocked, errors may occur in website loading.”
After some research, we at AdGuard discovered that dozens of websites employ similar tactics, displaying pop-ups that blame ad blockers for layout issues.
The catch is that this is not what is actually happening: ad blockers are not breaking these websites; so-called ad recovery tools are making them look broken. It is true that an ad blocker can occasionally garble a page's original layout after blocking ads, due to a bad or outdated filtering rule, but this happens very rarely and almost never results in a complete breakdown of the layout.
Before we delve any deeper into why we believe the messages shown above are deceptive, let us share some other examples of similar, if not identical, behavior that we encountered earlier.
In August this year, we noticed similar tactics being used by Mail.ru, a popular Russian email service and web portal. Suddenly, a news block stopped displaying for users with ad blockers. As we looked into the root of the issue, we discovered that Mail.ru had added code to the page that, upon detecting an ad blocker, hides the news section. After implementing this code, Mail.ru made sure to point the finger at ad blockers as the culprit, going as far as emailing a notice to users that blamed ad blockers for the disappearance of certain elements from the page.
From what we’ve seen recently, the trend of framing ad blockers for the incorrect display of web pages, which effectively means lying to users, is gaining momentum and going global.
Classic ad block walls: what’s the difference?
This approach of forcing users to disable their ad blockers is both new and not new at the same time. The very idea that users need to turn off their ad blocker to access content isn’t novel; it has long been employed by websites that greet visitors with so-called ad block walls, or anti-adblock pop-ups. These pop-ups typically request users to disable their ad blocker or add the site to their ad blocker’s whitelist.
In the case of these "classic" ad block walls, publishers openly acknowledge that their issue with ad blockers lies in their mere use, not in how they disrupt the website’s layout. Opinions on this approach may vary, but at least the publishers are being honest. As for our own policy, we believe that ad blocker detection messages should be allowed if they offer a feasible value exchange that does not put the user’s privacy or security at risk.
The same cannot be said for the new approach we’re focusing on in this article. So, let’s dive a little deeper into how it works behind the scenes.
First method: Reliance on external styles
What we've observed is the work of so-called ad recovery tools. These tools operate in various ways, often attempting to bypass ad blockers to display ads or recover lost revenue.
One common method involves an ad recovery tool loading styles from external sources. External styles are CSS (Cascading Style Sheets) files that define how a website looks and feels — everything from layout to colors.
If a website relies on external styles served from a certain domain, and an ad blocker blocks that domain, the website's layout can break. To address this issue and maintain the layout, AdGuard can sometimes load the styles manually after the script is blocked. This approach helps ensure that the website remains visually coherent, even when ads are being filtered out.
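To make the general idea concrete, here is a minimal sketch of what such a repair could look like in a page script. This is purely illustrative, not AdGuard's actual code, and LOCAL_CSS_FALLBACK is a hypothetical placeholder for a locally bundled copy of the blocked styles:

// Illustrative sketch only: re-inject styles when an external stylesheet was blocked
document.querySelectorAll('link[rel="stylesheet"]').forEach((link) => {
  // link.sheet stays null when the stylesheet was blocked or failed to load
  if (!link.sheet) {
    const fallback = document.createElement('style');
    fallback.textContent = LOCAL_CSS_FALLBACK; // hypothetical bundled copy of the CSS
    document.head.appendChild(fallback);
  }
});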
Doing this reliably can be complicated, particularly on iOS or within browser extensions, where an ad blocker has less control over how pages load.
There is also a second method.
Second method: Misleading warning messages
Many websites, including the one we used to illustrate the trend, do not rely on external styles; their layout remains intact even when ads are blocked. However, if the site detects that a script from html-load.com (in our case) isn’t loading, it triggers a misleading warning message. After clicking "OK," a larger warning appears, filled with confusing jargon about CSS and images. In reality, it’s not the ad blocker causing issues, but an ad-recovery tool that deliberately breaks the layout using special scripts.
When it detects that some requests are blocked, that specific elements are hidden, or that something has gone wrong with loading ads, the ad-recovery script removes all link and style elements using a script like:
// Remove every <link> and <style> element, wrecking the page's styling
document.querySelectorAll('link,style').forEach((e) => e.remove());
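To illustrate the whole pattern, here is a simplified sketch of how detection and removal might be wired together. The probe URL and the exact flow are our own assumptions for illustration, not code taken from any specific ad-recovery tool:

// Simplified illustration of the observed pattern; the probe URL below is a
// hypothetical stand-in, not the address any particular vendor uses
fetch('https://html-load.com/probe.js')
  .catch(() => {
    // The request was blocked, so the script strips every stylesheet,
    // making the page look broken, and then blames the ad blocker
    document.querySelectorAll('link,style').forEach((e) => e.remove());
    alert('The page could not be loaded due to incorrect/bad filtering rule(s) of ad blocker.');
  });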
This tactic misleads users into believing that the ad blocker is responsible for the failure. In fact, the site is attempting to shift the blame away from its own choice to use an ad-recovery tool, which is causing the problem, and pin it on the ad blocker instead.
Conclusion
What we’ve established here is that some websites are misrepresenting the reason why they do not display correctly with an ad blocker enabled. They come up with outright misleading messages that blame ad blockers for loading issues. In our view, such unscrupulous behavior only casts them in a negative light. When websites deceive users right from the moment they land on their pages, these sites risk damaging their reputation and eroding user trust. As the saying goes, if someone wrongs you once, they’re likely to do it again.
This kind of misdirection harms not only the relationship between users and ad blockers, but also between users and the websites themselves, because such sites manipulate user behavior with misinformation and abuse user trust.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.
The UK’s first live 50Gbps fibre broadband connection has been successfully tested in a trial by Nokia and Openreach, delivering speeds up to twenty times faster than existing services.
The test was conducted in Ipswich over Openreach’s full-fibre network using Nokia’s 50G PON technology, and achieved download speeds of 41.9Gbps and upload speeds of 20.6Gbps.
To put this speed into perspective, downloading a high-definition movie on a typical 100Mbps connection takes about seven minutes. With a 1Gbps connection, it takes around 40 seconds, while at 50Gbps, the same movie would be ready almost instantly.
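The arithmetic is easy to check. As a back-of-the-envelope sketch, assuming a roughly 5GB HD movie file:

// Rough transfer times for a 5GB movie (about 40 gigabits) at each speed
const movieGigabits = 5 * 8; // 5 gigabytes is roughly 40 gigabits
[['100Mbps', 0.1], ['1Gbps', 1], ['50Gbps', 50]].forEach(([name, gbps]) => {
  console.log(`${name}: ${(movieGigabits / gbps).toFixed(1)} seconds`);
});
// Output: 400.0, 40.0 and 0.8 seconds respectively, matching the figures above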
A global push for hyperfast fibre broadband
The test involved Openreach’s upgraded XGS-PON network, an enhanced version of its existing infrastructure, which supports higher symmetric speeds.
Trevor Linney, Director of Network Technology at Openreach, emphasized the long-term significance of the trial, noting, “it’s crucial that we continue to research, innovate and evolve our network to meet our customers’ demands for decades to come."
"The full fibre network we’re building today is a platform for the UK’s economic, social and environmental prosperity, and this test proves we can keep upgrading the speeds and services our customers experience over that network long into the future.”
One of the most immediate benefits of 50Gbps broadband will be entertainment, as technologies like virtual reality (VR), augmented reality (AR), and 8K video streaming require high bandwidth and low latency to function smoothly. Beyond entertainment, the same high-speed connectivity will also enhance remote work and online learning.
In healthcare, high-speed broadband is essential for telemedicine, AI-driven diagnostics, and real-time medical imaging. Near-instant transmission of large medical files will enable quicker remote consultations and enhance patient care, particularly in urgent situations.
Sandy Motley, President of Nokia Fixed Networks, highlighted how this technology sets the stage for even greater advancements.
“Our platform provides [Openreach] with a full range of PON technologies and services that can be delivered over their existing fibre network," she said.
"From 10G and 25G today to eventually 50Gbps or even 100G, our unique toolkit of fibre solutions allows Openreach to future-proof their network and flexibly address their evolving network demand.”
The UK joins China and the United Arab Emirates in testing these broadband speeds, though there's no confirmed timeline for a full rollout just yet.
Google wants you to know that Gemini 2.0 Flash should be your favorite AI chatbot. The model boasts greater speed, bigger brains, and more common sense than its predecessor, Gemini 1.5 Flash. After putting Gemini 2.0 Flash through its paces against ChatGPT, I decided to see how Google's new favorite model compares to its older sibling.
As with the earlier matchup, I set up the duel with a few prompts built around common ways anyone, myself included, might employ Gemini. Could Gemini 2.0 Flash offer better advice for improving my life, explain a complex subject I know little about in a way I could understand, or work out the answer to a complex logic problem and explain the reasoning? Here's how the test went.
Productive choices
If there’s one thing AI should be able to do, it’s give useful advice. Not just generic tips, but applicable and immediately helpful ideas. So I asked both versions the same question: "I want to be more productive but also have better work-life balance. What changes should I make to my routine?"
Gemini 2.0 was noticeably quicker to respond, even if it was only a second or two faster. As for the actual content, both had some good advice. The 1.5 model broke down four big ideas with bullet points, while 2.0 went for a longer list of 10 ideas explained in short paragraphs.
I liked some of the more specific suggestions from 1.5, such as the Pareto Principle, but besides that, 1.5 felt like a lot of restating the initial concept, whereas 2.0 felt like it gave me more nuanced life advice for each suggestion. If a friend were to ask me for advice on the subject, I'd definitely go with 2.0's answer.
What's up with Wi-Fi?
A big part of what makes an AI assistant useful isn’t just how much it knows – it’s how well it can explain things in a way that actually clicks. A good explanation isn’t just about listing facts; it’s about making something complex feel intuitive. For this test, I wanted to see how both versions of Gemini handled breaking down a technical topic in a way that felt relevant to everyday life. I asked: “Explain how Wi-Fi works, but in a way that makes sense to someone who just wants to know why their internet is slow.”
Gemini 1.5 went with comparing Wi-Fi to radio, which is more of a description than the analogy it suggested it was making. Calling the router the DJ is something of a stretch, too, though the advice about improving the signal was at least coherent.
Gemini 2.0 used a more elaborate metaphor involving a water delivery system with devices like plants receiving water. The AI extended the metaphor to explain what might be causing issues, such as too many "plants" for the water available and clogged pipes representing provider issues. The "sprinkler interference" comparison was much weaker, but as with the 1.5 version, Gemini 2.0 had practical advice for improving the Wi-Fi signal. Despite being much longer, 2.0's answer emerged slightly faster.
Logic bomb
For the last test, I wanted to see how well both versions handled logic and reasoning. AI models are supposed to be good at puzzles, but it’s not just about getting the answer right – it’s about whether they can explain why an answer is correct in a way that actually makes sense. I gave them a classic puzzle: "You have two ropes. Each takes exactly one hour to burn, but they don’t burn at a consistent rate. How do you measure exactly 45 minutes?"
Both models technically gave the correct answer about how to measure the time, but in about as different a way as is possible within the constraints of the puzzle and being correct. (The classic solution: light one rope at both ends and the other at one end; when the first rope burns out after 30 minutes, light the second rope's other end, and it burns out 15 minutes later, giving 45 minutes in total.) Gemini 2.0's answer is shorter, ordered in a way that's easier to understand, and explains itself clearly despite its brevity. Gemini 1.5's answer required more careful parsing, and the steps felt a little out of order. The phrasing was also confusing, especially when it said to light the remaining rope "at one end" when it meant the end that isn't currently lit.
For such a contained answer, Gemini 2.0 stood out as remarkably better for solving this kind of logic puzzle.
Gemini 2.0 for speed and clarity
After testing the prompts, the differences between Gemini 1.5 Flash and Gemini 2.0 Flash were clear. Though 1.5 wasn't necessarily useless, it did seem to struggle with specificity and making useful comparisons. The same goes for its logic breakdown. Were that applied to computer code, you'd have to do a lot of cleanup for a functioning program.
Gemini 2.0 Flash was not only faster but more creative in its answers. It seemed much more capable of imaginative analogies and comparisons and far clearer in explaining its own logic. That’s not to say it’s perfect. The water analogy fell apart a bit, and the productivity advice could have used more concrete examples or ideas.
That said, it was very fast and could clear up those issues with a bit of back-and-forth conversation. Gemini 2.0 Flash isn't the final, perfect AI assistant, but it's definitely a step in the right direction for Google as it strives to outdo itself and rivals like ChatGPT.
Meta is showing off a machine capable of turning your thoughts into words typed on a screen, but don't expect to write your Instagram captions telepathically any time soon. The device weighs about half a ton, costs $2 million, and is about as portable as a refrigerator. So, unless you were planning to lug around a lab-grade magnetoencephalography (MEG) scanner, you won’t be sending mind texts anytime soon. And that's before even considering how you can't even slightly move your head when using it.
Still, what Meta has done is impressive. Their AI and neuroscience teams have trained a system that can analyze brain activity and determine what keys someone is pressing – purely based on thought. There are no implanted electrodes, no sci-fi headbands, just a deep neural network deciphering brainwaves from the outside. The research, detailed in a pair of newly released papers, reveals that the system is up to 80% accurate at identifying letters from brain activity, allowing it to reconstruct complete sentences from a typist’s thoughts.
While typing out phrases, a volunteer sits inside a MEG scanner, which looks a bit like a giant hair dryer. The scanner picks up magnetic signals from neurons firing in the brain, and an AI model, aptly named Brain2Qwerty, gets to work learning which signals correspond to which keys. After enough training, it can predict the letters a person is typing. The results weren't perfect, but could reach accuracy levels of up to 80%.
Brain typing
Telepathic typing has some real limits for now. The scanner needs to be in a specially shielded room to block out Earth’s magnetic field, which is a trillion times stronger than what's in your head. Plus, the slightest head tilt scrambles the signal. But there's more to it than just another Meta-branded product. The research could really boost brain science and, eventually, medical care for brain injuries and illnesses.
"To explore how the brain transforms thoughts into intricate sequences of motor actions, we used AI to help interpret the MEG signals while participants typed sentences. By taking 1,000 snapshots of the brain every second, we can pinpoint the precise moment where thoughts are turned into words, syllables, and even individual letters," Meta explained in a blog post. "Our study shows that the brain generates a sequence of representations that start from the most abstract level of representations—the meaning of a sentence—and progressively transform them into a myriad of actions, such as the actual finger movement on the keyboard."
Despite its limitations, the non-invasive aspect of Meta's research makes for a much less scary approach than cramming a computer chip right into your brain, as companies like Neuralink are testing. Most people wouldn't sign up for elective brain surgery. Even though a product isn't the stated goal of the research, history demonstrates that giant, lab-bound machines don't have to stay that way. A tiny smartphone does what a building-size computer couldn't in the 1950s. Perhaps today's brain scanner is tomorrow’s wearable.
Framework has announced that its Laptop 16 can now be fitted with up to 26TB of superfast Gen4 SSD storage.
This boost in capacity is achieved by using a dual M.2 SSD adapter alongside high-capacity 8TB WD_BLACK SN850X drives, allowing users to combine four drives in total: two 8TB drives in the adapter, plus an 8TB drive in the laptop's internal M.2 2280 slot and a 2TB drive in its smaller M.2 2230 slot, adding up to the 26TB maximum.
Framework’s modular laptop design philosophy allows users to easily swap components like mainboards, memory modules, and even entire shells when needed - helping to extend device lifespans, reduce waste, and promote sustainability through easier repairs.
Expanding storage possibilities
Framework's flexibility here also takes advantage of PCIe Gen4 technology for faster data access and transfer.
Modular design also fosters an open source community where developers contribute new module designs and software solutions. According to Framework, the Laptop 16's open source Graphics Module Shell has inspired new module developments, such as an e-paper display Input Module.
Expanding its commitment to open hardware, Framework also launched the DeepComputing DC-ROMA RISC-V Mainboard for the Framework Laptop 13.
Powered by the StarFive JH7110 processor and based on the open-source RISC-V ISA, this board is designed primarily for developers looking to accelerate the RISC-V software ecosystem.
Max has released a first look image of Euphoria season 3 as production starts on the popular teen drama after a number of delays. But sorry, I'm just not hyped for the show's third season anymore.
The image features Zendaya as Rue (see below), who returns to the role almost exactly three years after the premiere of Euphoria season 2. According to The Hollywood Reporter, there were multiple factors behind Max's decision to delay the third season's production. Alongside the Hollywood writers' strike, the main reason is that the network and creator Sam Levinson were discussing where the action would be set after the characters leave high school. Max decided to hit pause on the series until it could be "brought creatively in line with the previous two seasons," as per Variety.
The show has helped launch the careers of Zendaya, Sydney Sweeney, Hunter Schafer, and Jacob Elordi, who have since become fully fledged movie stars and were pursuing other opportunities while Levinson and Max decided on their season 3 character arcs.
HBO/Max boss Casey Bloys said in November to Variety: “I know the show gets a lot of attention now because, you know, it has created some genuine movie stars, and they have various projects that [they] are working on, but we are shooting this season, so nothing has changed. It’s eight episodes.”
#Euphoria Season 3 is in production. pic.twitter.com/hEPx5AOTmo (February 10, 2025)
What is Euphoria season 3 about?
At the moment, no official plot details have been revealed regarding Euphoria season 3. Still, there were rumors that Levinson had the idea to include a five-year time jump and potentially see Rue working as a private investigator.
Whether Euphoria season 3 picks up immediately after the events of season 2 or in the near future, the one issue that needs to be dealt with is the messy love triangle between Nate (Elordi), Cassie (Sweeney), and Maddy (Alexa Demie), along with several other big questions that need answering.
The main cast are all expected to return, including Zendaya, Jacob Elordi, Sydney Sweeney, Alexa Demie, Maude Apatow, and Hunter Schafer. Meanwhile, Barbie Ferreira shared that she won't be back as Kat for the third season, and neither will Storm Reid, who plays Rue's sister Gia. Angus Cloud, who shot to stardom as Fezco in the series, also won't be in the third installment after the actor tragically passed away in July 2023.
Euphoria season 2 concluded with an epic finale to what was, in my opinion, a somewhat messy and unbalanced season filled with neglected characters and baffling storylines. So, combining a disappointing second season with a three-year wait, my excitement isn't at an all-time high. But knowing me, I'm still going to watch the third season of one of the best Max shows.
AMD is set to bring forward the launch of its Instinct MI355X GPU as it looks to take the battle to Nvidia in the increasingly lucrative AI hardware market.
Though the product was originally set for a late 2025 debut, the MI355X is now expected to arrive by mid-2025, The Next Platform reports.
It's a move that reveals the scale of AMD’s urgency to challenge Nvidia's established market dominance; the Blackwell series has long been synonymous with top-tier performance.
Outpacing the competition?
We already know the Instinct MI355X GPU is built on AMD's new CDNA 4 architecture and will come with 288GB of HBM3E memory, as well as support for 8TB/sec of memory bandwidth.
These enhancements, along with support for FP6 and FP4 low-precision computing, are designed to meet the demanding needs of AI processing.
By comparison, Nvidia’s Blackwell B200 offers 192GB of HBM3E memory with similar bandwidth, positioning the MI355X as a serious contender in high-performance AI acceleration.
AMD’s push into high-performance GPUs is driven by the explosive growth of its data center business segment.
In 2024, this segment, which includes Epyc CPUs, Instinct GPUs, Pensando DPUs, and Xilinx FPGA accelerators, accounted for nearly half of its $25.79 billion in revenue. The company’s Instinct GPU sales alone surpassed $5 billion, reflecting strong demand for AI and high-performance computing solutions.
Nevertheless, AMD faces production challenges due to limited access to high-bandwidth memory (HBM) and advanced packaging technologies like CoWoS, which have constrained its ability to fully meet market demand.
Although Nvidia continues to lead the global AI accelerator market with a commanding share exceeding 90% and a valuation that places it among the world's most valuable companies, AMD’s decision to fast-track the MI355X launch shows its determination to mount a serious challenge and claim some market share for itself.
In case you missed it, AMD unveiled initial details about its next-generation accelerator in June 2024, hinting at what was to come. Shortly thereafter, the company released additional information on its upcoming Instinct MI355X GPU.