Google and the US Government have come together to announce the release of Gemini for Government, described as the combination of “commercial cloud, industry-leading Gemini models, and agentic solutions” for maximum productivity.
The announcement comes from Google and the General Services Administration (GSA) and forms part of the OneGov Strategy for centralized IT procurement and Trump’s AI Action Plan.
Gemini for Government will be valid until 2026 and builds on existing Google agreements with the US Government.
Google launches heavily discounted Gemini for GovernmentGemini for Government will cost $0.47 per agency and follows a Google Workspace deal with savings of 71% for government agencies – all enabled through renewed purchasing power through the GSA’s OneGov Strategy.
“Building on our Workspace offer for federal employees, ‘Gemini for Government’ gives federal agencies access to our full stack approach to AI innovation, including tools like NotebookLM and Veo powered by our latest models and our secure cloud infrastructure, so they can deliver on their important missions,” Google CEO Sundar Pichai commented.
Gemini for Government includes Google-quality enterprise search, video and image generation, NotebookLM AI, Deep Research agents, Idea Generation agents, and support for workers to build their own agents.
“We are a long-term, strategic partner for America, deeply invested in the mission, innovation, and security of our government,” Google Public Sector CEO Karen Dahut noted, pointing to headline figures like Google’s 100,000+ US workers and its data centers and officers across 26 states.
With the deal set to last just one year, it’s unclear what happens next. The GSA could renew or extend the agreement, or competition could once again open up as the White House looks for cheaper or other advanced AI tools.
You might also likeApple TV+’s sudden price hike has left subscribers scrambling to decide whether to cancel their subscription before the increased rate shows up in their next renewal bill, but there are some ways to avoid paying as much as 30% more.
The best I've seen so far is to sign up for an annual subscription, as the 12-month rate remains unchanged for the moment. That means you can not only still get a more than 16% saving on the old monthly cost of Apple TV+ by buying an annual subscription, but you can also save 55% on the newly raised monthly price.
By signing up for 12 months upfront, you can bring the cost of a monthly subscription down to just $8.33 in the US, £7.49 in the UK, and AU$10.83 in Australia (see the table below for how these stack up against the new raised prices).
Apple TV+ global costsAnnual
Monthly equivalent
New monthly prices
US
$99
$8.33
$12.99
UK
£89.99
£7.49
£9.99
AU
AU$129.99
AU$10.83
AU$15.99
Apple also has not announced any further changes to the price of other bundles and offers, which means the cost of its mega subscription service that bundles all of its services, including Apple Music, Apple Arcade, iCloud+, Apple TV+, and more, into one package remains unchanged.
The Apple One subscription bundle has three different pricing tiers depending on whether you want an individual account, family, or more premium features like a higher storage capacity on iCloud, making it a great value for anyone who regularly uses these services. It starts at $19.95 a month in the US, £18.95 per month in the UK, and AU$21.95 per month in Australia.
If you haven't already read our Apple TV+ cost guide, then you might not know that you can also still get a free trial of Apple's streaming service. The introductory offer is available for new and select returning customers, making it a great option if you're a savvy budgeter who hops between subscriptions.
An Apple TV+ free trial is by far the cheapest option to avoid the price hike, and there are multiple ways to get one. The standard trial period is seven days, but you can get a month if you're a student or a new Apple One subscriber – or three months with an Apple device purchase. In the US, you can also get three months if you purchase a Roku device, or six months if you're on a T-Mobile Go5G plan. There are similar offers in the UK too, with EE iPhone contracts and Three bundles.
These are the best ways to avoid missing out on streaming Spike Lee's new movie Highest 2 Lowest, alongside the return of The Morning Show season 4 and Slow Horses season 5 in September. Of course, it's always worth keeping an eye on the best streaming deals, but Apple TV+ isn't known to offer many discounts throughout the year.
Will Apple TV+ introduce an ad-supported tier?If you're like me and signed up to the rare Apple TV+ deal that marked a 70% saving on the usual price of a subscription in April, then your discounted account has probably run out by now – great timing, I know.
There are thankfully ways to save money on an Apple TV+ subscription (as listed above), but for anyone who's looking for a more long-term solution that doesn't involve swapping and changing which of the best streaming services you're signed up to each month, then one could be on the horizon.
Reports have been circling for over a year now that Apple TV+ has been making inroads in the advertising space. From recruiting ad execs to testing ad tools, industry insiders have connected these dots and believe they suggest that Apple TV+ could be next to get an ad-based tier.
That's not too surprising considering the success that Netflix, Disney+, and Prime Video have had following the rollout of similar ad-supported tiers, with the services each announcing an increase in sign-ups off the back of the launch of these new, cheaper tiers.
With Apple increasingly spending more on big-ticket productions and acquiring the expensive rights to big sporting tournaments – namely the Formula 1 rights in the US – it's bound to have an impact on the bottom line, and that means we'll have to cough up for it.
I wouldn't be surprised if we start to hear more about a potential ad-supported tier rolling out on Apple TV+ off the back of the price hike, especially if it impacts subscriber numbers, as Apple's streaming service will undoubtedly want to win back eyeballs.
You might also likeDanish hi-fi company Dali has just announced a new pair of budget stereo bookshelf speakers, and as someone who's been testing the model just above them in the range at home recently, I think these could be very interesting.
The new speakers are called the Dali Kupid, and they're hoping to make you fall in love with their funky color options (as well as more traditional wood finishes), and their promise of detailed, audiophile-pleasing sound for a low price of just £299 (about $400 / AU$625 – no price has been confirmed for the US or Australia at the time of writing).
They sit between the cheapest Dali Spektor 1 bookshelf speakers ($280 / £199 / AU$499) and the impressive Dali Oberon 1 speakers ($600 / £399 / AU$749) – both of which scored five stars from our friends at What Hi-Fi? in their reviews of those products.
I've been using the Dali Oberon 1 at home recently as part of testing a new streaming amp, and they're really quite astounding for the price when it comes to detail and musicality – which means I think these genuinely could be fantastic value.
The Kupid are built with a custom-designed 26mm tweeter paired with a 4.5-inch mid-woofer. While the woofer appears to be very similar to one in the Spektor 1, the combination with a new tweeter and different bass reflex design could produce wider-ranging sound than the Spektor 1.
Though having said that, I should note that the Kupid are rated for slightly less extensive bass frequencies than the Spektor 1 (63Hz for the Kupid and 59Hz for the Spektor) – but spec numbers never tell the whole story with speakers, so I would expect a more full sound from the Kupid when factoring in all elements of the design.
One of the big focuses of the Kupid seems to be making them easy to live with – Dali suggests they should be pretty unfussy to place and get good sound from, and they come with wall brackets in the box as well as rubber feet.
They're also reasonably small, and they can be powered comfortably from 4-ohm amplification, so budget amps should have no problem getting their best sound. Dali says they should sound great quiet as well as loud, so they're suited to a lot of different environments – this is something the Oberon 1 are great at, so I don't doubt it here.
And perhaps coolest of all, they come in five great finishes for different tastes: Black Ash, Walnut, Caramel White, Golden Yellow, and Chilly Blue.
(Image credit: Dali)A great way to step up to the detail of bookshelf speakers?These look like they would pair nicely with something like the Pro-Ject Stereo Box E amp, which costs $349 / £199 and should have enough power for these speakers, plus has Bluetooth built in. That would get you lovely analog audio components from two great hi-fi makers, all in a compact size, plus the convenience of wireless connectivity – all for under £500 total, in the UK.
If you compare to a similar wireless stereo speaker setup – something like a pair of Sonos Era 100 speakers costs only a little less – you'd almost certainly hear a major difference in detail and clarity from the hi-fi system.
With far bigger speaker drivers, more air moved, and more space for powerful components, you'll find that music has a lot more room to express itself than from a smaller system. This usually means you'll get the experience of hearing 'new' elements in songs, or just be able to appreciate them anew with an improved sound profile.
I actually did a comparison listening test recently between a pair of stereo Sonos Era 300 speakers and the Dali Oberon 1 speakers with the new Wiim Amp Ultra powering them – and although the Sonos speakers gave a great account of themselves with the sound dispersal and hefty low-end, the Oberon 1 had a clear edge when it came of mid-range expression, detail across the frequencies, and the handling of complex instrument mixes. Basically, all the things that make you feel really immersed in a song are boosted.
Obviously, we'll have to give these a real test to see if they can do the same at a lower price, but given my recent experience with Dali speakers, and the company's history, I think these look like a good threat to our list of the best stereo speakers.
You might also like…Google has asked app developers to prepare for "upcoming 64-bit Google TV and Android TV devices" by making sure their TV apps are available as 64-bit versions.
Developers have plenty of time to prepare: the new rules come into force in August 2026.
Moving from 32-bit to 64-bit is good news for Google TV and Android TV users with compatible hardware, because 64-bit apps generally deliver faster loading times, less lag and better overall performance.
And streaming fans can see that in action on an Apple TV 4K, because Apple started the 64-bit app transition ten years ago and completed the switchover in 2019.
Many third-party devices, such as the Nvidia Shield, are 64-bit ready. (Image credit: Future)Does Google's 64-bit move mean new hardware?Yes, but not necessarily from Google: while there's been some speculation that Google is working on a new 64-bit Google TV Streamer for launch next year, the operating systems are also used by third party products such as the Nvidia Shield and several of the best TVs, including the Sony Bravia 8 II. Google's blog post notes that three versions of the Nvidia Shield are 64-bit capable.
While Apple removed 32-bit support in tvOS 13 back in 2019, Google isn't following suit. "We’re not making any changes to 32-bit support, and Google Play will continue to deliver apps to 32-bit devices," Google TV product manager Fahad Durrani wrote.
What Google is doing here is asking developers to futureproof their apps, and from next August that means submitting both 32-bit and 64-bit versions for maximum compatibility.
That means users of existing Google TV and Android TV hardware don't need to worry about their apps disappearing or being left without updates in the foreseeable future, but depending on your device it might mean a performance boost is coming next year – or it might mean your next device gets the boost.
You might also likeSwann, the company behind some of the best home security cameras we've tested here at TechRadar, has launched a new compact outdoor camera that you'll never need to take down for recharging, and which doesn't require a subscription plan to save and review your video footage.
The Swann EVO Wireless Solar is a compact weather-resistant camera (much smaller than the all-seeing Swann MaxRanger4K Solar), and can run all day with just 45 minutes of sunlight exposure, giving you plenty of flexibility as to where you mount it.
The camera comes with a 16GB SD card, so you can save your footage locally and keep full control over it. If you do decide that you want to keep clips in the cloud, the free Swann Secure plan gives you 1-7 days of cloud recording for a single camera.
The EVO Wireless Solar records at 2K with a 120-degree field of view, and offers infrared night vision for spotlight-free recording after dark. There's two-way audio as well, letting you speak to visitors and warn off potential intruders.
(Image credit: Swann)The Swann EVO Wireless Solar has a list price of $129.99 / £99.99 / AU$179.95, and is available to buy direct from Swann.
For comparison, the Ring Outdoor Camera Plus Battery is $79.99 / £79.99 / AU$179, but lacks solar charging, and requires a Ring Home plan if you want to store your recordings. For more details of what you get with Ring Home, see our guide do I need a Ring subscription.
The price difference between the wireless Swann and Ring cameras disappears immediately if you want solar charging, since Ring's solar panels start at $39.99 / £39.99 / AU$59 each.
We're hoping to test the Swann EVO Wireless Solar soon, to see how it compares with other wireless cams in its price bracket and above.
You might also likeAs artificial intelligence (AI) tools like ChatGPT, Co-Pilot, Grok and predictive analytics platforms become embedded in everyday business operations, many companies are unknowingly walking a legal tightrope.
While the potential of AI tools provide many benefits - streamlining workflows, enhancing decision-making, and unlocking new efficiencies - the legal implications are vast, complex, and often misunderstood.
From data scraping to automated decision-making, the deployment of AI systems raises serious questions around copyright, data protection, and regulatory compliance.
Without robust internal frameworks and a clear understanding of the legal landscape, businesses risk breaching key laws and exposing themselves to reputational and financial harm.
GDPR and the Use of AI on Employee DataOne of the most pressing concerns is how AI is being used internally, particularly when it comes to processing employee data. Many organizations are turning to AI to support HR functions, monitor productivity, or even assess performance. However, these applications may be in direct conflict with the UK General Data Protection Regulation (GDPR).
GDPR principles such as fairness, transparency, and purpose limitation are often overlooked in the rush to adopt new technologies. For example, if an AI system is used for employee monitoring without their informed consent, or if the data collected is repurposed beyond its original intent, the business could be in breach of data protection law.
Moreover, automated decision-making that significantly affects individuals, such as hiring or disciplinary actions, requires specific safeguards under GDPR, including the right to human intervention.
The Legal Grey Area of Data ScrapingAnother legal minefield is the use of scraped data to train AI models. While publicly available data may seem fair game, the reality is far more nuanced. Many websites explicitly prohibit scraping in their terms of service, and using such data without permission can lead to claims of breach of contract or even copyright infringement.
This issue is particularly relevant for businesses developing or fine-tuning their own AI models. If training data includes copyrighted material or personal information obtained without consent, the resulting model could be tainted from a legal standpoint. Even if the data was scraped by a third-party vendor, the business using the model could still be held liable.
Copyright Risks in Generative AIGenerative AI tools, such as large language models and image generators, present another set of challenges. Employees may use these tools to draft reports, create marketing content, or process third-party materials. However, if the input or output involves copyrighted content, and there are no proper permissions or frameworks in place, the business could be at risk of infringement.
For instance, using generative AI to summarize or repurpose a copyrighted article without a license could violate copyright law. Similarly, sharing AI-generated content that closely resembles protected work may also raise legal red flags. Businesses must ensure their employees understand these limitations and are trained to use AI tools within the bounds of copyright law.
The Danger of AI “Hallucinations”One of the lesser-known but increasingly problematic risks of AI is the phenomenon of “hallucinations”- where AI systems generate outputs that are factually incorrect or misleading, but presented with confidence. In a business context, this can have serious consequences.
Consider a scenario where an AI tool is used to draft a public document or legal summary, in which it includes fabricated company information or incorrect regulations. If that content is published or relied upon, the business could face reputational damage, client dissatisfaction, or even legal liability. The risk is compounded when employees assume the AI’s output is accurate without proper verification.
The Need for Internal AI GovernanceTo mitigate these risks, businesses must act promptly to implement robust internal governance frameworks. This includes clear policies on how AI tools can be used, mandatory training for employees, and regular audits of AI-generated content.
Data Protection Impact Assessments (DPIAs) should be conducted whenever AI is used to process personal data, and ethical design principles should be embedded into any AI development process.
It’s also critical to establish boundaries around the use of proprietary or sensitive information. Employees interacting with large language models must be made aware that anything they input could potentially be stored or used to train future models. Without proper safeguards, there’s a real risk of inadvertently disclosing trade secrets or confidential data.
Regulatory Focus in 2025Regulators are increasingly turning their attention to AI. In the UK, the Information Commissioner’s Office (ICO) has made it clear that AI systems must comply with existing data protection laws, and it is actively investigating cases where this may not be happening. The ICO is particularly focused on transparency, accountability, and the rights of individuals affected by automated decision-making.
Looking ahead, we can expect more guidance and enforcement around the use of AI in business. The UK is currently consulting on its AI Bill which aims to regulate artificial intelligence by establishing an AI Authority, enforcing ethical standards, ensuring transparency, and promoting safe, fair, and accountable AI development and use that businesses must comply with.
AI is transforming the way we work, but it’s not a free pass to bypass legal and ethical standards. Businesses must approach AI adoption with caution, clarity, and compliance to safeguard their staff and reputation. By investing in governance, training, and legal oversight, organizations can harness the power of AI while avoiding the pitfalls.
The legal risks are real, but with the right approach, they are also manageable.
We feature the best cloud document storage.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
President Trump may soon be browsing for the best website builders after ordering improvements to federal government websites and physical spaces in the hope to make them more attractive for both workers and customers.
“The Government has lagged behind in usability and aesthetics,” Trump said in a new Executive Order, noting the need for system modernization that could tackle high maintenance costs in the process.
The Executive Order explains legacy systems can be costly to maintain and costly to American citizens, who can spend more time than necessary trying to navigate them, hence the need for change.
Trump wants to modernize US Government websitesThe Order introduces Trump’s new ‘America by Design’ initiative, which begins with high-touch point sites where citizens are most likely to interact with government agencies.
The formation of a new National Design Studio and the appointment of a Chief Design Officer will oversee the project.
“It is the policy of my Administration to deliver digital and physical experiences that are both beautiful and efficient, improving the quality of life for our Nation,” Trump wrote.
The National Design Studio has been tasked with reducing duplicative design costs, much in the same way that the White House has already started centralizing IT procurement to boost cost efficiency.
It will also use a standardized design for consistency and trust, and improve the quality of public-facing experiences.
Agencies have been given until July 4, 2026 to deliver their initial results after consulting with the Chief Design Officer.
Separate Reuters reporting has revealed Airbnb co-founder Joe Gebbia will lead the National Design Studio as Chief Design Officer, with the Internal Revenue Service set to be the first place to see an overhaul.
Trump’s Order also confirms the “temporary organization” will close in three years, on August 21, 2028, suggesting that site modernization could be complete even before that.
You might also likeWe've just been treated to a host of new Google Pixel devices, including four different Pixel 10 phones, but we also have news about Google devices that aren't coming – including a flip foldable and a successor to the Pixel Tablet from 2023.
Speaking to Mark Gurman and Samantha Kelly at Bloomberg, Google's Vice President of Devices and Services Shakil Barkat confirmed that there are no plans for a Google flip foldable to join the Pixel 10 Pro Fold.
Barkat also ruled out a smart ring, and says the Pixel tablet series is on pause until a "meaningful future" can be figured out for the product category. It seems the likes of Samsung will be left to release those kinds of devices for the time being.
The status on smart glasses, meanwhile, is "TBD" – it seems Google is happy to stay focused, for now. "Every time a new type of category of product gets added, the bar on maintenance for the end user keeps going up," says Barkat. "It's already pretty painful."
The "vanguard" of AIGoogle is focused on Pixel phones and AI (Image credit: Philip Berne / Future)Google execs did also use the interview to hype up what they are working on. Rick Osterloh, who is head of Google's hardware and Android divisions, described the Pixel 10 as a "super strong release" in what is now a "mature category".
The Pixel 11 is almost finalized, apparently, while work has started on the Pixel 12. Google design chief Ivy Ross says that the company is aiming for big visual changes to the Pixel phones "every two to three years" – so watch this space.
As you would expect, the Google team pushed AI as being the big innovation that'll be happening on phones over the next few years, via Gemini and features such as Magic Cue, which surfaces key info from your phone when you need it.
Osterloh says he wants Android to be "on the vanguard of where AI is going", and that Google isn't overly worried about Pixel sales: the phones account for about 3% of the US market at the moment, compared to a 49% share for Apple.
You might also likeIn March, the US Agency for International Development (USAID) employees faced abrupt dismissal by the newly formed Department of Government Efficiency (DOGE). This agency-on-agency downsizing left many employees in security limbo – without jobs but with access to government-issued devices.
There was no immediate revocation of endpoint credentials, remote lockouts, or retrieval of the hardware and its crucial data. “The agency doesn’t even know how to turn off access to the systems for everyone on administrative leave,” said a former deputy administrator.
While unintended, these abrupt public service cuts created endpoint and cybersecurity holes. USAID manages sensitive geopolitical information and yet there was no reliable mechanism to de-provision devices.
This situation highlights a common weakness across federal agencies – device footprints are often large and poorly administered, thereby turning every endpoint into a potential backdoor.
This just isn’t good enough. Let’s look at what every government agency requires to better manage, monitor, and protect its endpoints.
Don’t let ghost devices haunt public sector networksFirst, cuts without security planning exacerbate the problem of “ghost devices”: endpoints that disappear without proper offboarding end up as unknown and unseen attack vectors. These invisible laptops, phones, and tablets across government networks become much more likely when endpoints take a backseat to efficiency.
Unfortunately, agencies solely focused on the budget bottom line often fail to invest in systems that precisely show what devices are on the network, which are active, and who’s using them. Not having this kind of information creates a security headache and inefficiency in the race to efficiency.
When restructuring happens overnight, endpoint management strategies help agencies maintain control even when human resources processes are chaotic. The last thing admins want is to manually track down endpoints by relying on spreadsheets, email trails, or someone’s memory.
If efficiency is the goal, agencies should recognize that dealing with lost or compromised endpoints is ultimately more expensive and embarrassing than investing in proper mobile device management (MDM) from the start.
Don’t wait for trouble to call ITLikewise, with no forcing function or endpoint system in place, response times suffer. The period between when devices go missing or when users leave their positions and admins step in is vital. But understaffed and under-resourced IT teams can create dangerous lags. In this window, bad actors can crack devices to copy files, exploit credentials, and intercept sensitive communications.
If a unified endpoint management solution had been in place at USAID, it would’ve been significantly easier and faster to account for each device even after the dismissals. Access could have been revoked remotely and the data wiped clean – a win for cybersecurity that sidestepped the ensuing bad press.
Another good way to avoid this scenario is by controlling who can access what data and when. This is possible with access and identity management platforms, and most effective when coupled with zero trust. This principle ensures that no device or user is inherently trusted and creates additional security layers that verify each access attempt. This way, even if a device falls through administrative cracks, these systems in concert limit the damage by preventing unauthorized access.
To be truly efficient, government networks need to shift from reactive to proactive postures. This means automated alerts when devices go offline in unusual circumstances, geolocation tracking, remote locking capabilities, and emergency wiping protocols. Whether devices are halfway around the world or down the hall, giving admins these powers goes a long way to nipping live threats in the bud.
Ironically, this approach actually maximizes the value of government technology investments throughout their lifecycle and helps achieve the stated desire for public sector efficiency.
Don’t let good tech die youngGovernment efficiency initiatives often focus on headcount when significant savings can be found in the total cost of tech ownership. The federal government spends almost four times more on technology per employee than other industries. Agencies can lower this figure by improving how they recondition endpoints and return them to the frontlines.
Effective endpoint management creates genuine efficiency by allowing agencies to remotely reset laptops and redeploy them with fresh policies. As a result, rather than premature retirement, admins and agencies can extend hardware lifecycles for substantial savings. This approach also advances sustainability goals and addresses equity gaps when properly wiped devices are redeployed to underserved agencies or programs.
Going forward, the public sector must think holistically about what it’s cutting. Decision-makers must consider both the human cost – thousands of careers disrupted and institutional knowledge lost – and the technical implications of such rapid workforce changes. Frank discussions with admins about how these decisions affect the broader ecosystem are therefore essential.
Letting people go while ignoring their device access and data security is unacceptable. Agencies need both protocols and platforms to ensure devices can be remotely managed and appropriately reassigned. Improved endpoint management won’t solve every challenge in the public sector, but it can help put agencies back in control of their devices and destiny.
We've featured the best endpoint protection software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Peacemaker season 2 has finally made its long-awaited debut – and the popular HBO Max show's latest installment wastes no time letting us know that Batman already exists in the DC Universe (DCU).
Admittedly, previous DCU projects have done that. Creature Commandos episode 6 revealed the DCU's Batman for the first time – albeit in silhouette form – while James Gunn's Superman movie contained a blink and you'll miss it reference to Gotham City via a road sign.
Nevertheless, the first episode of Peacemaker's sophomore season really drives home that Bruce Wayne has been operating as his vigilante alter-ego for some time – and six specific Easter eggs that prove it.
Full spoilers immediately follow for Peacemaker 2's inaugural chapter.
Krank Toys is a nationwide toy and model business that exists in the DC Universe (Image credit: HBO Max)The first of those happen 12 minutes into this season's opener, titled 'The Ties That Grind'.
As Leota Adebayo and Chris Smith pull up to the place where the latter's Justice Gang interview is being held, said venue bears the name Krank Toys.
A business founded and run by Griffin Krank, and later taken over by his son Cosmo after his father's death, the Gotham City-based enterprise and the Krank family weren't created for DC Comics. Indeed, they were specifically made for 2004 animated series The Batman (NB: not to be confused with its 2022 Matt Reeves-directed movie namesake). In that show, the Kranks made futuristic but dangerous toys and, after Bruce Wayne used his considerable clout to shut it down, Griffin adopted the supervillain pseudonym Toymaker to enact revenge on Wayne.
Jaina Hudson, is that you? (Image credit: HBO Max)The DCU TV Original's next Batman Easter egg appears moments later. As Smith approaches the building's entrance, one of its bodyguards opens the door and a visibly upset woman, who's dressed in a white rabbit costume, walks past Smith.
A criminal known as the White Rabbit – real name Jaina Hudson – exists in the comics. And, given Hudson's alias appears in episode 1's end credits sequence, it's clear this individual in 'The Ties That Grind' is the Gotham-based socialite who moonlights as a crook. So, it's another fun call-back to a member of the Caped Crusader's stacked rogues gallery.
Sasha Bordeaux's backstory has been altered for Peacemaker season 2 (Image credit: HBO Max)15 minutes pass before the DCU Chapter One TV series drops its next Batman reference in the form of ARGUS agent Sasha Bordeaux, played here by Sol Rodríguez.
Created by Greg Rucka and Shawn Martinbrough, and debuting in 'Detective Comics #751' in December 2000, Bordeaux has big ties to The Dark Knight. I won't spoil anything about her comic book history here in case any of it has been adapted for her live-action take in the DCU. However, speaking to me ahead of the show's return, Rodríguez told me she's "really love" it if Bordeaux appears in the DCU's Batman film, which is currently titled The Brave and the Bold. We'll see if her wish is granted post-season 2.
It sounds like the general populace is growing increasingly concerned about metahumans... (Image credit: HBO Max)Easter egg number four appears – or, rather, is heard – during the news report Rick Flag Sr is watching before Bordeaux enters his office to inform him of the "glitch" they've been keeping tabs on at Smith's home (i.e. Smith using the Quantum Unfolding Chamber to access another dimension).
In said news bulletin, the anchor says there have been three breakouts at Belle Rive Penitentiary and Arkham over the past two months. The latter is, unsurprisingly, a reference to Arkham Asylum, the psychiatric hospital that supervillains captured by Batman are sent to.
A blink and you'll miss it creature feature (Image credit: HBO Max)The penultimate Easter egg can be glimpsed in the trophy room of the Smith household that exists in the alternate universe we'll see throughout season 2.
As the DCU's Chris Smith inspects some framed newspaper clippings of his family's heroic exploits in this parallel dimension, one such article reveals they thwarted something know as the Rainbow Creature. A powerful Abominable Snowman-type character, this monster first appeared in 'Batman Vol. 1 #134' in September 1960. Created by Bill Finger and Sheldon Moldoff, it hails from South America and has access to various superpowers, including pyrokinesis and the ability to vaporize objects, by way of its multi-colored fur.
Keith Smith namedrops the city patrolled by Nightwing in this season's premiere (Image credit: HBO Max)The final reference in one of the best HBO Max shows' second season isn't specific to Batman. Given his ties to the hero it's related to, though, it still counts.
So, what is it? When the DCU's Smith stumbles outside and meets his brother Keith, who's alive and all grown up in this alternate reality, the latter says to the former "I thought you were in Bludhaven". That's the Gotham-adjacent city protected by Nightwing, aka Dick Grayson. He's one of many individuals to assume the superhero identity of Robin, i.e. Batman's sidekick, in DC Comics.
For more on Peacemaker's latest season, read my Peacemaker season 2 release schedule guide to find out when new episodes will be released. Then, check out my Peacemaker season 2 review, which contains clues about what might happen in episodes 2 through 5.
You might also likeGenAI tools such as ChatGPT, Gemini, and Copilot have become essential components of modern workflows, significantly saving countless hours and revolutionizing various tasks. 42% of enterprises actively deployed AI, and 40% are experimenting with it and 59% of those using or exploring AI have accelerated their investments over the past two years.
Their widespread adoption across industries has demonstrably boosted efficiency and productivity, making them indispensable for many organizations across almost all industries.
However, the rapid integration and reliance on GenAI tools have inadvertently fostered a dangerous sense of complacency within organizations.
While these tools are easy to use and offer widespread benefits, ignoring the consequences of misuse and even malicious use has led to a serious underestimation of the inherent risks tied to their deployment and management, creating fertile ground for potential vulnerabilities.
When Innovation Hides ExposureWhile typical users may not consider the vulnerabilities that GenAI tools bring, many CISOs and AI leaders are increasingly concerned about the misuse that’s unfolding quietly beneath the surface.
What often appears to be innovation and efficiency can, in reality, mask significant security blind spots. By 2027, it is estimated that over 40% of breaches will originate from the improper cross-border use of GenAI. For CISOs, this isn't a distant concern but an urgent and growing risk that demands immediate attention and action.
The exploitation of everyday AI users isn’t just a scary headline or a cautionary tale from IT—it’s a rapidly growing reality. These emerging attacks are sweeping across industries, catching many off guard. Just recently, researchers disclosed a Microsoft Copilot vulnerability that could have enabled sensitive data exfiltration via prompt injection attacks.
The ongoing underestimation of basic AI usage risks within organizations is a key driver of this emerging danger. The lack of awareness and robust policies surrounding the secure deployment and ongoing management of GenAI tools is creating critical blind spots that malicious actors are increasingly exploiting.
A New Security MindsetThe evolving landscape of GenAI presents a critical inflection point for cybersecurity leaders. It's imperative that CISOs and industry professionals move beyond the initial excitement and acknowledge that these tools have inherent risks that have been introduced by the widespread adoption of these powerful tools.
The current situation, marked by rapid integration and security oversight mixed with dangerous complacency, demands a fundamental shift in how organizations perceive and manage their digital defenses especially with AI.
The future of network security hinges on intelligent, comprehensive monitoring systems capable of understanding normal behavioral patterns and rapidly identifying deviations. This approach is not only crucial but paramount for detecting sophisticated threats that bypass traditional defenses.
Tools that can defend and protect against highly sophisticated threats need to include advanced capabilities at their core. Particularly, when considering scenarios where seemingly innocuous actions, like using a basic GenAI chatbot could lead to the silent exfiltration of sensitive corporate data, without user interaction or explicit warnings.
In these instances, traditional signature-based detection methods would likely prove ineffective. Therefore, it's imperative to begin leveraging advanced pattern recognition and behavioral analysis to combat threats specifically designed to evolve and evade detection.
Trust in AI Starts from WithinWith the rise of increasingly sophisticated threats pressing closer to the enterprise perimeter, organizations must take decisive and actionable steps. This begins with addressing internal distrust of AI. Roughly three-quarters of AI experts think the technology will benefit them personally, however, only a quarter of the public says the same.
Fostering an environment where employees understand both the advantages and the risks associated with its use is essential to bridging this gap in perception. The promotion of responsible usage across the organization lays the groundwork for a more secure adoption of GenAI technologies.
While traditional human error remains a threat, the widespread adoption of GenAI has created a new, more subtle class of behavioral risks. Equipping employees with the knowledge to use GenAI tools securely is essential and should include comprehensive training, setting clear usage guidelines, and implementing robust policies tailored to defend against AI-driven attack vectors.
As the AI landscape adapts and changes, security frameworks must be continuously updated to keep pace with these evolving threats and to ensure appropriate safeguards are in place.
Real Security Starts with Behavior ChangeDespite technological advancements, attackers continue to exploit human error. Today’s most significant data exposure isn't necessarily from a phishing link, while still a prime point of entry for threat actors; it's from an employee pasting proprietary source code, draft financial reports, or sensitive customer data into a public AI chatbot to work more efficiently.
In turn, companies must adopt strategies that address human behavior and decision-making. In an attempt to boost productivity they inadvertently externalize intellectual property.
This requires companies to evolve their approach beyond periodic training. It demands continuous engagement focused on GenAI-specific scenarios: teaching employees to recognize the difference between a safe, internal AI sandbox and a public tool.
It means creating a culture where asking "Can I put this data in this AI?" becomes as instinctual as locking your computer screen. Employees must be equipped to understand these new risks and feel accountable for using AI responsibly.
Demonizing AI usage, even basic use will never solve the problem at hand. Instead, embracing a secure approach to GenAI from a holistic point of view empowers employees to leverage these powerful tools with confidence to maximize their operational advantages while minimizing exposure to risk.
By leading with clear guidance, highlighting potential warning signs and operational risks, organizations can significantly reduce the chances of data breaches related to improper AI usage, ultimately protecting critical assets and preserving organizational integrity.
We feature the best firewall for small business.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
In 2025, enterprises are caught in an invisible battle between algorithms. Artificial Intelligence (AI) has emerged as a tool for both defenders and attackers. This duel between “Good AI” and “Bad AI” is reshaping how enterprises approach security in an increasingly connected and complex world.
While AI empowers organizations to detect, prevent, and counteract evolving threats, bad actors have weaponized the same technology to create more sophisticated and elusive attacks.
So, what defines Bad AI, and how does Good AI counter it? And more importantly, can enterprises embrace advanced cybersecurity strategies to remain resilient in the face of this evolving threat landscape?
AI – The Double-Edged SwordThe clash between Good AI and Bad AI is a battle of intelligence, adaptation, and creativity, driven by ever-evolving systems. Bad AI, embedded in malicious software, is advancing fast, allowing hackers to bypass defenses, infiltrate networks, and compromise sensitive data through behavior modification and imitation of legitimate system activities. For instance, malware like Emotet has leveraged AI to evolve, making it increasingly elusive and harder to neutralize.
In response, Good AI counters these threats by analysing massive datasets, identifying risks, and even predicting attacks before they occur, positioning itself as an enterprise’s strongest ally in staying one step ahead of attackers.
AI is not solely about the adversarial. Trust, transparency, and human alignment are the main goals of good AI. It is intended to preserve privacy, ethics, and security while also evolving responsibly.
Bad AI exploits its power - hiding behind layers of opacity, bias, and harmful intentions.
The cost of inaction: Why proactive defense is non-negotiableAccording to research by the World Economic Forum, cybercrime costs are expected to reach $10.5 trillion annually by 2025, with the costs to British businesses amounting to £27 billion a year.
This not only reflects financial losses, but also broader consequences such as weakened trust, reputational damage, and operational disruptions caused by cyberattacks. And it doesn't end there. These attacks will increase in frequency and difficulty as the AI era progresses.
The rapid evolution of AI means that enterprises can no longer depend on traditional, reactive security measures. Cyberattacks are not only growing in volume but becoming increasingly tailored, adaptive, and intelligent. Attackers are leveraging AI to not only craft sophisticated phishing schemes but compromise privileged accounts and deploy evolving malware.
Without robust, proactive strategies, organizations risk falling behind and leave themselves vulnerable to breaches that can disrupt their operations.
To navigate this complex landscape, enterprises must embrace a resilience mindset - one that prioritizes not just protection but also adaptability, foresight, and innovation. Here are five key strategies enterprises can adopt to build proactive, AI-driven defenses:
1. AI-powered threat detection and responseTraditional defense mechanisms are no longer sufficient, as enterprises now require predictive AI-powered threat intelligence platforms that analyze vast datasets, detect anomalies, and identify attack vectors before they occur.
Operating autonomously, these platforms neutralize threats in real time, reducing human error and enabling faster, data-driven responses. AI systems can anticipate phishing attempts by analysing user behaviors and patterns, flagging suspicious activity early to prevent breaches.
This real-time protection, combined with continuous evolution, ensures defenses stay effective against ever-changing threats.
2. Zero Trust enhanced by AIThe Zero Trust model is a cornerstone of modern cybersecurity, and its effectiveness increases exponentially when combined with AI. AI-powered Identity and Access Management (IAM) systems evaluate risks in real time by analysing factors like user behaviors, geolocation, and device health.
Through continuous monitoring of access points, AI ensures only authorized individuals access sensitive data, significantly reducing the risk of both insider threats and external breaches.
3. Self-healing networksAI-powered self-healing networks will redefine resilience by automatically identifying security breaches, isolating compromised components, and restoring them to a secure state without human intervention.
By leveraging AI-enabled automation, these networks ensure business continuity during sophisticated attacks, mitigating risks, reducing operational downtime, and keeping enterprises functional and efficient amid evolving threats.
4. Blockchain-integrated data integrityAs AI becomes pivotal in cybersecurity, ensuring data integrity is essential, and blockchain technology offers a robust solution by providing an unchanging ledger that guarantees authenticity and prevents tampering.
By leveraging blockchain-enabled frameworks, organizations can secure transactions in real time, flag anomalies, and enhance transparency, ensuring data remains trustworthy even in highly targeted attack environments.
5. Collaborative Threat IntelligenceAI-driven threat intelligence platforms enable organizations to share information on attack vectors, vulnerabilities, and tactics globally, fostering a collaborative approach to cybersecurity.
This strengthens industry resilience and enhances defenses against sophisticated adversaries, ultimately helping organizations stay ahead of emerging threats.
Security frameworksFor CIOs, CISOs, and business leaders, the challenge is no longer whether to adopt AI in cybersecurity frameworks, but how to do so effectively. The key lies in understanding AI’s full potential, not just as a protective force but as a dynamic, evolving capability that requires continuous refinement, training, and alignment with strategic objectives.
Organizations must invest in equipping their teams with the knowledge and skills to work with AI systems effectively. Technical training, adaptive defenses, proactive monitoring, and an understanding of both the capabilities and limitations of AI will define the businesses that thrive in an AI-powered future.
Successful defense with smarter AIThe battle between Good AI and Bad AI is far from over - and it’s one that will continue to shape the cybersecurity landscape for years to come. However, the enterprises that lead this fight will be those that not only deploy AI for defense but also foster a deep understanding of how it works, how it evolves, and how it can fail.
By transitioning from reactive to proactive AI-driven strategies, businesses can ensure long-term digital resilience. Equipping AI with moral and ethical guardrails, aligning it to the greater good, and investing in continuous innovation will be critical for building smarter, stronger defenses.
We feature the best endpoint protection software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Quantum computing has long occupied the edges of our collective imagination – frequently mentioned, rarely understood. For many, it remains a distant prospect rather than an immediate concern. But that mindset is fast becoming a risk in itself.
While understanding may be limited today, that must quickly change. Quantum computing has long been viewed as a technology several decades away, but recent breakthroughs suggest it could arrive far sooner.
Google’s Willow and Microsoft’s Majorana chips signal rapid technical acceleration, and the UK Government’s £500 million investment in quantum innovation confirms that global leaders are no longer treating this as speculative, but as a strategic priority.
Despite this, only 35% of professionals surveyed by ISACA believe quantum will enter the mainstream within years rather than decades, highlighting just how much industry perception is lagging behind reality.
That disconnect extends beyond expectations – it’s impacting readiness. Most organizations have yet to factor quantum into their cybersecurity planning, even though the technology is set to fundamentally reshape how vast sectors of society operate online.
This isn’t just about adopting a new form of computing – it’s about protecting the systems, economies and infrastructures that underpin our digital lives. And that starts with truly understanding what quantum is, and how it could both redefine and disrupt the cybersecurity landscape.
The Fundamentals: A Primer on Quantum ComputingIf classical computers are powerful calculators, quantum computers are like probability engines, processing information in ways that allow them to explore many possibilities simultaneously.
Classical computing relies on bits, which are binary units of information that can either be 0 or 1. Quantum computers, by contrast, use qubits (quantum bits), which can be both 0 and 1 at the same time – a phenomenon known as superposition. Qubits can also be entangled, meaning the state of one can instantly influence another, even at a distance.
This means quantum computers can perform complex calculations by exploring multiple paths at once, rather than one-by-one. Where a classical computer might take thousands of years to crack encryption software or simulate a protein structure, a quantum computer could, in theory, complete the task in seconds.
But this is not about speed alone – it’s about capability. Quantum computing makes it possible to solve problems previously considered intractable: from modelling complex chemical reactions at the atomic level, optimizing vast and variable systems like global logistics, to breaking the mathematical problems that make today’s encryption secure.
When it comes to AI the effect is expected to be hugely transformational as the capability of Quantum will lead AI to a new era, both in terms of its level of intelligence and value but also in terms of the risks that come along with AI. These breakthroughs will have profound implications for the systems that underpin daily life, including cybersecurity, healthcare, and finance.
Why Quantum Matters: Revolutionary Potential Across SectorsQuantum computers won’t replace classical machines, but they will be used to solve problems that today’s systems simply can’t at exponentially faster speeds. Their ability to handle complexity at scale means quantum computing will unlock solutions that were previously impossible or impractical, with major implications across a range of sectors.
This potential is already being recognized by many in the industry. ISACA’s Quantum Pulse Poll found that a majority (56%) of European IT professionals welcome the arrival of quantum computing, with the same number predicting that it will create significant business opportunities.
In healthcare, quantum systems could accelerate drug discovery by modelling molecules and protein folding far more accurately than classical machines allow. In business and finance, they could transform how organizations optimize supply chains, manage risk, and harness artificial intelligence to process and learn from vast datasets.
In cybersecurity, quantum has the power to redefine how we protect systems and data. Quantum Key Distribution could enable theoretically unbreakable encryption. AI-driven threat detection could become faster and more effective. And quantum-secure digital identity systems could help prevent fraud and impersonation.
But while these developments hold huge promise, they also introduce one of the most serious challenges facing cybersecurity today.
Quantum and Cybersecurity: A Looming DisruptionThis isn’t a distant concern. Over two-thirds (67%) of cybersecurity professionals surveyed by ISACA believe that quantum computing will increase or shift cyber risk over the next decade, and it’s not hard to see why.
At the center of concern is encryption. Today’s most common cryptographic methods, like RSA and ECC, are built on mathematical problems that classical computers can’t solve in practical timeframes. But quantum machines could crack these with relative ease, putting the security of data at serious risk.
This raises the very real threat of “harvest now, decrypt later” where malicious actors steal encrypted data today, intending to unlock it once quantum capabilities arrive. Sensitive information considered secure now, such as financial records, personal data, and classified communications could be exposed overnight.
The implications are vast. If these foundational algorithms are broken, the ripple effect would be felt across every sector. Cryptography underpins not just cybersecurity systems, but digital infrastructure itself, from banking and healthcare to identity verification and cloud computing.
As quantum advances, preparing for this threat is no longer optional. It’s a critical step toward protecting the digital systems we all rely on.
The Reality Check: How ready are we for quantum?While the pace of quantum innovation accelerates, organizational readiness is not keeping up.
Few organizations have started preparations. Just 4% of IT professionals say their organization has a defined quantum computing strategy in place. In many cases, quantum is still entirely off the radar. More than half of respondents (52%) report that the technology isn’t part of their roadmap, with no plans to include it.
Even when it comes to mitigation, most have yet to take basic steps. Despite the risks posed to current encryption standards, 40% of professionals say their organization hasn’t considered implementing post-quantum cryptography, creating worrying potential for disruption.
Part of the challenge lies in awareness. Quantum remains unfamiliar territory for most professionals, with only 2% describing themselves as extremely familiar with the technology. And while the U.S. National Institute of Standards and Technology (NIST) has spent more than a decade developing post-quantum encryption standards, just 5% of respondents say they have a strong understanding of them.
Meanwhile, global progress on quantum development continues to accelerate. Commercial applications are likely to arrive sooner than many expect, yet they may do so in a digital ecosystem unfit to cope. If encryption breaks before defenses are in place, the consequences could be severe, with widespread operational disruption, reputational harm, and regulatory fallout.
Preparing for quantum is no longer a theoretical exercise. The risk is real, and the window for proactive action is closing.
Preparing for the Post-Quantum FuturePreparing for quantum computing isn’t just a technical upgrade – it’s a strategic imperative. Yet most professionals still lack the awareness and skills needed to navigate what’s coming. Quantum education must now be a priority, not just for security teams, but across leadership, risk, and governance functions.
Governments have a role to play too. The UK’s £60 million investment in quantum skills is a strong start, but long-term readiness will depend on sustained collaboration between public and private sectors.
For organizations, action is needed now. That means identifying where quantum could pose a risk, assessing encryption dependencies, and beginning the shift to quantum-safe systems. Crucially, none of this will be possible without the right expertise.
Developing a holistically trained workforce on quantum (whilst continuing to do this for AI) will enable organizations to apply new technologies effectively and securely before the threats materialize.
Quantum brings extraordinary potential, but it also demands urgent preparation. Those who act early will be far better positioned to secure their systems and lead confidently in a post-quantum world.
We've featured the best cloud firewall.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro