It’s a scenario that plays out far too often: A mid-sized company runs a routine threat validation exercise and stumbles on something unexpected, like an old infostealer variant that has been quietly active in their network for weeks.
This scenario doesn’t require a zero-day exploit or sophisticated malware. All it takes is one missed setting, inadequate endpoint oversight, or a user clicking what they shouldn’t. Such attacks don’t succeed because they’re advanced. They succeed because routine safeguards aren’t in place.
Take Lumma Stealer, for example. This is a simple phishing attack that lures users into running a fake CAPTCHA script. It spreads quickly but can be stopped cold by something as routine as restricting PowerShell access and providing basic user training. However, in many environments, even those basic defenses aren’t deployed.
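Screening for the command patterns these lures rely on is one of those routine safeguards. The sketch below is a minimal, hypothetical illustration — the pattern list and function name are assumptions for demonstration, not actual Lumma Stealer indicators — of flagging the kind of copy-paste commands a fake CAPTCHA page asks users to run:

```python
import re

# Hypothetical patterns resembling the copy-paste commands that
# fake-CAPTCHA lures ask users to run; a real deployment would use
# curated, regularly updated indicators instead.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s+-.*(enc|encodedcommand|iex|downloadstring)", re.I),
    re.compile(r"mshta(\.exe)?\s+https?://", re.I),
    re.compile(r"curl\s+.*\|\s*(sh|bash|powershell)", re.I),
]

def looks_like_paste_lure(command: str) -> bool:
    """Return True if a command line resembles a fake-CAPTCHA paste lure."""
    return any(p.search(command) for p in SUSPICIOUS_PATTERNS)
```

Pattern matching like this is deliberately crude; the point is that even a simple guardrail, paired with restricted PowerShell access, raises the cost of these attacks.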
This is the story behind many breaches today. Not headline-grabbing hacks or futuristic AI assaults—just overlooked updates, fatigued teams and basic cyber hygiene falling through the cracks.
Security Gaps That Shouldn’t Exist in 2025

Security leaders know the drill: patch the systems, limit access and train employees. Yet these essentials often get neglected. While the industry chases the latest exploits and talks up advanced tools, attackers keep targeting the same weak points. They don’t have to reinvent the wheel. They just need to find one that’s loose.
Just as the same old techniques are still at work, old malware is making a comeback. Variants like Mirai, Matsu and Klopp are resurfacing with minor updates and major impact. These aren’t sophisticated campaigns, but recycled attacks retooled just enough to slip past tired defenses.
The reason they work isn’t technical, it’s operational. Security teams are burned out. They’re managing too many alerts, juggling too many tools and doing it all with shrinking budgets and rising expectations. In this kind of environment, the basics don’t just get deprioritized, they get lost.
Burnout Is a Risk Factor

The cybersecurity industry often defines risk in terms of vulnerabilities, threat actors and tool coverage, but burnout may be the most overlooked risk of all. When analysts are overwhelmed, they miss routine maintenance. When processes are brittle, teams can’t keep up with the volume. When bandwidth runs out, even critical tasks can get sidelined.
This isn’t about laziness. It’s about capacity. Most breaches don’t reveal a lack of intelligence. They just demonstrate a lack of time.
Meanwhile, phishing campaigns are growing more sophisticated. Generative AI is making it easier for attackers to craft personalized lures. Infostealers continue to evolve, disguising themselves as login portals or trusted interfaces that lure users into running malicious code. Users often infect themselves, unknowingly handing over credentials or executing code.
These attacks still rely on the same assumptions: someone will click. The system will let it run. And no one will notice until it’s too late.
Why Real-World Readiness Matters More Than Tools

It’s easy to think readiness means buying new software or hiring a red team, but true preparedness is quieter and more disciplined. It’s about confirming that defenses such as access restrictions, endpoint rules and user permissions are working against the actual threats.
Achieving this level of preparedness takes more than monitoring generic threat feeds. Knowing that ransomware is trending globally isn’t the same as knowing which threat groups are actively scanning your infrastructure. That’s the difference between a broader weather forecast and radar focused on your ZIP code.
Organizations that regularly validate controls against real-world, environment-specific threats gain three key advantages.
First, they catch problems early. Second, they build confidence across their team: when everyone knows what to expect and how to respond, fatigue gives way to clarity. Third, by knowing which threats matter and which actors are focused on them, they can prioritize the fundamental activities that otherwise get ignored.
You may not need to patch every CVE right now, only the ones being exploited by the threat actors targeting you. Which areas of your network are they actively reconnoitering? Those subnets probably need more focus on patching and remediation.
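In code, that prioritization is little more than an intersection of what is overdue with what is actually being used against you. A minimal sketch, assuming you already have CVE and reconnaissance data from your intelligence provider (all names and data below are illustrative):

```python
def prioritize_patches(overdue_cves, actor_cves, recon_subnets, asset_subnets):
    """Rank overdue CVEs: those used by actors targeting us come first,
    and assets in subnets under active reconnaissance are flagged."""
    urgent = sorted(set(overdue_cves) & set(actor_cves))   # actively exploited
    routine = sorted(set(overdue_cves) - set(actor_cves))  # patch on schedule
    hot_assets = {a: s for a, s in asset_subnets.items() if s in recon_subnets}
    return {"urgent": urgent, "routine": routine, "focus_assets": hot_assets}

# Illustrative data only.
plan = prioritize_patches(
    overdue_cves={"CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003"},
    actor_cves={"CVE-2024-0002"},
    recon_subnets={"10.0.2.0/24"},
    asset_subnets={"web-01": "10.0.2.0/24", "db-01": "10.0.9.0/24"},
)
```

The design choice here is simple triage: everything still gets patched eventually, but scarce remediation time goes first to the overlap between your backlog and your adversaries' toolkit.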
Security Doesn’t Need to Be Sexy, It Needs to Work

There’s a cultural bias in cybersecurity toward innovation and incident response. The new tool, the emergency patch and the major breach all get more attention than the daily habits that quietly prevent problems.
Real resilience depends on consistency. It means users can’t run untrusted PowerShell scripts. It means patches are applied on a prioritized schedule, not “when we get around to it.” It means phishing training isn’t just a checkbox, but a habit reinforced over time.
These basics aren’t glamorous, but they work. In an environment where attackers are looking for the easiest way in, doing the simplest things correctly is one of the most effective strategies a team can take.
Discipline Is the New Innovation

The cybersecurity landscape will continue to change. AI will keep evolving, adversaries will go on adapting, and the next headline breach is likely already in motion. The best defense isn’t more noise or more tech, but better discipline.
Security teams don’t need to do everything. They need to do the right things consistently. That starts with reestablishing routine discipline: patch, configure, test, rinse and repeat. When those fundamentals are strong, the rest can hold.
For CISOs, now is the time to ask a simple but powerful question: Are we doing the basics well, and can we prove it? Start by assessing your organization’s hygiene baseline. What patches are overdue? What controls haven’t been tested in months? Where are your people stretched too thin to execute the essentials? The answers won’t just highlight the risks, they’ll point toward the pathway to resilience.
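That baseline assessment can start very small, for instance as a script that flags overdue patches and controls that haven't been validated recently. A sketch under assumed data shapes — the field names, SLA thresholds and inventory format are hypothetical:

```python
from datetime import date

def hygiene_gaps(patches, controls, today, patch_sla_days=30, test_sla_days=90):
    """Return (overdue patch IDs, stale control names) against simple SLAs."""
    overdue = [p["id"] for p in patches
               if not p["applied"] and (today - p["released"]).days > patch_sla_days]
    stale = [c["name"] for c in controls
             if (today - c["last_tested"]).days > test_sla_days]
    return overdue, stale

# Illustrative inventory only.
today = date(2025, 7, 15)
patches = [
    {"id": "KB500001", "released": date(2025, 5, 1), "applied": False},
    {"id": "KB500002", "released": date(2025, 7, 1), "applied": False},
]
controls = [
    {"name": "powershell-restriction", "last_tested": date(2025, 1, 10)},
    {"name": "mfa-enforcement", "last_tested": date(2025, 6, 30)},
]
overdue, stale = hygiene_gaps(patches, controls, today)
```

Even a report this crude answers the CISO's question: it produces evidence, not assurances, that the basics are being done.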
This article was produced as part of TechRadarPro's Expert Insights channel, which features leading voices in the technology industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.
File-sharing platform WeTransfer spent a frantic day reassuring users that it has no intention of using any uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used for making or improving machine learning tools.
The offending language buried in the ToS said that using WeTransfer gave the company the right to use the data "for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy."
That part about machine learning and the general broad nature of the text seemed to suggest that WeTransfer could do whatever it wanted with your data, without any specific safeguards or clarifying qualifiers to alleviate suspicions.
Perhaps understandably, a lot of WeTransfer users, who include many creative professionals, were upset at what this seemed to imply. Many started posting their plans to switch away from WeTransfer to other services in the same vein. Others began warning that people should encrypt files or switch to old-school physical delivery methods.
One widely shared post read: "Time to stop using @WeTransfer who from 8th August have decided they'll own anything you transfer to power AI" (July 15, 2025).
WeTransfer noted the growing furor around the language and rushed to try and put out the fire. The company rewrote the section of the ToS and shared a blog explaining the confusion, promising repeatedly that no one's data would be used without their permission, especially for AI models.
"From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We’ve since updated the terms further to make them easier to understand," WeTransfer wrote in the blog. "We’ve also removed the mention of machine learning, as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension."
While still granting a standard license for improving WeTransfer, the new text omits references to machine learning, focusing instead on the familiar scope needed to run and improve the platform.
Clarified privacy

If this feels a little like deja vu, that's because something very similar happened about a year and a half ago with another file transfer platform, Dropbox. A change to the company's fine print implied that Dropbox was taking content uploaded by users in order to train AI models. Public outcry led to Dropbox apologizing for the confusion and fixing the offending boilerplate.
The fact that it happened again in such a similar fashion is interesting not because of the awkward legal language used by software companies, but because it implies a knee-jerk distrust in these companies to protect your information. Assuming the worst is the default approach when there's uncertainty, and the companies have to make an extra effort to ease those tensions.
The episode also highlights how sensitive creative professionals are to even the appearance of data misuse. In an era when tools like DALL·E, Midjourney and ChatGPT train on the work of artists, writers and musicians, the stakes are very real. Given the lawsuits and boycotts by artists over how their creations are used, not to mention broader suspicions of corporate data practices, the kind of reassurance WeTransfer offered is probably something tech companies will want to have in place early on, lest they face the misplaced wrath of their customers.