Full spoilers follow for The Lord of the Rings and The Rings of Power season 2's first four episodes.
The Rings of Power's showrunners have revealed some tantalizing season 2 details about the crafting of the rings for dwarves and men.
Speaking exclusively to TechRadar ahead of The Rings of Power season 2's launch on August 29, co-creators J.D. Payne and Patrick McKay opened up on how the newly created rings will differ from their elven counterparts. And, by the sounds of it, the dwarf lords and kings of men will struggle to deal with the impact that the dark magic-infused jewelry will have on them in the Prime Video series.
Before offering some hints at far darker events – born out of said rings' formation – to come in season 2, though, Payne and McKay disclosed whether, like the elven rings, they sought any real-world inspiration for those that'll be gifted to the dwarves and men. In short: they didn't.
Celebrimbor and Mirdania (pictured) help Sauron, disguised as Annatar, to craft more rings in season 2 (Image credit: Ben Rothstein/Prime Video)

"It was always about what tone each ring should evoke, rather than [take inspiration from] any real-life historical or mythological references," Payne replied. "For the elven rings in season one, we did take inspiration from Lalique jewelry, which has this art nouveau feeling that lent itself to the natural world, hence the vine-like and floral nature of the elven bands."
"But, for the dwarves, we wanted them to have some dwarven characteristics [with] this sort of solid, rocky, and mountainous design. Some of them have very specific detailing, such as King Durin's ring having the three peaks of Khazad-dûm, so you see the landscape reflected in the ring itself. As for the rings of men, we wanted them to be more angular and sharp, and have a very muted, unassuming color to them."
Adding to Payne's response, McKay said: "We also wanted them to feel progressively more evil with each group's creation. The dwarf rings have a slightly sinister and seductive quality to them, and then the rings for men even more so."
The elven rings weren't forged by Sauron's hand in season 1, hence their resistance to his overtures (Image credit: Amazon MGM Studios)

Sounds ominous – but why are the dwarves and men more vulnerable to Sauron's machinations via the titular rings? Despite being crafted by Celebrimbor and Sauron – at the time, the latter was masquerading as Halbrand – in The Rings of Power season 1 finale, the elven rings (and those who wear them) aren't susceptible to the influence exerted on them by Middle-earth's big bad. Indeed, while Sauron provided some insight into their creation and was later present for their forging, he didn't actually have a physical hand in making the elven rings. That's because he didn't touch any of the materials, including the mithril, that were used to craft them.
Unfortunately, the dwarves and men who'll eventually own one of these bands won't be so lucky. Indeed, Sauron, who spends much of one of the best Prime Video shows' second season wearing another disguise – Annatar, the so-called Lord of Gifts – was responsible for adding a new batch of mithril into the mix for the dwarven rings in season 2 episode 2. Before he drops said metal into the forge, though, he pauses for a second or two, imbuing it with some of his dark magic, which should make it easier for him to control those who wield the rings made for the seven dwarf lords.
As for the rings fashioned for the nine kings of men... well, without spoiling too much, they're created later in season 2. Considering that, in The Lord of the Rings, these nine individuals become the Nazgûl, Sauron's most trusted lieutenants, due to the malevolent influence that the rings have on them, Sauron/Annatar has an even more diabolical scheme hidden up his elven sleeves for their crafting, too.
There's more to come from me on The Rings of Power season 2 front, so keep your eyes trained on TechRadar over the next four weeks as its finale draws closer. In the meantime, read more of my exclusives with its cast, including why "there has to be a cost" with the dwarven rings' creation and usage and how Sauron and Celebrimbor's relationship goes to "some dark places" in season 2's latter half.
As more and more organizations embrace Artificial Intelligence (AI) and Machine Learning (ML) to optimize their operations and gain a competitive advantage, there’s growing attention on how best to keep this powerful technology secure. At the center of this is the data used to train ML models, which has a fundamental impact on how they behave and perform over time. As such, organizations need to pay close attention to what’s going into their models and be constantly vigilant for signs of anything untoward, such as data corruption.
Unfortunately, as the popularity of ML models has risen, so too has the risk of malicious backdoor attacks that see criminals use data poisoning techniques to feed ML models with compromised data, making them behave in unforeseen or harmful ways when triggered by specific commands. While such attacks can take a lot of time to execute (often requiring large amounts of poison data over many months), they can be incredibly damaging when successful. For this reason, it is something that organizations need to protect against, particularly at the foundational stage of any new ML model.
A good example of this threat landscape is the Sleepy Pickle technique. The Trail of Bits blog explains that this technique takes advantage of the pervasive and notoriously insecure Pickle file format used to package and distribute ML models. Sleepy Pickle goes beyond previous exploit techniques that target an organization's systems when they deploy ML models to instead surreptitiously compromise the ML model itself. Over time, this allows attackers to target the organization's end-users of the model, which can cause major security issues if successful.
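To illustrate why the Pickle format is so notoriously insecure for model distribution, here is a minimal, deliberately harmless sketch. Unpickling can invoke an arbitrary callable via an object's `__reduce__` hook, which is the underlying property that loading untrusted model files exposes (the payload class below is hypothetical, and the "attack" is just a `print` call):

```python
import pickle

class MaliciousPayload:
    """A harmless stand-in for a poisoned object inside a model file."""

    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object.
        # The returned (callable, args) pair is invoked at *load* time,
        # so a real attacker could return any callable they like.
        return (print, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(MaliciousPayload())

# Loading the blob does not restore an object; it executes print()
# and returns its result (None). A real payload could do anything.
result = pickle.loads(blob)
```

This is why guidance around Pickle-based model files is to load them only from trusted sources, or to prefer formats that store pure data rather than executable reconstruction instructions.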
The emergence of MLSecOps
To combat threats like these, a growing number of organizations have started to implement MLSecOps as part of their development cycles.
At its core, MLSecOps integrates security practices and considerations into the ML development and deployment process. This includes ensuring the privacy and security of data used to train and test models and protecting models already deployed from malicious attacks, along with the infrastructure they run on.
Some examples of MLSecOps activities include conducting threat modelling, implementing secure coding practices, performing security audits, incident response for ML systems and models, and ensuring transparency and explainability to prevent unintended bias in decision-making.
The core pillars of MLSecOps
What differentiates MLSecOps from other disciplines like DevOps is that it’s exclusively concerned with security issues within ML systems. With this in mind, there are five core pillars of MLSecOps, popularized by the MLSecOps community, which together form an effective risk framework:
Supply chain vulnerability
ML supply chain vulnerability can be defined as the potential for security breaches or attacks on the systems and components that make up the supply chain for ML technology. This can include issues with things like software/hardware components, communications networks, data storage and management. Unfortunately, all these vulnerabilities can be exploited by cybercriminals to access valuable information, steal sensitive data, and disrupt business operations. To mitigate these risks, organizations must implement robust security measures, which include continuously monitoring and updating their systems to stay ahead of emerging threats.
Governance, risk and compliance
Maintaining compliance with a wide range of laws and regulations like the General Data Protection Regulation (GDPR) has become an essential part of modern business, preventing far-reaching legal and financial consequences, as well as potential reputational damage. However, with the popularity of AI growing at such an exponential rate, the increasing reliance on ML models is making it harder and harder for businesses to keep track of data and ensure compliance is maintained.
MLSecOps can quickly identify altered code and components and situations where the underlying integrity and compliance of an AI framework may come into question. This helps organizations ensure compliance requirements are met, and the integrity of sensitive data is maintained.
Model provenance
Model provenance means tracking the handling of data and ML models in the pipeline. Record keeping should be secure, integrity-protected, and traceable. Access and version control of data, ML models, and pipeline parameters, logging, and monitoring are all crucial controls that MLSecOps can effectively assist with.
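As a minimal sketch of the record-keeping side of provenance, the snippet below hashes a model artifact and stores the digest in a provenance record that can later be checked for tampering. The field names and the unsigned, in-memory design are illustrative assumptions; real MLSecOps tooling would also sign records and append them to an immutable, access-controlled log:

```python
import hashlib
import time

def record_provenance(artifact_bytes: bytes, name: str, version: str) -> dict:
    """Create a tamper-evident provenance record for a model artifact."""
    return {
        "name": name,
        "version": version,
        # The digest binds the record to these exact bytes.
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "recorded_at": time.time(),
    }

def verify_artifact(artifact_bytes: bytes, record: dict) -> bool:
    """Integrity check: the artifact must hash to the recorded digest."""
    return hashlib.sha256(artifact_bytes).hexdigest() == record["sha256"]

# Hypothetical model bytes, standing in for a serialized model file.
model_blob = b"\x00fake-model-weights"
rec = record_provenance(model_blob, "churn-classifier", "1.2.0")

assert verify_artifact(model_blob, rec)            # untouched artifact passes
assert not verify_artifact(model_blob + b"x", rec) # tampering is detected
```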
Trusted AI
Trusted AI is a term used to describe AI systems that are designed to be fair, unbiased, and explainable. To achieve this, Trusted AI systems need to be transparent and have the ability to explain any decisions they make in a clear and concise way. If the decision-making process by an AI system can’t be understood, then it can’t be trusted, but by making it explainable, it becomes accountable and, therefore, trustworthy.
Adversarial ML
Defending against malicious attacks on ML models is crucial. However, as discussed above, these attacks can take many forms, which makes identifying and preventing them extremely challenging. The goal of adversarial ML is to develop techniques and strategies to defend against such attacks, improving the robustness and security of machine learning models and systems along the way.
To achieve this, researchers have developed techniques that can detect and mitigate attacks in real time. Some of the most common techniques include using generative models to create synthetic training data, incorporating adversarial examples in the training process, and developing robust classifiers that can handle noisy inputs.
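As a hedged illustration of one of those techniques, incorporating adversarial examples into training, here is a toy Fast Gradient Sign Method (FGSM) sketch against a two-feature logistic-regression model. The model weights and inputs are invented for demonstration, and a real pipeline would use an ML framework's autograd rather than a hand-derived gradient:

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """FGSM sketch: nudge input x by eps in the direction that
    increases the log loss of a logistic-regression model (w, b).
    The perturbed point can then be added to the training set
    to make the model more robust."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    p = 1.0 / (1.0 + math.exp(-z))            # sigmoid prediction
    grad = [(p - y) * wi for wi in w]         # d(log loss)/dx
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy model: the prediction depends only on the first feature.
w, b = [2.0, 0.0], 0.0
x = [0.3, 0.5]                                # correctly scores positive
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.5)
# The first feature is pushed from 0.3 to -0.2, so the perturbed
# point now scores negative and would be misclassified.
```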
In a bid to quickly capitalize on the benefits offered by AI and ML, too many organizations are putting their data security at risk by not focusing on the elevated cyber threats that come with them. MLSecOps offers a powerful framework that can help ensure the right level of protection is in place while developers and software engineers become more accustomed to these emerging technologies and their associated risks. It may not be a formal requirement for some time yet, but it will be invaluable over the next few years, making it well worth the investment for organizations that take data security seriously.
We've featured the best online cybersecurity course.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
The explosive popularity of generative artificial intelligence is disrupting the business world as enterprises race to apply the transformative power of GenAI chatbots to supercharge their business processes.
Yet as more employees adopt new generative AI tools like ChatGPT and Copilot in their daily roles, they’re usually doing so without a second thought to the larger security implications. IT teams are challenged to monitor each new software instance with limited visibility among sprawling networks of SaaS tools. Many AI projects that employees spin up go undetected by IT, exposing their organizations to shadow IT.
The concept of shadow IT involves the use of IT systems, devices, software, and services without explicit approval from the IT department. Most shadow IT is not introduced into an organization with malicious intent. Workers are burdened with a growing list of responsibilities in an ever-accelerating business market, so many of them turn to shadow IT to get their jobs done. Shadow IT is often easier to use than internal alternatives, has less red tape, or is a better fit for their style of work.
However, many IT teams are not prepared for the risks that these programs pose to network management and data security. Consider that 90% of employees who use unsecure practices do so despite knowing that their actions will increase risks for their organizations, according to Gartner. And fully 70% of employees who use ChatGPT hide that use from their employers, according to a survey by Fishbowl.
Risky climate
In addition, 9% of workers have admitted to pasting their company data into ChatGPT, and an average company leaks confidential information to the chatbot hundreds of times each week, according to Cyberhaven. ChatGPT then incorporates all that data into its public knowledge base for sharing with other users.
In this risky climate, budgets for generative AI projects are expected to almost triple between 2023 and 2025, rising from an average of 1.5% of IT budgets to 4.3% within two years, according to survey data from Glean and ISG. Larger companies will allocate still more for AI, with 26% of firms over $5 billion in revenue budgeting more than 10% toward generative AI by 2025. And more than one-third of survey respondents (34%) said they were willing to implement generative AI quickly despite the risks of negative outcomes.
SaaS shadow IT is probably one of the biggest hidden risk factors that IT leaders face today. Most people who utilize shadow IT tend to think that they’re just using a productivity tool. However, organizations have found over and over again that there is a high risk associated with shadow IT adoption.
Detecting Shadow IT and protecting data security
Every cyber program is built around defending data, but if that data exists within shadow IT tools, then it remains unprotected. That’s why it is so important to discover what shadow IT exists in your environment, build a plan for when it happens – not if – and foster a culture that still promotes employee problem-solving while adhering to IT policy.
IT teams can apply several important considerations and precautions to maintain control over AI tools and protect their organizations from potential risks. The most effective way to detect shadow IT is on-device, at the source: the user. Other forms of detection, such as network monitoring alone, can miss critical information.
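At its simplest, on-device detection amounts to comparing what a user actually runs against a sanctioned-software allowlist and flagging the remainder for review. The sketch below is illustrative only: the tool names are hypothetical, and a real endpoint agent would gather its inventory from installed applications, running processes, and browser extensions rather than a hard-coded list:

```python
# Sanctioned tools approved by the IT department (hypothetical names).
SANCTIONED = {"outlook", "excel", "slack", "chrome"}

def flag_shadow_it(observed):
    """Return the tools a user runs that IT has not sanctioned,
    sorted for stable reporting."""
    return sorted(set(observed) - SANCTIONED)

# e.g. what an endpoint agent might report for one user's device
seen = ["chrome", "slack", "chatgpt-desktop", "notion"]
print(flag_shadow_it(seen))  # ['chatgpt-desktop', 'notion']
```

The flagged list then feeds the inventory-and-survey step described below: it identifies which users to talk to, not which users to punish.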
After developing an inventory of shadow IT, organizations can compare the anomalies to sanctioned IT tools, survey the anomalous users, and use this information to better understand work trends, problems, and solutions. It is important to approach shadow IT users with an open mind versus shutting down the adoption. There are business problems being solved with the use of these tools, and IT teams need to understand what that need is and work collaboratively with users to ensure they have the tools they need, while keeping data secure.
Remember, shadow IT tools are only “shadowy” until they’re not. Once discovered and brought out of the shadows, the next step is to move these IT tools through procurement and internal processes for sanctioned purchases to ensure visibility and compliance.
All new AI tools should be properly managed, as shadow IT within an organization can introduce serious compliance, security, and business risks. However, recognize that shadow IT users are really just “intrapreneurs” seeking new solutions to existing problems. By trying to understand the reasons behind their adoption of shadow IT, organizations can identify opportunities to solve business problems they may not currently be aware of.
Of course, you may find that some of these shadow IT tools don’t fit within the proper IT framework of rigorous controls. But once you’ve discovered the underlying user problems being solved along the way, the users and central IT can develop plans to solve these issues in a more formal and productive way.
The Internet has become the new backbone of corporate operations. Delivery services, banking, VPN connections from anywhere and everywhere - it’s all powered by the Internet as the new delivery mechanism for customer and employee applications and services.
Guaranteeing an always-on experience for those digital services is essential to any business. While predictive analytics and AI-powered intelligence allow us to build forecasting models that help optimize performance and minimize downtime, outages on the Internet can manifest in an infinite number of ways. But with Internet outages occurring on external networks and within third-party providers that sit outside the owned IT perimeter, how do you attempt to identify these outages, let alone predict and mitigate them?
No longer a finite number of events
An outage will present to the end user in a standard set of ways, including slower load times or a complete inability to access an application or service. Often, there’s commonality in the underlying pattern - or the chain of events - that led to that outage occurring.
In isolation, each pattern is detectable and observable. Most IT teams will conduct a post-incident analysis to map out the pattern or sequence of events that led to an outage. This helps understand the chain reaction of events in detail such that if the same pattern was to repeat in the future, it can be detected and an intervention made before it ends in a disruption that impacts users.
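The pattern matching that comes out of a post-incident analysis can be sketched very simply: treat the known failure pattern as an ordered sequence of events and check whether it appears, in order, within the live event stream (with unrelated events interleaved). The event names below are invented for illustration; a real system would match against telemetry, not strings:

```python
def pattern_precursor(events, pattern):
    """Return True if every step of `pattern` occurs in `events`
    in order, possibly with other events interleaved.
    `x in iterator` consumes the iterator up to the match, which
    is what enforces the ordering."""
    it = iter(events)
    return all(step in it for step in pattern)

# A pattern mapped out in a previous incident's post-mortem:
OUTAGE_PATTERN = ["cache_miss_spike", "db_conn_saturation", "latency_breach"]

live = ["deploy", "cache_miss_spike", "gc_pause",
        "db_conn_saturation", "latency_breach"]

assert pattern_precursor(live, OUTAGE_PATTERN)           # intervene now
assert not pattern_precursor(["deploy"], OUTAGE_PATTERN)  # no match yet
```

In practice, an operations team would raise an alert as soon as a prefix of a known pattern is observed, buying time to intervene before the final, user-impacting step occurs.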
The challenge facing operations teams today is that things are no longer this simple, and outages are no longer based on a finite number of isolated events.
The multi-layered and multi-interdependent outage
Networks and applications have grown in complexity, and this has influenced the characteristics of outages. In particular, the underlying patterns of system behavior that cause outages aren’t as repetitively predictable as they once were. Outage causes today are a lot more intricate and harder to diagnose. For instance, a system or application no longer follows a linear client-network-server architecture; instead, it operates as a “mesh” of connectivity links, IT infrastructure, and software components. The challenge for ops teams is that a mesh architecture dramatically increases the number of interconnected components, and therefore the permutations of conditions that can cause an outage. Compared to a more linear architecture, both the connections between components in the mesh and the number of permutations or sequences that can form an outage pattern are exponentially higher.
In addition, the number of components in the mesh is also ever-changing. As more features are added to an application, more components or third-party services are incorporated into the application’s end-to-end delivery chain - and into the mesh that supports it. The complexity of the application grows, and so does the range of potential causes that can bring part or all of the application down. And it’s not just the direct dependencies that are a concern; third-party infrastructure services and components come with their own interdependencies, with systems and services that are often several steps removed from view.
Is an unpredictable pattern even a pattern?
These outage patterns don’t manifest in predictable ways.
To have the best possible chance of accurately pattern-matching in this scenario, organizations need a reliable way to ‘read between the lines’ - to understand the intricate interplay of events and patterns being observed, and how that contextually relates to the performance of their specific application or infrastructure.
That level of contextual insight across any and all domains, even the ones that sit outside of enterprise visibility and control, demands a new approach to outage detection and mitigation, and to the level of data and contextual insight that IT leaders now need to care about.
When data-driven insight goes beyond enterprise scale
When it comes to seeing, and also predicting, outages, it is access to high-fidelity data across all the environments that matter - including cloud and the Internet - that will ultimately allow us to identify and navigate this new world of patterns on patterns on patterns, surfacing where a performance problem exists, why, and whether it matters. That demands visibility across the end-to-end service delivery chain to see, correlate, and triangulate all the patterns that matter to assuring always-on digital experiences.
So while outage patterns may keep mutating in perpetuity, new technologies are also accelerating beyond human scale. Done right, these technologies will allow us to see outages within and beyond our perimeter, wherever they may occur, as well as power a new level of intelligence to generate the automated insight required to forecast all the different patterns, and the recommended action to avoid them.
We've featured the best cloud backup.
US car rental giant Avis has confirmed it was hit by a data breach early last month affecting customer data.
In a letter to affected customers, Avis confirmed an “unauthorized third party” broke into one of its business applications, enabling the threat actor to obtain sensitive and personally identifiable information.
The attack, which is believed to have taken place between August 3 and August 6, was discovered on August 5. Avis says it has worked with the relevant authorities and cybersecurity experts; however, the full extent of the attack remains unconfirmed.
Avis confirms customer data breach
Following the discovery, further research into the attack revealed on August 14 that some personally identifiable information had been accessed. Redacted information in the letter makes it impossible to know precisely which data was obtained besides customer names.
TechRadar Pro has asked Avis to confirm this, together with the estimated number of affected customers and their regions, but the company did not immediately respond.
In its warning to customers, however, the company confirmed: “We have taken steps to deploy and implement additional safeguards onto our systems, and are actively reviewing our security monitoring and controls to enhance and fortify the same.”
Consequently, Avis will offer affected customers a one-year membership for credit monitoring via Equifax, with codes redeemable until December 31.
Avis says those affected by the breach should review and monitor their account statements and credit history for unauthorized activity.
Given that the attacker may have gained access to email addresses, phone numbers, and home addresses, Avis customers – along with anyone with an online presence – should be wary of phishing attempts, scams, and identity theft.
Other basic internet hygiene steps, like regularly updating passwords with secure alternatives and enabling two-factor authentication, are also advisable.
More from TechRadar Pro

It’s been almost two years since Elon Musk’s acquisition of Twitter - now X - after which the Tesla CEO laid off around 80% of staff.
These cuts included the ‘trust and safety’ team being slashed from 4,026 to 2,849, with full-time moderators halved from 107 to just 51.
Now, it seems X is back to hiring, as two dozen jobs have been listed on its careers site - for both cybersecurity and safety roles.
Band-aid on a bullet hole
The listings mention a number of locations for the new roles, with major US tech hubs such as New York City and Palo Alto, California included alongside X's international offices in Manila in the Philippines and Delhi in India.
X announced in January 2024 it would be creating a new Trust and Safety center at its office in Austin, Texas, where a hundred full-time content moderators would be based, and these roles also appear to be included.
Of course, there's still a very significant deficit from 2022’s staffing levels, but the move does point to a pretty significant symbolic defeat for Musk, who bought the platform in 2022 to ‘protect free speech’, but instead has undoubtedly ushered in an era of virtually unrestricted content.
This predictably landed X in hot water, with the platform recently shut down in Brazil after refusing to ban accounts spreading disinformation and hate messages - although Musk insisted this was political censorship. In a similar vein, almost half of all advertisers have left the site - and X lost nearly half a billion dollars in the first quarter of 2023.
Whilst the additional moderation staff will likely be a very welcome move, it’s difficult to imagine that just a few more hires will mitigate the ‘free for all’ philosophy introduced by Musk. On the security side, X is looking for threat intelligence specialists and security engineers to help secure the site.
In recent times, X has struggled with privacy and security concerns as well as bugs and crashes - worsened by the loss of staff. The quality assurance testing (or lack thereof) seems to have led to multiple issues, including external links breaking.
In an uncertain time for the platform, many users will be looking to Musk for some bold changes in order to make the site more secure - and this hiring spree could be a step in the right direction.
Via TechCrunch
More from TechRadar Pro
You walk into a room and turn on the light. Do you ever stop to think about what it takes to power the lightbulb as you flip that switch? Or how our outlets are always ready to charge our smartphones? No — we just expect these things to happen, and happen safely, without a second thought. A century ago, after years of innovation and technological improvements, the miniature circuit breaker was introduced to safely bring electrical power into homes for the first time. This year, Schneider Electric celebrates 100 years of these unsung heroes of home energy safety. Here’s why you should too.
Miniature circuit breakers reside right in the heart of the home, in the electrical panel where power is received and supplied across the home to the various outlets, switches, and devices that we rely on every day. From our alarm clocks to our electric vehicles, to the very smartphone in our pocket that connects us to the world. However, a study by Schneider Electric of 9,000 consumers in seven countries revealed that 41% of Gen Z are unaware of what this crucial piece of technology in their homes is even for. Furthermore, 25% of the Gen Z population surveyed don’t even know where the electrical panel is in their home.
How did these circuit breakers end up in the electrical panel, and how did they come to resemble the boxes we all rely on in our homes today?
A history of progress in electrical safety
Electrical innovation began in the early 19th century. After its founding in 1836, Schneider Electric spent 150 years pioneering advancements in electrical safety. Entering homes over the past century, the circuit breaker has enabled us all to live safely and happily without even realizing its protection. It safely cut the power when water spilled on the multi-socket extension cord, and stopped it from overheating when too many loads were plugged into the same socket.
And as technology evolved, so did the protection these devices provided. The introduction of surge protection and residual current devices in the '70s and '80s made it safe to keep watching TV during a big storm. The electrical protection evolution continued by making your relaxing bath safe while your partner used their hair dryer, and, crucially, by protecting curious toddlers playing with sockets from fatal electrocution. Into the 2000s, arc-fault detection was introduced to provide additional fire protection by cutting power when it recognizes current arcing through the air between two conductors.
Over the years, engineers at Schneider Electric have worked to enhance home electrical safety, so you don’t have to think about it. Perhaps the people who don’t know the role of their electrical panel are the best testament to these engineers’ efforts!
Mini circuit breakers have led to 100 years of innovations in home energy distribution. (Image credit: Schneider Electric)

Celebrating the past while innovating for the future
The lack of awareness around electrical protection has been the norm for decades, but home power needs are changing. This new era, sometimes referred to as Electricity 4.0, represents a more sustainable, smarter approach to energy distribution, designed for advanced technological and critical challenges, including power resiliency and the climate crisis. Understanding how our home electrical distribution system functions is a critical first step toward a greener, more sustainable power grid.
Homes are evolving with greater electrification in response to this new energy landscape. Solar panels, EV chargers, and heat pumps are now becoming more common in homes, meaning the role of that electrical panel has never been more important. However, there is a critical point that less than half of the consumers surveyed realize: the more you electrify your home with solar, EVs, and heat pumps, the greater the strain on your electrical panel, increasing the risk of it overloading and potentially putting your family, home, and belongings in danger.
This sea change comes as more of Gen Y and Gen Z are becoming homeowners. This forecasts a huge demand for new energy technology and therefore a widespread need for advanced electrical panels to keep those homes safe. Most homes in the United States were built before the 1980s, and only a small percentage of those homes have been upgraded to meet today’s safety standards. Improving electrification can help decarbonize your home, but the strain can be dangerous with outdated panels.
“The electrification of our homes is undeniably a positive step towards a more sustainable energy future. This shift, powered by renewables, enables us to tackle the 20% of global emissions produced by homes. However, as exciting as the transition to a new energy landscape is, it’s vital that safety sits at its core,” says Michael Lotfy Gierges, Executive Vice President of Global Home & Distribution Division at Schneider Electric.
Modern problems require modern solutions
Upgrading your home’s electrical panel can seem daunting, but it will become necessary as you increasingly electrify your home. To keep your home safe, Schneider Electric has electrical panel solutions that can bring your home up to modern standards with high-functionality hardware and state-of-the-art software. For the European market, the Resi9 Energy Center offers enhanced security, safety, and sustainability for those looking to move from consumer to prosumer (someone who both uses and produces electricity at home, with solar panels for example), or for someone adding larger loads, such as an EV charger.
For those in the US looking to modernize their homes, Schneider Electric will soon launch its first-of-its-kind Schneider Pulse smart panel. The Schneider Pulse panel safely integrates directly with solar, batteries, and inverters from a range of manufacturers, including the Schneider Home ecosystem, which is set to launch shortly.
These advances in home electrical distribution represent another successful step in electrical engineering to ensure that homes meet the electrical needs of today without sacrificing what really matters—your safety.
“Utilizing our 188 years of experience – 150 of which have been dedicated to the electrical market – our innovative home smart panel solutions offer a true evolution of the electrical panel. We are proud to introduce solutions that support the electrification of today’s homes while keeping safety, resilience, efficiency, and sustainability at the forefront,” says Gierges.
Schneider’s purpose is to empower people to be more sustainable, turning ambitions into action. Whether you’re adding solar panels and electric vehicle charging stations to your home or making the switch to more sustainable heat pumps and induction cooktops, Schneider Electric’s ecosystem of global offers has been developed to advance smarter and safer home electrical distribution for everyone.
Canon has launched the EOS C80, a powerful follow-up to the EOS C70, its four-year-old RF-mount Super 35 4K cinema camera that's popular with indie filmmakers.
Identical in size to the EOS C70, the new EOS C80 marks a significant upgrade, packing the same (larger) full-frame backside-illuminated stacked sensor with 6K video capabilities as the recently announced EOS C400. The EOS C80 also inherits the world-first triple base ISO of the EOS C400.
Most of the leading cinema cameras, like the Blackmagic Cinema Camera 6K and Panasonic Lumix S5 II, offer a dual base ISO – two ISO settings that deliver the best possible signal-to-noise ratio, one for good light and one for low light. However, a triple base ISO is even more versatile; the EOS C80's ISO 800, ISO 3200 and ISO 12,800 settings will each give you the cleanest possible image the camera's sensor is able to produce – a true cinema camera innovation.
The EOS C400 and the EOS C80 are the only two cameras offering triple base ISO. (Image credit: Canon)
The EOS C80 uses Canon's RF-mount, but there is a PL to RF-mount adaptor to use cinema glass too. (Image credit: Canon)
It may be a dedicated video camera, but the EOS C80 has the same RF-mount as Canon's photography-focused mirrorless cameras, a built-in grip and a 3.5-inch flip-out touchscreen, making it a more familiar and accessible package for those who've used a Canon mirrorless camera, like the EOS R5 Mark II, than Canon's modular cinema cameras such as the EOS C400.
The new camera's sensor, as used in the EOS C400, is capable of capturing 16 stops of dynamic range and 12-bit RAW video internally, with Canon's Cinema Raw Light color profile to keep file sizes down, at resolutions and frame rates of up to 6K 30fps, 4K 120fps and 2K 180fps.
Canon's dual-pixel CMOS autofocus promises reliable focusing with autofocus points covering 100% of the frame, but the 5-axis stabilization is the less effective electronic type, which means ideally you'll want some method of stabilizing the camera for handheld use.
Where the video-dedicated EOS C80 differs from Canon's video-focused hybrid cameras like the EOS R5C is in its swathe of video tools, such as built-in ND filters (2, 4, 6, 8 and 10 stops) and robust connectivity that includes 12G SDI (pro broadcasts demand secure SDI over HDMI) and mini-XLR terminals for attaching accessories such as mics and monitors. It can also be adapted to Canon EF and PL-mount lenses, complete with metadata – the latter opening up legendary cine optics such as those by Cooke.
We can see the EOS C80 becoming Canon's most popular cinema camera with indie filmmakers especially, or serving as a more compact broadcast tool. It's priced at $5,499 / £5,339 for the body only (we'll update this article with Australian pricing when we get it), and it's available in November 2024 – preorders are open now.
Bending Spoons is set to lay off a staggering 75% of the WeTransfer headcount after acquiring the file-sharing platform just a few weeks ago.
The company, which is believed to have around 350 workers, could lose around 260 members of staff in one of the biggest proportional rounds of layoffs this year.
News of the job cuts comes a little over a month after the Italian tech firm acquired WeTransfer, marking its fifth tech merger of 2024.
WeTransfer prepared for major layoffs
Bending Spoons has a proven track record of heavily reducing workforces following recent acquisitions, along with making other consumer-facing changes to subscriptions. When TechRadar Pro probed Bending Spoons about the potential implications of the merger, including layoffs, the company declined to comment.
It has since come to light via a Reuters report that company CEO Luca Ferrari has confirmed layoffs to the tune of 75%, commenting: “I won't be more specific at this stage because the layoff hasn't been fully defined yet.”
At the time of the acquisition, Ferrari stated: “We’re enthusiastic about becoming [WeTransfer’s] new owner, and we feel a strong sense of responsibility to help the brand and the business thrive for many years to come.”
In a statement to TechCrunch, Ferrari added: “In this particular case, the vision we developed is of a smaller, more sharply focused WeTransfer organization, which we believe will be better-positioned to serve WeTransfer’s success with a long-term view.”
This isn’t the first time that Bending Spoons has enacted drastic changes for a company that it has bought out. In July 2024, under its new Italian leadership, it was confirmed that “most” US- and Chile-based Evernote workers would be made redundant as the note-taking platform packed its bags and headed to Europe to be closer to the Bending Spoons team.
We are mere hours away from Apple’s big “Glowtime” event, where the company is expected to announce its new iPhone 16 range – and Mac owners could be in for a treat as well, as it’s widely expected that Apple will reveal the final release of macOS 15 Sequoia alongside the expected iOS, iPadOS, and VisionOS releases.
While the macOS 15 Sequoia public beta has been available to download and test for a while now, the fact that Apple could announce a launch date at its Glowtime event is surprising for a number of reasons.
For a start, the event is expected to focus primarily on the new iPhones, arguably Apple’s most important products, so I wouldn’t usually think that Apple would want to spend much time (if any) talking about operating systems for non-iPhone devices.
There are rumors suggesting that Apple could announce new Macs and MacBooks very soon, so you’d imagine that Apple would wait for a dedicated Mac event to announce the new version of macOS.
However, MacRumors claims to have seen Apple documentation indicating that macOS 15 Sequoia will get a full public release by the middle of September, and if this proves correct, the publication notes that this would be the earliest macOS release since Mac OS X 10.8 Mountain Lion, which launched at the end of July back in 2012.
In recent years, Apple usually has an early September iPhone launch event, with the public release of iOS (the operating system iPhones use) happening shortly before the new iPhones go on sale.
Apple then usually releases the new version of macOS later in the year. The report by MacRumors, then, suggests quite a big break with tradition – and as a MacBook user, this excites and worries me in equal measure.
(Image credit: Apple)
Why the early release of macOS?
There are a few positive reasons why Apple might release macOS 15 Sequoia early. It could mean that the beta testing stages have concluded with no major issues found, and Apple simply doesn’t feel the need to hold off releasing the final version of macOS 15. If this is the case, we could get the most stable macOS release in years, without the teething issues, bugs and other problems that have cropped up with previous releases.
It could also suggest that new Macs and MacBooks, such as the rumored M4 MacBook Pro, are indeed coming very soon, and Apple is keen to release macOS 15 Sequoia ahead of the new Macs going on sale.
It’s also widely expected that Apple will dedicate a big chunk of today’s Glowtime event to showing off its new AI tool – Apple Intelligence. The iPhone 16 range (and iOS) will likely be announced with lots of Apple Intelligence integration – and if macOS 15 Sequoia also comes with lots of Apple Intelligence goodies, it would make sense for Apple to announce when we can get it on our Macs during this part of the presentation.
However, an announcement during the iPhone event and an early release might not be all good news. It could suggest that macOS 15 Sequoia is only a minor update over macOS Sonoma, the current version.
The Glowtime event will likely be packed with new iPhone announcements, as well as possibly new Apple Watch and AirPods models, so there may not be time to give macOS 15 Sequoia the announcement it deserves. If the launch reveal is rushed and unceremoniously dumped between more headline-grabbing announcements like a new Apple Watch, it could be a sign that Apple isn’t as excited about Macs as it once was. If that’s the case, it would be a real shame indeed.
Either way, we may not have long to wait, so make sure you keep an eye on our Glowtime live blog for all the breaking news as it happens.
Apple’s annual September launch event is happening today (follow along with our iPhone 16 event live blog for the latest updates), and while we already had an inkling that certain Apple Intelligence features might be delayed until after the initial rollout of iOS 18, we’re now hearing that two of the most exciting new AI tools could be held up until December.
According to Bloomberg’s Mark Gurman, Apple now plans to launch Image Playground and Genmoji in iOS 18.2, which is likely to begin rolling out at the end of the year. Image Playground will allow users to create generative images in various iPhone apps like Messages or Notes, while Genmoji is essentially a custom emoji generator.
Apple Intelligence will almost certainly be the key selling point of today’s “It’s Glowtime” Apple event, but the first Apple Intelligence features aren’t expected to become available on compatible iPhones until the release of iOS 18.1. At that point – likely October – iPhone users will receive access to a handful of AI tools including Clean Up and writing assistance, but, as Gurman reports, Image Playground and Genmoji look set to debut at a later date.
As disappointing as this latest tip will be for prospective iPhone 16 owners, certain Apple Intelligence features are rumored to be delayed until 2025, which makes December seem like a comparatively short time to wait. Those delayed features include the full version of Apple’s new and improved Siri – it may launch in 2024, but it reportedly won’t be fully integrated into compatible iPhones until 2025 – and new features in the Mail app.
The rollout of Apple Intelligence features will be staggered (Image credit: Apple)
As for which iPhones will be compatible with Apple Intelligence, Apple has confirmed that the iPhone 15 Pro and iPhone 15 Pro Max are the only currently available iPhones that possess enough processing power to handle Apple Intelligence tasks. The entire iPhone 16 line, however, will almost certainly be compatible with Apple’s new AI toolset, since the iPhone 16 and iPhone 16 Plus are reportedly set for an 8GB RAM upgrade (incidentally, here’s why RAM could be the key to Apple Intelligence).
Despite the rumored delays to all of the aforementioned Apple Intelligence features, we’re hoping that Apple still shows some of them off at today’s Apple event. Perhaps we’ll even get to take the likes of Image Playground and Genmoji for a spin during a post-launch hands-on session. If so, we’ll of course be sharing our first impressions.
To tune into the iPhone 16 launch event yourself, here’s how to watch the iPhone 16 launch live. Alternatively, check out our iPhone 16 event live blog for the latest on-the-ground updates from Cupertino.
Did you know Star Trek Day is celebrated annually on September 8? If you didn't, you do now – and even though we're a day late, there's still plenty of time to celebrate thanks to Paramount Plus. In fact, one of the best streaming services is showing episodes for free.
The hit sci-fi series is actually one of the reasons we gave Paramount Plus a big thumbs up when we compared it against other streaming giants such as Netflix and Prime Video: it has a huge offering of Star Trek shows and movies. Not sure where to start? Check out our guide to how to watch Star Trek in order.
Here's everything you need to know about where you can watch the first episodes from hit Star Trek shows for free online, and which of the best Paramount Plus shows are available.
How to watch Star Trek for free
"Celebrate #StarTrekDay by streaming these episodes and more for FREE! Find them on the @ParamountPlus free content hub starting today. https://t.co/uSyJ9MwKgq pic.twitter.com/c33PqN78PY" – September 7, 2024
The free Star Trek episodes are available in the free content hub on the Paramount Plus app and via Paramount Plus' official YouTube channel, but please note these are only available in the US. Trekkies watching elsewhere in the world can still stream the episodes with a regular Paramount Plus subscription.
For those watching in the US, the beloved episodes will be free between September 7 and 13, so you've got plenty of time to enjoy some classics before they disappear.
The complete line-up of free episodes is as follows:
AMD’s new graphics cards are coming in hot, but the latest leak is sure to be a mixed bag for PC gamers - heavily depending on how much money you like to pour into your custom PC builds.
With Nvidia still dominating the high-end of the battle for the best graphics cards, AMD senior vice president Jack Huynh recently revealed in an interview with Tom’s Hardware that Team Red actually plans to dial back its aspirations for flagship gaming GPUs, saying that “my number one priority right now is to build scale” - meaning that the focus will be more on the mid-range and affordable market space when it comes to gaming graphics cards.
Hot on the heels of Huynh’s statements, new leaks from Chinese tech site Benchlife have indicated that we’re going to see at least four new GPUs packing AMD’s new RDNA 4 graphics architecture, split into two ‘Navi 44’ and two ‘Navi 48’ cards.
For the uninitiated, ‘Navi’ is the codename for subdivisions of AMD’s GPU microarchitecture; for example, last year’s Radeon RX 7600 XT was built on the Navi 33 architecture. These new Navi 44 and 48 cards will likely be sold under the Radeon RX 8000 family branding, marking AMD’s first step into an entirely new generation of desktop GPUs.
Although actual information about these four cards is fairly light right now, we do know that they will all feature GDDR6 VRAM, and based on Huynh’s new objectives for AMD’s graphics department, it would be reasonable to assume that none of them will be going directly up against the rumored RTX 5090 from Nvidia. If I had to guess, I might say that it’s likely we’re going to get an RX 8700 XT and XTX along with a more affordable RX 8600 XT and XTX, as some previous leaks have alluded to.
Opinion: AMD refocusing is a smart move right now
While some fans of Team Red will no doubt bemoan the lack of a flagship GPU in the upcoming RDNA 4 generation (which would probably be the Radeon RX 8900 XTX), it’s fair to say that this could be a savvy move on AMD’s part. Nvidia has absolutely dominated the high-end space with its current RTX 4090 - to the point where it actually canceled a possible RTX 4090 Ti because, let’s be honest, it would’ve been overkill.
Huynh rightly analyses that AMD needs to build its market share in the PC graphics space, and the right way to do that is to hit the midrange and budget arenas hard. Nvidia isn’t lacking in that area - the RTX 4060 is a fantastic GPU - but it’s a fight that AMD is far more likely to win than going head-to-head with Team Green at the top end of the slate.
AMD already has certain areas of the graphics market effectively locked down: the burgeoning popularity of gaming handhelds has no doubt been a boon for the company, since it makes the processors used in handheld PCs like the excellent Asus ROG Ally X, Lenovo Legion Go, and of course Valve’s Steam Deck. AMD also works with Sony and Microsoft to produce custom chips for the current generation of home consoles.
But when it comes to desktop and laptop GPUs, Team Red has been playing second fiddle to Nvidia for too long. With its rival going all in on AI right now, AMD needs to situate itself as the real choice for PC gamers everywhere - and I’m hopeful that this new strategy will prove successful.
Here’s a cool portable SSD I checked out at this year’s IFA - the Lexar Professional Go with Hub. Aimed at mobile content creators, videographers, and filmmakers, this external storage device effectively lets users increase a phone’s storage capacity on the fly.
It’s small, versatile, and while primarily targeting iPhone users, the slim SSD is compatible with any iPhone, iPad, or Android device with OTG and USB-C. Plugged into a computer, it works with both macOS and Windows.
Beyond quickly adding additional storage, the Professional Go’s hub shows some clever design choices that make it easier to hook up peripherals when working in the field (scroll through the gallery below for close-ups).
Compact mobile storage
OK, this isn’t the first SSD for a phone – it’s not even Lexar’s first release in the arena. But it is one of the best designs I’ve seen. The device started life on Kickstarter, where backers pledged $968,607, and it’s not hard to see why it smashed its $10,000 goal.
Phone storage fills up so fast, and some phones no longer have memory card slots, as users are pushed toward cloud storage subscriptions. For those working with mobile media – large files, and slow-to-no signal out in the field – it’s a regular modern tragedy. That’s the problem Lexar’s Professional Go with Hub is seeking to solve.
Strikingly compact up close, the Professional Go alone measures 1.71 x 0.98 x 0.32 inches, while attaching the hub adds an inch to the length. In hand, it’s nicely lightweight, too, just as it should be. While it might not be a super-rugged hard drive, it does boast an IP65 rating, making it water- and dust-resistant.
The storage itself comes in 1TB and 2TB capacities, with Lexar quoting read/write speeds of up to 1050MB/s and 1000MB/s. The drive slots into the phone’s charging port and tucks neatly around the back. On an iPhone 15 Pro or iPhone 15 Pro Max, the SSD sits flush, as it’s specifically designed for these phones. Testing it on a Motorola Android phone, I found that, while not quite flush, it fitted nicely without leaving too much of a noticeable gap. In hand, it feels about as unobtrusive as an alien slip of plastic and metal on the back of a phone can be.
For professional video content, the hub is the star of the show. Connecting to the storage component via one of its four USB-C ports, this little square lets users clip on lights and mics for on-the-go shooting, and supports up to 30W power delivery. With the connector shaped like a hook, it also allows for more freedom when using rigs and shouldn’t interfere with gimbals.
On the face of it, there’s a lot to like about the Professional Go with Hub – whether for film-making or just giving the phone some much-needed storage. In the box, users get the SSD with rubber cover, Hub, two USB-C adapters, USB-C Cold Shoe adapter, wrist strap, and pouch. We’ll shortly be publishing a full review to see how close it comes to those 10Gbps transfer speeds.
Samsung’s top phones are already expensive, but its 2025 handsets could be even pricier if they use the Snapdragon 8 Gen 4 processor, as at least some of them are expected to.
Qualcomm, which makes Snapdragon chipsets, has itself previously said to expect the Snapdragon 8 Gen 4 to cost more than the Gen 3, and now we have an idea of how much more, with reputable leaker @UniverseIce claiming (via NotebookCheck) that it will cost $240 (around £185 / AU$360).
That’s apparently a 20.68% price increase compared to the Snapdragon 8 Gen 3, amounting to about $40 (roughly £30 / AU$60) more.
And there’s no reason to think that such an increase wouldn’t be passed on to consumers, which could mean higher prices for not just the Samsung Galaxy S25 series, but also the Samsung Galaxy Z Fold 7, the Samsung Galaxy Z Flip 7, and numerous other Android phones, including devices from OnePlus, Sony, Xiaomi, and more.
Lots of factors affect the priceStill, there are a few things to bear in mind here. For a start, this news isn’t confirmed just yet, so we’d take it with a pinch of salt. Also, a lot of factors affect the pricing of a phone, so while this is worrying news, it’s possible that other factors could help to keep the price down.
Plus, brands like Samsung that sell enormous numbers of phones might be able to negotiate better pricing for the chipset.
There’s also a chance that companies could turn to other chipsets if the price is too high. Samsung already equips some of its phones with the company’s own Exynos chipsets, often including Galaxy S flagships in some regions, and then there’s MediaTek, which we’d previously heard Samsung might turn to for chipsets.
Still, if this leak is correct then there’s a real chance that quite a lot of 2025’s best Android phones will be at least a little more expensive than current models.
Following on from the introduction of Carbon Footprint for Google Workspace at Google Cloud Next ‘23, admins can now use the service to monitor and analyse their electricity usage.
The announcement comes via a partnership with Electricity Maps, a company providing an API offering readouts of “real-time and predictive electricity signals” to customers.
Previously, only top-level admins could access carbon footprint information, but the ability to assign a custom role that grants such access is now baked into Workspace.
Carbon emissions on the rise
Google is right to note in the announcement that cloud computing - and AI - have drastically increased carbon emissions from the tech industry.
However, while the incorporation of the Electricity Maps API will help companies know what needs to be done to keep emissions down, the announcement fails to address the elephant in the room: as one of the largest companies in the world, and a top provider of cloud storage and AI tools, Google is probably a bigger contributor to climate change than any of its customers.
The announcement does mention Google’s intent to be “carbon-free” by 2030, both by cutting emissions and by moving to “nature-based” solutions, while sharing these with companies worldwide. That is honourable, and, provided Google actually achieves it, means that Carbon Footprint for Workspace isn’t just Google passing the buck to others.
Apple is known for the quality of its products, which is why it’s all the more surprising when the company occasionally releases a dud. But Apple’s FineWoven cases, introduced with the iPhone 15 range, are more than just a dud – they’ve been a disaster, and now it seems almost certain that they’re destined for the scrap heap.
That’s because a prominent leaker has claimed that Apple is set to replace its FineWoven cases with a new alternative. That’s according to Majin Bu on X (formerly Twitter), who shared two images of what they claimed will be the new cases and the complete range of colors they will apparently arrive in.
Many of the colors are the same as the existing FineWoven cases, including blue, green, black, and taupe. Apple’s mulberry color has apparently been replaced with a dark purple, and there’s supposedly a new gray version in the works too. Majin Bu didn’t explain what material the new cases would be made from, other than to say that it would be something new to replace FineWoven.
Majin Bu has a mixed track record when it comes to Apple leaks, correctly predicting some upcoming features while getting others wrong. Notably, the account seemed to retract the second image it posted when someone pointed out that it depicted existing leather cases, suggesting its due diligence was not at the level it should have been.
What’s the problem with FineWoven? (Image credit: Future)
It would be an understatement to say that Apple’s FineWoven cases have been met with disappointment. The cases are made from a micro-twill material and were meant to be an eco-friendlier alternative to Apple’s leather cases, yet they faced an almost immediate backlash.
Users complained that the cases scratched easily, and that these scratches could not be removed or masked. For others, the fabric material was a magnet for stains and blemishes, leading one reporter to joke that their FineWoven case was so dirty that it posed a “biomedical concern.”
It seems that this criticism did not escape Apple’s notice, and reports have suggested that the company has halted production of FineWoven cases and could announce that it's discontinuing them during its iPhone 16 event later today. That gives a boost to Majin Bu’s claim that something new is coming to replace FineWoven.
Ultimately, we’ll find out later today when Apple kicks off its iPhone 16 event and reveals a bunch of new phones, with fresh cases potentially in hot pursuit. We’ll be covering every new announcement as it happens, so make sure you check out our iPhone 16 launch live blog to find out the fate of FineWoven, and if you want to tune in, here's how to watch the iPhone 16 launch event live.
New research has claimed the rapid development of, and demand for, generative AI has accelerated the rate of greenhouse gas (GHG) emissions from data centers.
A report from Morgan Stanley suggests the data center industry is on track to emit 2.5 billion tons of CO2 by 2030, three times higher than was predicted before generative AI came into play.
The extra demand from GenAI will reportedly lead to a rise in emissions from 200 million tons this year to 600 million tons by 2030, thanks largely to the construction of more data centers to keep up with the demand for cloud services.
Net-Zero targets
Morgan Stanley's report outlined that 60% of those emissions will come from the operation of the data centers themselves, which require massive amounts of power to run. The remaining 40% is likely to come from the carbon emitted in manufacturing the construction materials and infrastructure for the centers.
With Google already reporting a 48% increase in emissions over the last five years, unsurprisingly, this brings net zero emissions targets into question. The projected emissions would be equivalent to around 40% of the entire annual emissions of the US - so carbon dioxide removal technologies are poised to play a key role in achieving environmental targets.
The difficulty in mitigating the environmental impact of data centers is that while water-cooling systems can reduce energy consumption, they take an enormous amount of water to run. With water becoming a more precious resource, those systems hamper tech giants’ green goals and place huge strains on areas with ‘high water scarcity’.
There’s uncertainty around the future of AI and its impact on the environment. Carbon removal and carbon capture, utilization, and sequestration (CCUS) technologies are not yet fully developed; Morgan Stanley suggests CCUS tech needs a $15 billion investment to bring it up to standard. The research also points to reforestation projects as a possible tool for hitting net-zero targets in the future.
Via The Register
Patent filings don't necessarily result in new-fangled tech, but they often give us a good indication of what engineers are thinking – or, in Ford’s case, smoking.
Recent filings registered with the United States Patent and Trademark Office (USPTO) have revealed that the company is experimenting with some pretty advanced holographic technology that, if it ever gets produced, will be able to project a plethora of realistic imagery inside and outside of the vehicle.
Although the patent filing goes into little detail about the actual use-cases for such technology, it does reveal that the general idea is to create a system and method for "projecting moveable and interactive holograms inside and outside of a vehicle".
Basic drawings that accompany the ideas show security guards roaming around a parked Ford, while another image shows a young boy pointing at what we understand to be virtual guard dogs.
The system uses integrated holographic camera modules (IHCMs) that can display pre-recorded 2D animations outside of the car "without any noticeable distortion".
A list of potential imagery includes: candles, flowers, lights, robots, equipment, animals, birds, cartoon characters, and creative or non-realistic content.
However, this dazzling display is not just limited to the exterior of the car, as the same (or similar) modules could also be used for the vehicle’s interior. Here, the inventors suggest drivers and passengers would be able to interact with 3D imagery and user interfaces.
The examples they give include a food menu, which could automatically be beamed to the holographic camera modules as the driver approaches a restaurant or drive-thru.
Similarly, the holographic tech could be used to beam a realistic image of a person in the passenger seat, which would perhaps act as a potential deterrent to opportunistic thieves or carjackers when the vehicle is parked for short periods of time.
Ford saves the wildest use-case until last, suggesting that the processor may form a hologram of a 'big polar bear' and that it may be projected as 'driving the vehicle'.
"Part of bear body may be inside the vehicle and a bear head may be outside the vehicle (e.g. protruding from a vehicle top portion)", the patent filing reads.
Wow, just wow.
This isn't the first time Ford has been spotted filing a patent for some slightly madcap technology. We recently reported that it was considering a system that uses Ford vehicles and their on-board camera and sensor technology to first detect a speeding motorist and then report them to the authorities.
The company also seems to have a bit of an obsession with holographic projectors, with a previous filing using a water misting system to project cinematic imagery onto said droplets for an impromptu drive-in experience, according to Ford Authority.
It's highly unlikely we'll see holographic dogs and security guards roaming around upcoming Ford models any time soon, but the interactive holographic system for a vehicle’s interior isn’t completely novel.
BMW debuted a similar system at CES in Las Vegas back in 2017, where a driver could interact with menu screens on a 3D holographic display that magically appeared in the car's centre console.
That tech never made it to production, but BMW continued with similar gesture-control technology, which can be found on a number of current BMW and Mini products and allows the driver to skip tracks and control the volume of the infotainment system by twirling a finger.
So perhaps there is a future for holographic projection after all – maybe just not involving guard dogs and polar bears.
Here's an unexpected bit of news: the AirPods Max 2, which pretty much everybody was certain weren't going to get an update at today's big Apple event (you can even follow the build-up, thanks to our 'It's Glowtime' Live Blog), apparently are going to get an update today. Don't expect something dramatically different, mind, but the new AirPods Max will reportedly come with improved ANC and Adaptive Audio as well as the now-obligatory USB-C.
That's according to Bloomberg's Mark Gurman. If you're thinking "hang on, isn't that the same Mark Gurman who said that there wouldn't be new AirPods Max at the Apple event?", you're right. Gurman said, both a few days ago and back in February, that AirPods Max 2 this year was "Not possible. There is no AirPods Max 2. It's the same as current but USB-C. That's the only change." And that's why technology journalism is so exciting.
Gurman's news was posted to X last night, where he said that the new Max headphones would launch alongside the "low-end" AirPods 4. So, come on, what can we expect?
Apple AirPods Max 2: what to expect
To be fair to Gurman, his Apple news is apparently based on what sources tell him – and if their information isn't accurate, isn't complete or is out of date, he can't exactly call up Tim Cook to get the inside track. But while the updates are supposedly more than just a USB-C port, we're still looking at a relatively minor upgrade. Personally, I'm hoping for the end of the horrible AirPods Max case, which I think is up there among the very worst things Apple has ever designed, but the changes are likely to be internal.
The AirPods Max remain a superb pair of headphones, but upgrades are definitely due. USB-C was always going to be in the second-generation AirPods Max, because EU regulators demand it. But it sounds like Apple has also taken the opportunity to upgrade the audio chip to deliver better ANC features. And that's welcome, because in the three years since the AirPods Max launched, both the technology and the competition have improved considerably.
The downsides of the AirPods Max are likely to remain, though: they'll still be heavy, they'll still be very expensive, and they're not going to be a great option for Android users. As we said in our most recent update to our original AirPods Max review, in 2024 "they're definitely still among the best wireless headphones for certain buyers. However, it partly depends on what price you can get them for – we wouldn't recommend buying at full price", given that "The Bose QuietComfort Ultra Headphones sound just as good and cost a lot less, and have far better noise cancellation (and are lighter)." The new Max are likely to be the best Apple headphones, but that doesn't necessarily mean they'll be the best value.