We've already seen dozens of new free-to-view channels added to Google TV during 2024, and one more update has been rolled out in time for the holidays – bringing the number of channels available to US viewers to more than 170.
This latest update was spotted by 9to5Google, and should be available now if you're using a television set or streaming device with the latest Google TV software on it. You'll find them under the Google TV Freeplay app.
The new channels are Best of Dr Phil, Xumo Free Holiday Movie Channel, Xumo Free Holiday Classics, Xumo Christian Christmas, Continuum, Z Nation, The Design Network, Filmrise: Classic TV, UFC, Unbeaten, Big 12 Studios, Waypoint TV, and PursuitUP.
There are also updates for Stingray Greatest Holiday Hits, Stingray Soul Storm Christmas, and Stingray Hot Country Christmas. These new channels follow on from Designated Survivor and Places & Spaces – The Great Christmas Light Fight added in November.
Keep them coming

That brings the total number of channels available in Google TV Freeplay to 171 – though as 9to5Google notes, some of them are likely to be available only over the holidays (as a few of those channel names suggest).
One channel has been removed at the same time though: it seems Motortrend Fast TV is no longer available. No doubt this chopping and changing of content is going to continue as we go through 2025 as well.
We've seen a steady rise in the number of free ad-supported television (FAST) channels available on streaming platforms in recent years: there are hundreds more available in apps such as Plex, Tubi, and PlutoTV.
You may remember Google TV adding extra channels in August and September of this year, as well as at other points during 2024. The software has also gained plenty of new features over the last 12 months.
Recent analysis of the security landscape of machine learning (ML) frameworks has revealed that ML software is subject to more security vulnerabilities than more mature categories such as DevOps tools or web servers.
The growing adoption of machine learning across industries highlights the critical need to secure ML systems, as vulnerabilities can lead to unauthorized access, data breaches, and compromised operations.
The report from JFrog claims ML projects such as MLflow have seen an increase in critical vulnerabilities. Over the last few months, JFrog has uncovered 22 vulnerabilities across 15 open source ML projects. Among these vulnerabilities, two categories stand out: threats targeting server-side components and risks of privilege escalation within ML frameworks.
Critical vulnerabilities in ML frameworks

The vulnerabilities identified by JFrog affect key components commonly used in ML workflows. Because ML practitioners often trust these tools for their flexibility, attackers could exploit them to gain unauthorized access to sensitive files or to elevate privileges within ML environments.
One of the highlighted vulnerabilities involves Weave, a popular toolkit from Weights & Biases (W&B), which aids in tracking and visualizing ML model metrics. The WANDB Weave Directory Traversal vulnerability (CVE-2024-7340) enables low-privileged users to access arbitrary files across the filesystem.
This flaw arises due to improper input validation when handling file paths, potentially allowing attackers to view sensitive files that could include admin API keys or other privileged information. Such a breach could lead to privilege escalation, giving attackers unauthorized access to resources and compromising the security of the entire ML pipeline.
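Directory traversal bugs of this general shape come from joining a user-supplied path onto a base directory without canonicalizing the result. As a generic illustration of the vulnerability class and its usual fix (this is not Weave's actual code; the base directory is hypothetical), a path can be resolved and then checked against the intended root:

```python
import os

def resolve_under(base: str, user_path: str) -> str:
    """Join user_path under base, rejecting any path that escapes it."""
    base_real = os.path.realpath(base)
    # realpath collapses "../" segments and symlinks before the check.
    full = os.path.realpath(os.path.join(base_real, user_path))
    # The shared ancestor of the two paths must be base itself.
    if os.path.commonpath([full, base_real]) != base_real:
        raise PermissionError(f"path escapes base directory: {user_path!r}")
    return full

# A request for "../../etc/passwd" resolves to /etc/passwd, fails the
# ancestor check, and is rejected instead of being opened.
```

The key point is that the check runs on the canonicalized path, not the raw input, so encoded or symlinked detours are caught as well.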
ZenML, an MLOps pipeline management tool, is also affected by a critical vulnerability that compromises its access control systems. This flaw allows attackers with minimal access privileges to elevate their permissions within ZenML Cloud, a managed deployment of ZenML, thereby accessing restricted information, including confidential secrets or model files.
The access control issue in ZenML exposes the system to significant risks, as escalated privileges could enable an attacker to manipulate ML pipelines, tamper with model data, or access sensitive operational data, potentially impacting production environments reliant on these pipelines.
Another serious vulnerability, known as the Deep Lake Command Injection (CVE-2024-6507), was found in the Deep Lake database, a data storage solution optimized for AI applications. This vulnerability permits attackers to execute arbitrary commands by exploiting how Deep Lake handles external dataset imports.
Due to improper command sanitization, an attacker could potentially achieve remote code execution, compromising the security of both the database and any connected applications.
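The underlying pattern, building a shell command string from untrusted input, can be sketched generically (this illustrates the vulnerability class, not Deep Lake's actual code; the wget-based fetch is a hypothetical example):

```python
import shlex
import subprocess

def build_import_command(dataset_url: str) -> list[str]:
    """Build an argv list for an external fetch tool (hypothetical)."""
    if dataset_url.startswith("-"):
        raise ValueError("refusing option-like URL")
    # List form keeps the URL a single argument; no shell ever parses it.
    return ["wget", "--", dataset_url]

def fetch_dataset(dataset_url: str) -> None:
    subprocess.run(build_import_command(dataset_url), check=True)

# Vulnerable pattern for contrast (do not use):
#   subprocess.run(f"wget {dataset_url}", shell=True)
# With dataset_url = "http://x; rm -rf ~", the shell runs both commands.

def quoted_for_shell(dataset_url: str) -> str:
    # If a shell string is truly unavoidable, quote every interpolated value.
    return f"wget -- {shlex.quote(dataset_url)}"
```

Passing arguments as a list rather than interpolating them into a shell string removes the injection surface entirely, which is why it is the generally recommended default.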
A notable vulnerability was also found in Vanna AI, a tool designed for natural language SQL query generation and visualization. The Vanna.AI Prompt Injection (CVE-2024-5565) allows attackers to inject malicious code into SQL prompts, which the tool subsequently processes. This vulnerability, which could lead to remote code execution, allows malicious actors to target Vanna AI’s SQL-to-graph visualization feature to manipulate visualizations, execute SQL injections, or exfiltrate data.
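The general defense against this class of issue is to treat model-generated SQL as untrusted input rather than executing it directly. As a hedged, illustrative sketch (not Vanna AI's actual mitigation, and deliberately coarse), a gate can refuse anything that is not a single SELECT statement:

```python
def is_single_select(sql: str) -> bool:
    """Coarse allowlist check for LLM-generated SQL: one SELECT, no stacking."""
    stripped = sql.strip().rstrip(";").strip()
    # Any remaining semicolon means stacked statements, e.g. "SELECT 1; DROP ..."
    if ";" in stripped:
        return False
    return stripped.upper().startswith("SELECT")
```

A check like this is only one layer: a production system would also want a real SQL parser, a read-only database role, and no exec() of any model-generated code.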
Mage.AI, an MLOps tool for managing data pipelines, has been found to have multiple vulnerabilities, including unauthorized shell access, arbitrary file leaks, and weak path traversal checks.
These issues allow attackers to gain control over data pipelines, expose sensitive configurations, or even execute malicious commands. The combination of these vulnerabilities presents a high risk of privilege escalation and data integrity breaches, compromising the security and stability of ML pipelines.
By gaining admin access to ML databases or registries, attackers can embed malicious code in models, leading to backdoors that activate upon model load. This can compromise downstream processes as the models are utilized by various teams and CI/CD pipelines. The attackers can also exfiltrate sensitive data or conduct model poisoning attacks to degrade model performance or manipulate outputs.
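Model backdoors of this kind often ride on Python's pickle format, which can execute arbitrary callables at load time. One illustrative defensive check (an assumption on our part, not a technique from the report) is to scan an untrusted pickle stream for execution-capable opcodes before ever loading it; safer still is to avoid pickle entirely in favor of formats that store only tensor data:

```python
import pickle
import pickletools

# Opcodes that can trigger code execution when a pickle is loaded.
SUSPECT_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle(data: bytes) -> set[str]:
    """Return any execution-capable opcodes found in a pickle stream."""
    return {op.name for op, _, _ in pickletools.genops(data) if op.name in SUSPECT_OPS}

# A plain weights-style payload contains no such opcodes...
benign = pickle.dumps({"weights": [0.1, 0.2, 0.3]})

# ...while an object that smuggles a callable via __reduce__ does.
class Backdoored:
    def __reduce__(self):
        return (print, ("model loaded",))

malicious = pickle.dumps(Backdoored())
```

The scan inspects the byte stream without deserializing it, so the smuggled callable is never invoked during the check.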
JFrog’s findings highlight an operational gap in MLOps security. Many organizations lack robust integration of AI/ML security practices with broader cybersecurity strategies, leaving potential blind spots. As ML and AI continue to drive significant industry advancements, safeguarding the frameworks, datasets, and models that fuel these innovations becomes paramount.