
IoT’s botnet problem is up 500% – three things admins must do now

TechRadar News - Thu, 02/06/2025 - 03:49

Botnet activity on connected devices is up 500% thanks to default passwords, outdated software, and inadequate security protections creating backdoors into enterprise networks. Now, even entry-level hackers with off-the-shelf tools are getting in on the act.

In November, researchers discovered a new and dangerous botnet, Matrix, built from open source and readily available tools rather than custom code. While not highly sophisticated, Matrix shows how bad actors with basic technical knowledge can build and sell botnets capable of widescale damage.

This is an escalating issue and something’s got to give. Stricter device regulations are on the way in 2025 but, until they’re enforced, it’s up to admins to step up. This demands immediate action on software patching, strong authentication, and unified device management.

Growing devices, growing botnets

It’s no coincidence that connected devices and botnets are growing at similar rates. In the past five years, consumers and enterprises have embraced devices in the smart home and office, resulting in a doubling of devices in the Internet of Things (IoT). This number is expected to double again in the next decade to more than 40 billion worldwide.

This is a problem since not all devices are created equal. By scanning the internet for known software flaws or easy-to-break passwords – two common vulnerabilities in cheaper products – hackers can bend these machines to their will. With more devices, there are more botnet targets.

Once compromised, devices become unwitting recruits in massive botnet armies, allowing attackers to spread malware, launch devastating DDoS attacks, and infiltrate critical enterprise systems. Nokia recently reported that IoT devices engaged in botnet-driven DDoS attacks are up 500% over the past 18 months and account for 40% of all DDoS traffic.

Matrix only raises the stakes. This latest arrival demonstrates that building a botnet isn’t as hard as one might think, opening new avenues for individuals to execute broad, multi-faceted attacks on numerous endpoint vulnerabilities and misconfigurations. Even more concerning? The solution is for sale as a commercial botnet-as-a-service, turning basic tech know-how into automated hacking weaponry. And with enterprise ecosystems now counting more endpoints than ever before, it’s clear that admins must redouble their cybersecurity efforts in the face of this escalating threat.

Three ways admins can fight back against botnets

First, and it should go without saying, change any default passwords. Generic credentials are often shared across entire fleets of the same device – meaning hackers might already have your login if it’s left unchanged. Whether you’re securing a camera, sensor, or industrial controller, don’t do default. Strong, randomized passwords are non-negotiable; go a step further with two-factor authentication for added protection.
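As a rough illustration, a fleet-wide audit for factory defaults can be a short script. This is a hypothetical sketch: the device records and the default-credential list are illustrative, not drawn from any particular vendor or inventory system.

```python
# Hypothetical sketch: audit a device inventory for factory-default
# credentials. In practice the inventory and the default-credential
# list would come from your asset-management system.
DEFAULT_CREDS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "12345"),
}

def flag_default_credentials(devices):
    """Return IDs of devices still using a known default username/password pair."""
    return [
        d["id"]
        for d in devices
        if (d.get("username"), d.get("password")) in DEFAULT_CREDS
    ]

inventory = [
    {"id": "cam-01", "username": "admin", "password": "admin"},
    {"id": "sensor-02", "username": "ops", "password": "Xk9!vR2#pLq8"},
    {"id": "plc-03", "username": "root", "password": "root"},
]

print(flag_default_credentials(inventory))  # ['cam-01', 'plc-03']
```

Running a check like this on every onboarding, not just once, catches devices that get factory-reset back to defaults.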

Next, strengthen your software. Half of last year’s enterprise vulnerabilities remain unpatched, leaving outdated systems as perfect botnet targets. Automated patch management isn’t optional – it’s integral to security survival.

Finally, be proactive. Hackers are counting on admin complacency and weak backend safeguards. Prove them wrong. Contain breaches by segmenting networks, consolidate endpoint management with a unified console, and deploy AI monitoring to catch suspicious behavior.
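The monitoring idea above can be reduced to a simple baseline comparison. A minimal sketch, assuming per-device traffic history is available; the device names, traffic figures, and the 5x threshold are all illustrative:

```python
# Hypothetical sketch: flag endpoints whose outbound traffic spikes far
# above their own historical baseline -- a common botnet tell
# (e.g. a device conscripted into a DDoS flood).
from statistics import mean

def flag_traffic_anomalies(history, current, factor=5.0):
    """Flag devices whose current outbound MB/h exceeds
    `factor` times their historical average."""
    suspicious = []
    for device, samples in history.items():
        baseline = mean(samples)
        if current.get(device, 0) > factor * baseline:
            suspicious.append(device)
    return suspicious

history = {
    "thermostat-7": [2, 3, 2, 4],    # ~2.75 MB/h average
    "camera-12": [40, 38, 45, 42],   # ~41 MB/h average
}
current = {"thermostat-7": 180, "camera-12": 50}

print(flag_traffic_anomalies(history, current))  # ['thermostat-7']
```

Real deployments use far richer signals (destinations, ports, time of day), but per-device baselining is the core idea behind most behavioral monitoring.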

A critical step here is developing an incident response plan. Many organizations discover botnet infections too late because they lack clear protocols for detection and response. Regular tabletop exercises and automated network monitoring (more on that below) can help teams identify weak points and practice responding to potential breaches before they occur. These basics separate minor hiccups from major incidents.

Be smart and proactive

It’s worth mentioning that various regulations are coming online to help stop botnets this year.

Europe, the United States, and the United Kingdom are taking aim at basic vulnerabilities in different ways. Europe’s Cyber Resilience Act, for example, will go a long way to closing device backdoors by banning default passwords and obligating manufacturers to provide software support throughout a product’s lifespan.

Across the Atlantic, expect to see a consumer tick of approval on connected devices that meet cybersecurity minimums. Let’s hope these concerted efforts across major markets will hit botnets where it hurts – easy-to-exploit vulnerabilities – and make us all a little safer.

In the meantime, the buck stops with admins, and it’s not easy in a landscape of growing devices, experimental hackers, and stretched IT teams. To close the gap, look for extra and smarter ways to oversee your ecosystem. Make your life easier with automation, maintain a real-time ecosystem inventory, and establish clear security baselines for new endpoints. You’ll find that relatively small changes to how you manage, authenticate, and protect devices can make a big difference to your overall security posture.
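A security baseline for new endpoints can be as simple as a checklist enforced in code before a device is admitted to the network. A hypothetical sketch; the baseline fields and device record are illustrative:

```python
# Hypothetical sketch: check a newly onboarded endpoint against a
# minimal security baseline. Fields are illustrative examples of the
# kind of checks an onboarding pipeline might enforce.
BASELINE = {
    "default_password_changed": True,
    "firmware_current": True,
    "two_factor_enabled": True,
}

def baseline_violations(device):
    """Return the list of baseline checks the device fails."""
    return [key for key, required in BASELINE.items()
            if device.get(key) != required]

new_device = {
    "id": "sensor-9",
    "default_password_changed": True,
    "firmware_current": False,
    "two_factor_enabled": False,
}

print(baseline_violations(new_device))
# ['firmware_current', 'two_factor_enabled']
```

Gating onboarding on an empty violations list keeps the baseline from drifting as the fleet grows.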

This isn’t to say to do away with endpoints – far from it. Connected devices are popular in enterprises big and small for a reason. They unlock operational data, deliver business insights, and achieve newfound efficiencies. The key is to onboard them consciously and carefully, slamming shut every potential backdoor while unleashing the full promise of tomorrow’s smart office.

We've compiled a list of the best endpoint protection software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Multimodal AI, the next evolution in customer experience

TechRadar News - Thu, 02/06/2025 - 02:43

As artificial intelligence continues to reshape industries, leaders around the world are navigating the challenge of how to create clear and consistent regulations that balance innovation with safety. In September, representatives from technology companies, institutions, and researchers issued an open letter to European policymakers, warning that fragmented and inconsistent rules risk depriving the EU of two cornerstones of AI innovation: “open” and “multimodal” models. Open models are free and available to everyone to use, modify, and build on, which spreads social and economic opportunity. The latest multimodal models operate fluidly across text, images, and speech and will enable the next wave of breakthroughs in AI.

Multimodal AI represents a significant leap forward from traditional AI systems. Conventional AI typically focuses on one modality at a time, for example, a text-based chatbot processes only text, and a voice assistant like Siri primarily processes voice inputs. Multimodal AI systems process and respond across multiple formats simultaneously — integrating text, voice, images, and gestures to deliver more intuitive user experiences that feel more natural and human.

Transforming customer experience through multiple touchpoints

Multimodal AI is revolutionizing customer experience, offering transformative possibilities for how brands and customers interact. At their core, these systems have evolved how customers can engage with brands by offering unmatched flexibility in communication methods. They also boost efficiency by leveraging how humans naturally process information, letting users input data in whatever way is fastest for them – often speech – and delivering responses in formats that best suit their preferences or needs.

A customer may, for example, begin their interaction through voice commands while driving, seamlessly switch to text upon entering a quiet environment and receive visual confirmations throughout their journey. This adaptability creates a more natural and comfortable experience while maintaining conversational context across different modes of interaction.
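The context-carrying behavior described above can be sketched in a few lines. This is an illustrative toy, not any vendor’s API: the point is that conversation history is shared regardless of which modality a turn arrives through.

```python
# Hypothetical sketch: one conversation context shared across input
# modalities (voice -> text -> visual). Class and modality names are
# illustrative only.
class MultimodalSession:
    def __init__(self):
        self.history = []  # shared context across all modalities

    def handle(self, modality, content):
        self.history.append((modality, content))
        # Context carries over no matter how the turn arrived.
        return f"[{modality}] acknowledged; {len(self.history)} turns of context"

session = MultimodalSession()
session.handle("voice", "Find flights to Lisbon")
session.handle("text", "only direct ones")
print(session.handle("visual", "thumbs-up gesture"))
# [visual] acknowledged; 3 turns of context
```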

With voice interfaces providing much-needed alternatives for individuals with visual impairments and text and visual outputs serving those with hearing difficulties, multimodal systems are helping to remove barriers and promote inclusivity, broadening access to everyday tasks and interactions with brands.

By synthesizing various forms of input, multimodal AI systems are building a more comprehensive understanding of user intent and context, resulting in more accurate and relevant responses. This deeper level of understanding significantly reduces friction in customer interactions and leads to improved overall satisfaction. Notably, multimodal AI’s ability to process multiple types of input simultaneously also leads to enhanced contextual intelligence.

In the retail sector, for instance, multimodal AI is revolutionizing online and in-store consumer experiences. Leading retailers are using the technology to help customers search for products more easily using a combination of voice queries and images. For example, shoppers can use smartphones to photograph a piece of furniture and then verbally specify modifications such as, “show me this in blue” or “find similar items at a lower price point.”

Smart mirrors with multimodal AI are another innovative retail application. They respond to voice commands and gestures, enabling customers to “try on” clothes virtually in their reflections, request different sizes or colors, and receive product recommendations. These use cases demonstrate how powerful multimodal AI can be in blending the best of digital and physical retail applications.

Best practices for implementing multimodal AI

For organizations looking to implement multimodal AI solutions, several best practices should be considered:

Seamless Integration: The key to successful multimodal implementation lies in creating smooth transitions between different modes of interaction. Users should be able to switch between voice, text, and visual interfaces without disrupting their experience or losing context.

User-Centric Design: Organizations need to understand the preferences of their specific user base to deliver the best experience. This insight should guide the choice of modalities, ensuring the technology serves real user needs rather than being implemented for its own sake.

Contextual Data Utilization: Effective multimodal systems should leverage available contextual data, including location information, interaction history, and user preferences, to deliver more personalized experiences. However, this must be balanced with strong privacy protections, informed user consent, and transparent data collection and usage policies.

Accessibility First: Rather than treating accessibility as an afterthought, organizations should place it at the core of their multimodal AI strategy. This approach not only serves users with different abilities but often leads to better solutions for all users.

Continuous Improvement: The field of multimodal AI is rapidly evolving, making it essential for organizations to update and refine their systems regularly. This includes incorporating customer feedback, adapting to new technological capabilities, and maintaining robust security measures.

Leverage Third-Party Expertise: Partnering with an expert provider can help organizations navigate the complexities of multimodal AI implementation. These providers bring specialized expertise, ensuring seamless integration, responsible innovation, and adherence to regulatory standards. These collaborations can accelerate deployment while maximizing the technology’s impact on customer experiences.

Looking ahead: the future of CX

As generative AI (GenAI) continues to evolve, multimodal AI is unlocking new opportunities for brands to win customers, build loyalty, and drive higher engagement. Offering seamless and personalized experiences enables brands to attract new customers while strengthening relationships with existing ones, encouraging repeat business and increased spending. This technology enables brands to create more meaningful and impactful customer interactions across the entire customer journey.

For multimodal AI to thrive, technology leaders need to have confidence in consistent rules that balance safety with innovation. Europe has the opportunity to create a regulatory framework that addresses potential risks while unlocking the full potential of this transformative technology.

We've compiled a list of the best customer database software.


Categories: Technology

DOGE Teen Owns ‘Tesla.Sexy LLC’ and Worked at Startup That Has Hired Convicted Hackers

WIRED Top Stories - Thu, 02/06/2025 - 01:30
Experts question whether Edward Coristine, a DOGE staffer who has gone by “Big Balls” online, would pass the background check typically required for access to sensitive US government systems.
Categories: Technology

Asus Zenfone 12 Ultra Is the Rare High-End Phone That Still Has a Headphone Jack

CNET News - Thu, 02/06/2025 - 00:30
The newest Zenfone borrows cool features from the Asus ROG gaming phone line like fast charging and wired headphone support -- but it's not coming to the US.
Categories: Technology

OM System’s new OM-3 is the stunning retro Pen-F reboot and Fujifilm rival we needed

TechRadar News - Thu, 02/06/2025 - 00:00
  • Retro design inspired by the original Olympus OM-1 film SLR from 1973
  • New line for OM System, features a creative color mode dial like in the Pen-F
  • Sits between the OM System OM-1 II and OM-5, priced £1,699 body only (US / AU pricing to follow)
  • Three lenses get the mark II treatment; M.Zuiko Digital 17mm F1.8 II, 25mm F1.8 II and ED 100-400mm F5.0-6.3 IS II

OM System just doubled down on its retro appeal with a new OM-3 mirrorless camera. It's a stunning take on the original Olympus OM-1 film SLR from the 1970s, and the closest fans are getting to a reboot of 2016's popular digital Olympus Pen-F.

Costing £1,699 body-only, or £1,999 with the 12-45mm F4 Pro lens (US and Australia pricing to follow), the OM-3 sits between the flagship OM-1 II and the enthusiast-level OM-5. It delivers OM System features we already know: the same stacked 20MP micro four thirds sensor and TruePic X processor as the OM-1 II, together with class-leading in-body image stabilization rated up to 7.5EV and quad-pixel autofocus with AI subject detection, wrapped in a robust IP53-rated weatherproof body.

The OM-3 also features a creative dial à la Pen-F. Olympus was ahead of the curve with the Pen-F, since we now see the likes of the Fujifilm X-T50 and Panasonic Lumix S9 capitalizing on the interest in custom color profiles with dedicated controls providing quick access to a catalog of custom looks.

In the OM-3, we get a creative dial with four modes: color profile, monochrome profile, Art Filters, plus Color Creator. Color profile has four customizable settings that emulate film looks, as does the Monochrome profile with its four black-and-white looks.

There's much to like about the OM-3. It packs the same power as the pricier OM-1 II into a more affordable, retro body with direct access to key features. This is a camera that OM System fans in particular have been waiting for, and a compelling Fujifilm alternative.

(Image credit: OM System)

More than a pretty face

The OM-3 is a stunner that weighs under 500g and is smaller than the OM-1 II, but its appeal isn't just surface-deep. It's a highly robust metal camera that's dustproof, splashproof and freezeproof, plus it has a well thought-out design with direct access to many of its powerful, modern features.

There's a new dial for photo, video, or slow and quick motion options, with slow motion recording up to 60fps in 4K and up to 120fps in Full HD. OM System has introduced new Cinema modes with Log color profiles for video that maximize the dynamic range of the camera for grading clips later.

There's also a new button to access computational photography features directly, with OM System's full suite of modes to hand: Live ND filters with six levels of strength from ND2 to ND64, the graduated ND filter effect introduced in the OM-1 II, live composite for light trails, together with a High Res Shot mode to increase resolution up to 80MP, plus a focus stacking mode. These are clever features that minimize the accessories you need and the time spent editing at a computer.

Price-wise, the OM-3 is pitted against the likes of the Fujifilm X-T5, despite its lower pixel count and smaller sensor size, which will always be a strike for some. However, I can't think of a faster camera for the money – this 120fps-shooting stunner could be the ultimate travel and outdoors camera, especially for those with an eye for design.

OM System also launched mark II versions of three of its lenses alongside the camera, all of which add weather-sealing to the original versions: the M.Zuiko Digital 17mm F1.8 II, the 25mm F1.8 II and the ED 100-400mm F5.0-6.3 IS II. I can see the 17mm lens being a lovely pairing with the OM-3 in particular for street photography. We're currently conducting a full OM-3 review, coming soon.

(Image credit: OM System)
Categories: Technology

Today's NYT Mini Crossword Answers for Thursday, Feb. 6

CNET News - Wed, 02/05/2025 - 22:15
Here are the answers for The New York Times Mini Crossword for Feb. 6.
Categories: Technology

TikTok owner ByteDance has a new AI video creator you have to see to believe

TechRadar News - Wed, 02/05/2025 - 19:30
  • ByteDance is showing off the new OmniHuman AI video model.
  • OmniHuman transforms a single photo into a lifelike, full-body video.
  • The videos show realistic singing, speaking, and movement.

TikTok parent company, ByteDance, is showing off a new AI video creator that can produce vivid videos of people talking, singing, and moving around from a single photograph. The new OmniHuman model can bring an image to life with eerily accurate body movements, facial expressions, and gestures.

OmniHuman’s breakthrough involved training on more than 18,700 hours of video. The AI can now mimic how humans move, speak, and interact in videos. Notably, this AI can create fully moving characters rather than just animating a face or upper body. That means a single picture can be turned into a video of someone giving a speech, dancing, or even playing an instrument.

The result is a very realistic video, whether the character is a human from a photograph or one from a more stylized painting. You can see examples below.

OmniHuman everywhere

If and when ByteDance does make OmniHuman available, it's easy to imagine it blowing up on TikTok. The company already offers an AI video-maker named Jimeng on the platform, and something like OmniHuman could entice many more people to play with TikTok and its other features.

Of course, ByteDance won't enter the space without competition. OpenAI's Sora has drawn accolades and is a big name in the AI video space, but there are plenty of others, such as Pika, Runway, Pollo, and Luma Labs' Dream Machine.

There's a lot of potential use for ByteDance's model, whether recreating actors of the past for more movies or teaching students history from the simulated mouths of historical figures. Even digital avatars for social media and gaming could become more lifelike, adapting in real-time based on user input.

OmniHuman is still a research project for now, but the fact that ByteDance is already showcasing its capabilities suggests that practical applications aren’t far behind. The AI character below could be the next face of a video trend on TikTok.

Categories: Technology

Google Has Officially Launched Gemini 2.0 for Everyone

CNET News - Wed, 02/05/2025 - 18:34
The flood of AI news from Google comes in the wake of the launch of DeepSeek, the breakthrough Chinese AI tool that's been making headlines.
Categories: Technology

NOAA Employees Told to Pause Work With ‘Foreign Nationals’

WIRED Top Stories - Wed, 02/05/2025 - 18:29
An internal email obtained by WIRED shows that NOAA workers received orders to pause “ALL INTERNATIONAL ENGAGEMENTS.”
Categories: Technology

The Best Air Fryer and More for Your Super Bowl Watch Party

CNET News - Wed, 02/05/2025 - 18:16
Keep your beer cold, your dips hot and your wings crispy with these handy game-day gadgets.
Categories: Technology

Comcast's Super Bowl broadcast won't just look great in Dolby Vision, but it will have Dolby Atmos sound for the first time

TechRadar News - Wed, 02/05/2025 - 17:30
  • Comcast will offer the Super Bowl with a Dolby Atmos mix
  • It's part of Comcast's 'Enhanced 4K' product, which pairs this sound mix with Dolby Vision and 4K video.
  • You'll need to have the right equipment to take full advantage.

If you’re looking for the most immersive picture and sound while watching Super Bowl LIX this Sunday, February 9, 2025, you’ll be thankful you subscribe to Comcast – or wish you did. Comcast is scoring a touchdown as, for the first time, the provider will broadcast Super Bowl LIX with Dolby Atmos sound.

Best of all, it’s alongside Dolby Vision, which isn’t a first, but it’s kind of the perfect pairing and made possible by a partnership between Comcast and Dolby. You’ll need to be subscribed to Xfinity X1 to enjoy this ‘Enhanced 4K’ presentation, which Comcast describes as “an unmatched viewing experience with the best picture and audio quality delivered to the home in the fastest way possible so the action customers see in their living room is only seconds behind the game unfolding in New Orleans.”

That feed will end up on the big screen through a Comcast X1 equipment box or the Xfinity Stream app running on one of the best streaming boxes or sticks. The app is also compatible with iOS and Android, so you can watch it on your phone or tablet.

A look at the Comcast Xfinity X1 interface on a TV from one of the X1 cable boxes. (Image credit: Comcast)

Even so, the best way to experience the Super Bowl in Dolby Vision and Dolby Atmos would be on a big screen – like one of TechRadar’s picks for the best TV – and with one of the best soundbars or home theater setups. This way, you can feel the immersion as the audio from the game gets presented in a full Dolby Atmos mix.

Paired with a big, crisp 4K TV, you’ll likely feel as if you’re inside the Superdome in New Orleans. Of course, Fox Sports will still present the game in 4K through Comcast and other providers, including for free on Tubi. You’ll also want your TV – and streaming box – to support the Dolby Vision visual format and the Dolby Atmos audio format. So yes, some specific hardware beyond an X1 subscription is required: you'll need either the Xi6, XiOne, or XG1v4 Comcast box, though the XG1v4 doesn't yet support Dolby Vision.

The whole idea with Comcast’s Enhanced 4K product, though, is the best possible resolution – aka 4K – delivered with super low latency in both Dolby Vision and Dolby Atmos. Previous Super Bowls streamed on X1 were broadcast with Dolby Vision, but Dolby Atmos at the 2024 Paris Olympics checked off all the Dolby boxes. This won’t be the first sport presented in Dolby Atmos either; Apple also regularly streams all MLS Season Pass matches on Apple TV Plus with a Dolby Atmos mix.

Still, this is an exciting test for Comcast’s Enhanced 4K product, as the Super Bowl will be one of the most-watched events of the year. And if you have the proper setup at home, you’ll be in for a treat as the Kansas City Chiefs face the Philadelphia Eagles in Super Bowl LIX.

And if you're looking to upgrade your home entertainment setup before the big game, check out the best Super Bowl TV deals here.

Categories: Technology

US Shoppers Face Fees of Up to $50 or More to Get Packages From China

WIRED Top Stories - Wed, 02/05/2025 - 17:29
Consumers and small businesses are already feeling the impact of President Donald Trump’s new tariffs, which eliminated a key trade exemption for parcels worth less than $800.
Categories: Technology

Want to rent a $300,000 AMD MI300A supercomputer for free for seven days? Gigabyte wants to hear from you ASAP

TechRadar News - Wed, 02/05/2025 - 17:20
  • Gigabyte G383 R80 supercomputer costs $304,207
  • Distributors and users without a clear project are not eligible
  • Trial extensions of up to two weeks can be requested via email

Gigabyte, through its subsidiary Giga Computing, is offering qualified individuals and organizations the opportunity to test one of the world’s most advanced supercomputers, the Gigabyte G383-R80 server powered by AMD MI300A APUs for free.

There's a catch: after the seven-day trial period, the configured price of this high-performance system is $304,207.

The price isn't the only thing that might put you off; in addition to the strict, time-bound trial period, distributors aren't eligible to apply, and users must have a clear project in mind to qualify.

Claiming Gigabyte's G383-R80 offer

To participate, applicants must fill out a form on the Gigabyte Launchpad website. Giga Computing will then review the application based on the commercial value or innovative potential of the proposed project.

If approved, the company will contact the applicant within three business days to confirm details. The trial period lasts for seven days, though extensions of up to two weeks can be requested through a sales representative or via email. Access to the server will be granted within two weeks, and users must initiate their project within three days of receiving the access link.

The Gigabyte G383-R80 server is designed for demanding workloads such as AI training, AI inference, and high-performance computing (HPC). It features a 3U rack-mount chassis and supports up to four AMD Instinct MI300A APUs, which combine CPU and GPU cores for accelerated computing.

For storage and expansion, it has eight 2.5-inch NVMe Gen5/SATA/SAS hot-swap drive bays and 12 PCIe Gen5 x16 slots, with storage options including M.2 NVMe SSDs and U.2/U.3 NVMe SSDs in capacities ranging from 400GB to 61.44TB.

Networking capabilities include onboard 10Gb/s Ethernet ports and support for PCIe expansion cards like RJ45, SFP+, QSFP28, and QSFP56.

Categories: Technology

SUPERb: Chinese researchers just designed and built a flying robot that looks like a precursor to Matrix's laser-focused Sentinels

TechRadar News - Wed, 02/05/2025 - 16:49
  • LiDAR helps SUPER to detect and avoid even the thinnest obstacles
  • The drone can track moving targets in dense forests
  • SUPER's real-time spatial mapping allows it to operate effectively even in low-light conditions

A team of researchers at the University of Hong Kong has designed and tested an advanced aerial robot capable of navigating complex environments at high speeds of up to 20 meters per second while maintaining precise control.

Named SUPER, the quadcopter drone uses cutting-edge LiDAR technology to detect and avoid obstacles, even thin wires that have posed challenges for traditional drones.

In research published in Science Robotics (via Techxplore), the team noted that while SUPER has potential applications in search and rescue, its ability to operate autonomously in unknown environments suggests it could also be used for law enforcement and military reconnaissance.

The power of LiDAR for precision flight

Unlike conventional aerial robots that rely on cameras and sensors, SUPER uses 3D light detection and ranging (LiDAR) to map its surroundings and process spatial data in real time, allowing it to function in low-light conditions.

With a detection range of up to 70 meters, the LiDAR system feeds data to an onboard computer that continuously analyzes the environment.

This system enables SUPER to chart two distinct flight paths, one prioritizing safety and another allowing for exploratory movement.

To demonstrate its capabilities, the research team tested SUPER against a commercial drone, the DJI Mavic 3.

While the DJI model avoided larger obstacles, it struggled to detect thin wires of smaller diameters. In contrast, SUPER successfully avoided all obstacles, including 2.5-mm thin wires, thanks to its high-resolution point cloud processing.

The test results also show the drone gliding through dense forests, tracking moving targets without colliding with trees or branches.

Categories: Technology

Google Gemini's new model is the brainstorming AI partner you've been looking for

TechRadar News - Wed, 02/05/2025 - 16:30
  • Google has added the Gemini 2.0 Flash Thinking Experimental to the Gemini app.
  • The model combines speed with advanced reasoning for smarter AI interactions.
  • The update also brings the Gemini 2.0 Pro Experimental and 2.0 Flash-Lite models to the app.

Google has dropped a major upgrade to the Gemini app with the release of the Gemini 2.0 Flash Thinking Experimental model, among others. This combines the speed of the original 2.0 model with improved reasoning abilities. So, it can think fast but will think things through before it speaks. For anyone who has ever wished their AI assistant could process more complex ideas without slowing its response time, this update is a promising step forward.

Gemini 2.0 Flash was originally designed as a high-efficiency workhorse for those who wanted rapid AI responses without sacrificing too much in terms of accuracy. Earlier this year, Google updated it in AI Studio to enhance its ability to reason through tougher problems, calling it the Thinking Experimental. Now, it’s being made widely available in the Gemini app for everyday users. Whether you’re brainstorming a project, tackling a math problem, or just trying to figure out what to cook with the three random ingredients left in your fridge, Flash Thinking Experimental is ready to help.

Beyond the Thinking Experimental, the Gemini app is getting additional models. The Gemini 2.0 Pro Experimental is an even more powerful, albeit somewhat more cumbersome, version of Gemini, aimed at coding and handling complex prompts. It’s already been available in Google AI Studio and Vertex AI.

Now, you can get it in the Gemini app, too, but only if you subscribe to Gemini Advanced. With a context window of two million tokens, this model can simultaneously digest and process massive amounts of information, making it ideal for research, programming, or rather ridiculously complicated questions. The model can also utilize other Google tools like Search if necessary.

Lite speed

Google is also augmenting the Gemini app with a slimmer model called Gemini 2.0 Flash-Lite. This model is built to improve on its predecessor, 1.5 Flash. It retains the speed that made the original Flash models popular while performing better on quality benchmarks. In a real-world example, Google says it can generate relevant captions for around 40,000 unique photos for less than a dollar, making it a potentially fantastic resource for content creators on a budget.

Beyond just making AI faster or more affordable, Google is pushing for broader accessibility by ensuring all these models support multimodal input. Currently, the AI only produces text-based output, but additional capabilities are expected in the coming months. That means users will eventually be able to interact with Gemini in more ways, whether through voice, images, or other formats.

What makes all of this particularly significant is how AI models like Gemini 2.0 are shaping the way people interact with technology. AI is no longer just a tool that spits out basic answers; it’s evolving into something that can reason, assist in creative processes, and handle deeply complex requests.

How people use the Gemini 2.0 Flash Thinking Experimental model and other updates could show a glimpse into the future of AI-assisted thinking. It continues Google's dream of incorporating Gemini into every aspect of your life by offering streamlined access to a relatively powerful yet lightweight AI model.

Whether that means solving complex problems, generating code, or just having an AI that doesn’t freeze up when asked something a little tricky, it’s a step toward AI that feels less like a gimmick and more like a true assistant. With additional models catering to both high-performance and cost-conscious users, Google is likely hoping to have an answer for anyone's AI requests.

Categories: Technology

Best 55-Inch TVs for Super Bowl LIX

CNET News - Wed, 02/05/2025 - 16:20
If you're looking for a TV to watch the 2025 Super Bowl, a 55-inch model is perfect for a smaller room.
Categories: Technology

Puppy Bowl 2025: Start Time, How to Stream the Cute Canine Competition

CNET News - Wed, 02/05/2025 - 16:01
Pooches will clash during the annual adoption awareness match.
Categories: Technology

Best TV for Super Bowl LIX, Tested by CNET Experts

CNET News - Wed, 02/05/2025 - 15:45
Make the most of your Super Bowl watch party with our top picks for the best TVs for 2025, based on side-by-side testing in CNET’s lab.
Categories: Technology

Today's NYT Connections Hints, Answers and Help for Feb. 6, #606

CNET News - Wed, 02/05/2025 - 15:00
Here are some hints -- and the answers -- for Connections No. 606 for Feb. 6.
Categories: Technology

Today's Wordle Hints, Answer and Help for Feb. 6, #1328

CNET News - Wed, 02/05/2025 - 15:00
Here are some hints and the answer for Wordle No. 1,328 for Feb. 6.
Categories: Technology


Subscribe to The Vortex aggregator - Technology