I’ve watched enough robot videos online to know that slapping an AI model into a hunk of hardware doesn’t automatically make it useful. I remember one clip in particular where someone had wired ChatGPT into a robot dog, and the results were mixed.
Sure, the robot could suddenly answer your questions with confidence. But asking it to open a door or pick something up was still the same clumsy creature as before. The intelligence was in the voice, not the motion.
So, when Samsung announced that its long-awaited Ballie robot would come preloaded with Google’s Gemini AI, my first thought was to wonder what exactly Gemini brings to the table that Ballie hadn’t already promised.
Gemini is capable of understanding language and images and performing complex reasoning. Ballie is supposed to be a rolling companion packed with cameras, projectors, sensors, and the ability to navigate your home. I can see why some might be eager to see what a combination of the two could do.
Samsung and Google have suggested that Ballie plus Gemini would enable the robot to suggest activities to raise your energy or give you outfit tips using its onboard camera and a Gemini-powered sense of style. But the more I look at it, the less sure I am that anything the companies describe isn't either something Ballie could already do on its own or something Gemini can already do without being connected to Ballie.
Ballie can adjust smart home devices and project videos or ambient lighting onto your walls, but does it need Gemini to do so? Gemini can answer questions, analyze photos and fashion, and organize your whole day, but does being embedded in a ball-shaped robot enhance those features much?
You may as well open Gemini on a tablet and tape it to a skateboard. Gemini gives Ballie better language understanding and smarter suggestions, but those don’t require a robot body that can roll across your living room.
Robot AI necessity

(Image credit: Samsung)

Ballie is an impressive piece of hardware. It has dual projectors, depth sensors, LiDAR, multiple microphones, and a high-end camera setup. It can follow you, return to its charger, and even record or stream footage. But all of those capabilities are innate to the hardware.
Gemini doesn’t make Ballie project in higher resolution or roll faster. It doesn’t give Ballie arms or the ability to interact physically with objects. It just gives it better words – and while words matter, they don’t necessarily translate into a robot that’s more useful.
AI can make a chatbot smarter and mimic your tone or finish your sentences in an email. But when it comes to hardware, intelligence without capability hits a wall. If Ballie can’t do more things because it has Gemini, then it's just a more articulate version of itself. If Gemini brings a lot of useful features to Ballie that it wouldn't have had before but that Gemini couldn't do on its own, then that's a different question.
It's not that Gemini brings nothing to the table. Talking to Ballie about your schedule and getting a visual summary projected on the wall is a pleasing idea. And Gemini’s ability to integrate multi-modal inputs could make those interactions smarter. But again, the robot’s physical role in that process is still fuzzy.
There’s a reason people still get excited by videos of robots folding laundry or climbing stairs. It’s because movement and manipulating the environment are still rare and impressive feats. No matter how smart the AI becomes, if the hardware can’t meet it halfway, the result will always feel like a demo missing its payoff.
Making Ballie more conversational, more responsive, and more human in its communication is a leap for Samsung. But that leap is only meaningful if the robot itself can deliver something you couldn’t already get from a screen.
Otherwise, you may as well open Gemini in a browser tab on a tablet, duct-tape it to a Roomba, and call it a day.
OpenAI just rolled out a major memory upgrade for ChatGPT. Though subtle, I think it could mark a significant shift in how people engage with AI, certainly in the long term.
Before now, ChatGPT's memory was limited to the current session unless ChatGPT decided a detail should be added to long-term memory, or you saved it manually. Otherwise, every new conversation was a clean slate.
Now, ChatGPT can pull from your entire chat history across every session to respond to your latest query. It knows your vibe and can track your projects. It will remember things from your discussions even if you might have forgotten.
It still has the user-saved memory that you deliberately ask it to store, but now, every little comment and question will also be part of how ChatGPT processes conversations with you, like a polite robot intern who’s secretly keeping a journal. If you want to find out what ChatGPT's image of you is, you can just ask it to "Describe me based on all our chats."
You might not think this is such a big change, but as someone who's become a regular user of ChatGPT, I can easily imagine how it will benefit me. When I ask for a recipe idea, ChatGPT will now pull up previous recipes it's provided and ask if I liked the result, coming up with new meal ideas based on my opinion of the earlier one.
The same goes for brainstorming bedtime story ideas. I almost never want it to write one entirely, but I do get some inspiration from the premises ChatGPT suggests, and now it will be better at riffing on the ideas we've discussed before.
While new features and improvements to AI chatbots can sometimes feel like a lot of noise for something that isn't that big a deal, persistent memory feels like real progress just by being a feature built for the long term. Maintaining context across interactions makes it easier for the overall 'relationship' to feel more meaningful.
It also opens the door to new use cases. Imagine tutoring that adapts to your learning style across weeks. Or therapy journaling with an AI that remembers what you said three sessions ago. Or productivity planning that doesn’t need to be re-explained every Monday morning. You don’t need the AI to be sentient as long as it's consistent.
Memorable moves

ChatGPT's memory improvement isn't without complications, though. Having an AI remember you across time inevitably raises questions about privacy, autonomy, and, frankly, how much information you want your AI companion to have.
Yes, it’s helpful that it remembers you’re kosher and like a bit of spice in your dishes, but you don't want it to assume too much.
This is pretty specific to just me, but I do a lot of tests of ChatGPT and its features, and not every test is built around my real life. I'm not traveling to Japan next week; I just wanted to see how ChatGPT would do at devising an itinerary. I then have to either delete that session or explain to the AI that it shouldn't use that question when formulating answers to other questions.
There’s also a philosophical element. The more AI mimics memory, the easier it becomes to anthropomorphize. If it remembers your favorite sports team, your pet’s name, or your dislike of semicolons, it starts to feel like a person, and it's vital to not ascribe self-awareness to an algorithm that is far from attaining it. It’s easy to trust a tool that remembers you. Maybe too easy in this case.
Nonetheless, for good or ill, I maintain that ChatGPT's comprehensive memory is one of the most consequential AI upgrades this year so far and will likely still be so when 2025 is over.
Memory is a potent trick, even if it doesn't let you make a Studio Ghibli version of yourself. Memory is the thing that turns an inert tool into a long-term assistant. Even if your assistant is just a digital emulation of a brain floating in a cloud, it's nice that it will remember the little things.
Amid a serious escalation of hostilities between the two nations, senior Chinese officials have apparently acknowledged behind closed doors that Beijing was involved in a series of cyberattacks on US critical infrastructure.
These attacks saw Chinese Volt Typhoon hackers infiltrate US critical infrastructure systems for years, compromising the energy, communications, transportation, and water sectors.
China had previously denied any involvement in these attacks, but the Wall Street Journal now reports Beijing officials admitted involvement in an “indirect and somewhat ambiguous” way, interpreted by US officials as a “warning to the U.S. about Taiwan.”
Escalating tensions

News about increasing reciprocal tariffs between the two nations is pretty unavoidable, but the trade war is not the only stage for offensives. US officials are reportedly considering pursuing cyber strikes against China, and security experts warn that China is poised to retaliate against tariffs with a “Typhoon” attack – referring to the hacking groups Salt Typhoon and Volt Typhoon.
This news comes after the Trump administration has implemented mass federal layoffs, which a former NSA cybersecurity director has warned will have a “devastating impact on cybersecurity”.
These admissions are, of course, likely to be a tactical move from China to underscore its own capabilities and willingness to use them.
For example, the Salt Typhoon attack on telecoms networks is considered a “historic counterintelligence failure”, and some officials even believe the group still lurks on US networks.
Previously, the US State Department had opposed Taiwanese independence, but under Trump that stance seems much more uncertain, and escalating tensions between China and the US could lead to cyber offensives on both sides.
Taiwan has a strong economy, and crucially, is home to manufacturers of semiconductors - computer chips which are essential to almost all modern technology, and are used in satellite systems, phones, laptops, and AI.
I love coffee, but I also love my sleep, so after about 2pm I always switch to decaf to avoid being kept awake at night. It works well, but opting for decaf generally means you miss out on some of the more unusual flavors around – like the double-fermented passionfruit beans I got from my local coffee roaster recently, or Nespresso's white chocolate and strawberry coffee pods, which are an unlikely but delicious combination of flavors.
My Speciality Coffee Association (SCA) instructor calls it 'cheating coffee', but when it tastes this good, who cares?
Thankfully, just as alcohol-free beer is now mainstream and varied, roasters and manufacturers are starting to get more creative with decaf – and Nespresso's new Sweet Vanilla Decaffeinato pods are so comforting, they might just become my new favorite bedtime drink.
The Sweet Vanilla Decaffeinato pods work in any Nespresso Vertuo machine (Image credit: Future)

Coffee beans can have notes of vanilla by themselves, depending on the variety and the roast, and when extracted correctly (a tricky process to get right), coffee has a natural sweetness. However, it's quite subtle, and if you want something more dessert-like, a coffee with added flavor is the way forward.
Decaffeinated coffee is made by removing the caffeine from green (unroasted) coffee beans by dissolving it in water. There are a few different ways to achieve this, and Nespresso uses two of them. The first involves simply soaking the beans in hot water to dissolve the caffeine (known as the Swiss water process). This process leaves behind no residue that could alter the taste and character of the coffee.
The second method (the carbon dioxide process) is more efficient. Again, it involves soaking the beans in water to make them porous, but this time the soaked beans are placed in a pressurized container and exposed to CO2, which dissolves the caffeine.
Once caffeine has been extracted from the beans, it can be re-used to make high-caffeine drinks like Nespresso's energy-boosting functional coffees.
The vanilla flavor works well with barista oat milk (Image credit: Future)

After loading a pod into my Nespresso Vertuo Pop (one of the best Nespresso machines around if you need something compact) and hitting the brew button, I was left with a cup full of creamy decaf coffee with a generous layer of foam.
It's delicious by itself, and the added flavor doesn't overpower the taste of the beans, but I enjoy a milky bedtime drink and vanilla typically works well with dairy, so I was keen to see how it would hold up as a white coffee.
The Sweet Vanilla Decaffeinato pods are mug-sized, meaning they'll fill a 230ml mug, but I still had room for a little caramel-flavored barista oat milk, or regular dairy milk. Both combine very nicely with the coffee, though regular milk will work best if you don't have a particularly sweet tooth.
Hopefully we'll see even more decaf options soon – cheating or otherwise.
IBM has announced the z17, a new mainframe to address growing AI demands on enterprise infrastructure.
Positioned as a foundation for hybrid cloud environments, and with support for real-time AI and enterprise-grade resilience, the IBM z17 is designed to handle transaction-heavy workloads, improve operational efficiency, and address security concerns in industries with stringent compliance needs.
Central to the new mainframe is the Telum II processor, which was originally announced at Hot Chips 2024. Developed using Samsung 5nm technology, it integrates an on-chip AI coprocessor to support inferencing tasks, including small language models with fewer than 8 billion parameters.
Big on security

Alongside the processor, Big Blue plans to offer the Spyre Accelerator card (also originally previewed at Hot Chips) to complement the Telum II and extend AI compute capabilities to unstructured data processing, such as text-based generative AI.
The z17 can accommodate up to 48 of these accelerator cards, allowing scalability across enterprise workloads. The Spyre Accelerator is expected to be available in Q4 2025 as a PCIe card.
Security is a big focus for the IBM z17 and includes AI-powered features such as Sensitive Data Tagging for z/OS and IBM Threat Detection for z/OS, both of which use natural language processing to identify and protect sensitive data or scan for potential threats.
In addition, it supports NIST-standardized quantum-safe cryptographic algorithms to address future regulatory requirements.
The z17 system also incorporates a new data processing unit to accelerate I/O protocols for networking and storage.
IBM says it expects application developers to benefit from AI-driven assistants that can automate tasks across the software development lifecycle, improving productivity and reducing skill transition issues in mainframe environments.
Transactional AI use cases such as fraud detection, money laundering prevention, and anomaly detection can now be deployed closer to the data source, IBM says, with support for multi-model inference to improve accuracy and reduce false positives.
"The industry is quickly learning that AI will only be as valuable as the infrastructure it runs on," said Ross Mauri, general manager of IBM Z and LinuxONE, IBM.
"With z17, we're bringing AI to the core of the enterprise with the software, processing power, and storage to make AI operational quickly. Additionally, organizations can put their vast, untapped stores of enterprise data to work with AI in a secured, cost-effective way."
Available in configurations that support up to 208 processors and 64TB of memory, the z17, which is the culmination of five years of design and development, is designed to operate at 5.5GHz and comes housed in up to four frames. While aimed at critical workloads, IBM is also positioning it as part of a larger hybrid cloud strategy.
IBM also took the wraps off z/OS 3.2, the next version of its flagship operating system for IBM Z systems. This is planned for release in the third quarter of 2025.
Fujifilm has never exactly followed the herd, but even by its standards the company's next compact camera will be an eccentric one – if new rumors about the so-called 'X-Half' are to be believed.
Fuji Rumors recently shared what it claims is the first leaked image of the camera, and has now followed that up with a new post that gives it a name. The X-Half, it seems, will be a compact camera with a 1-inch sensor that rivals the many half-frame cameras that have become popular among young snappers in recent years.
What is half-frame? As the Pentax 17 and Kodak Ektar H35 show, the format – traditionally found on 35mm film cameras – sees shots taken in vertical format, effectively giving you twice as many photos from a roll of film. The X-Half's twist, according to rumors, is that it'll be digital and, potentially, a bit more desirable than those two cameras.
Another reason why half-frame cameras are popular is because they easily let you create 'diptych' images, or two side-by-side frames. This lets you juxtapose two different angles on the same subject, which is a very social media-friendly trick. To help you compose these shots, Fuji Rumors claims the X-Half will have a vertical LCD on its rear panel.
Like Fujifilm's other big hits – notably the Fujifilm X100VI – the X-Half will also seemingly again blend digital convenience with film-like charm. The leaked specs include an optical viewfinder (which should keep the price down, compared to an EVF), a retro, Leica-like design, and a few manual controls including an exposure compensation dial. It'll also apparently have a fixed lens with an f/2.4 aperture.
There's unfortunately no rumored release date for the X-Half yet. But with speculation on the rise, it seems possible that Fujifilm could launch it in time for the summer season in the US and UK – assuming tariff-related complications don't derail it.
Analysis: A fun idea, if not for the hardcore Fuji faithful

The recent Fujifilm GFX100RF (above) sits somewhere towards the opposite end of the scale to the rumored X-Half (Image credit: Tim Coleman)

With many Fujifilm fans patiently waiting for more 'serious' cameras, like an X-Pro 4, these X-Half rumors probably aren't what many have been waiting for – but I'm happy to see something new on the horizon.
While the idea of a digital half-frame camera seems odd on paper – after all, you don't need to worry about saving film costs with digital – the X-Half could definitely find an audience among those who want a retro sidekick that's different from their smartphones, but easier to use than the best film cameras.
In that sense, it'll likely have more in common with Fujifilm's Instax series than pricier pro models like the GFX100RF. If it is Fujifilm's next launch, you could see it as the perfect flip-side to the latter, which is a medium format powerhouse that costs $4,899 / £4,699 / AU$8,799. The X-Half could, instead, be a compact that caters to a new, younger audience who want something with a bit more substance than an Instax.
That may leave Fuji fans who sit in between those two extremes feeling a little unloved, but a Fujifilm X-E5 is still rumored to be en route in 2025. For now, it seems Fujifilm is doing what originally brought it such big success in the early days of mirrorless cameras – hopping on new photographic trends with its own unique twist.