In March, AWS announced the general availability of its new multi-agent capabilities, bringing the technology into the hands of businesses across almost every industry. Until now, organizations have mostly relied on single-agent AI systems, which handle individual tasks but often struggle with complex workflows.
These systems can also break down when businesses encounter unexpected scenarios outside their traditional data pipelines. Google also recently announced ADK (Agent Development Kit) for developing multi-agent systems and A2A (Agent to Agent) protocol for agents to communicate with each other, signaling a broader industry shift toward collaborative AI frameworks.
The general availability of multi-agent systems changes the game for startups. Instead of a single AI managing tasks in isolation, these systems feature robust and manageable networks of independent agents working collaboratively to divide skills, optimize workflows and adapt to shifting challenges. Unlike single-agent models, multi-agent systems operate with a division of labor, assigning specialized roles to each agent for greater efficiency.
They can process dynamic and unseen scenarios without requiring pre-coded instructions, and since the systems exist in software, they can be easily developed and continuously improved.
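To make the division-of-labor idea concrete, here is a minimal sketch in plain Python of how specialized agents might hand work between one another. The agent names and the simple orchestrator are hypothetical, for illustration only; a real system would back each agent with an LLM and a framework such as Google's ADK.

```python
# Hypothetical sketch of a multi-agent division of labor.
# Each agent owns one specialty; an orchestrator routes work between them.

class ResearchAgent:
    """Gathers raw findings for a topic (stubbed; a real agent would call an LLM or search API)."""
    def run(self, topic: str) -> list[str]:
        return [f"finding about {topic} #1", f"finding about {topic} #2"]

class SummaryAgent:
    """Condenses another agent's output into a short report."""
    def run(self, facts: list[str]) -> str:
        return f"Report ({len(facts)} findings): " + "; ".join(facts)

class Orchestrator:
    """Divides the workflow: research first, then summarization."""
    def __init__(self):
        self.researcher = ResearchAgent()
        self.summarizer = SummaryAgent()

    def handle(self, topic: str) -> str:
        facts = self.researcher.run(topic)   # specialist 1: gather
        return self.summarizer.run(facts)    # specialist 2: condense

report = Orchestrator().handle("competitor pricing")
print(report)
```

The point of the structure is that each agent can be improved, swapped, or scaled independently, which is what makes these systems easier to evolve than a single monolithic model.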
Let's explore how startups can leverage multi-agent systems and ensure seamless integration alongside human teams.
Unlocking value for startups

Startups can leverage multi-agent systems across several critical business functions, beginning with research and analysis. These systems excel at data gathering, web searches, and report generation by retrieving, organizing and dynamically refining information.
This allows systems to streamline complex research workflows, enabling startups to operate more efficiently and make informed decisions at scale. Meanwhile, in sales processes, multi-agent systems improve efficiency by automating lead qualification, outreach and follow-ups. AI-driven sales development representatives (AI SDRs) can automate these repetitive tasks, reducing the need for manual intervention while enabling teams to focus on strategic engagement.
Many startups may also need to extract structured data from unstructured sources. For example, multi-agent systems automate web scraping and adjust to website format changes in real time, eliminating the need for continuous manual maintenance.
Unlike traditional data pipelines that require constant debugging, multi-agent systems autonomously manage tasks, reducing the need for large development teams. This is particularly useful for startups as they can ensure up-to-date data without expanding technical teams too quickly.
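As a rough illustration of that self-correcting behavior, the sketch below (plain Python, with hypothetical page layouts and selectors) tries several extraction strategies in order and falls back when a site's format changes. In a production multi-agent system, an LLM-backed agent would propose the fallback rather than working from a fixed list.

```python
import re

# Hypothetical sketch: an extraction "agent" that tolerates website format
# changes by trying several known layouts instead of one hard-coded selector.

def price_from_span(html: str):
    # Layout 1: price rendered in a <span class="price"> element
    m = re.search(r'<span class="price">\$([\d.]+)</span>', html)
    return float(m.group(1)) if m else None

def price_from_meta(html: str):
    # Layout 2: price exposed via a schema.org <meta> tag
    m = re.search(r'<meta itemprop="price" content="([\d.]+)"', html)
    return float(m.group(1)) if m else None

STRATEGIES = [price_from_span, price_from_meta]

def extract_price(html: str):
    """Try each known page layout until one yields a price."""
    for strategy in STRATEGIES:
        price = strategy(html)
        if price is not None:
            return price
    return None  # every layout failed; a real agent would escalate here

old_layout = '<span class="price">$19.99</span>'
new_layout = '<meta itemprop="price" content="24.50">'
print(extract_price(old_layout), extract_price(new_layout))
```

The fallback chain is what removes the "constant debugging" burden: when a site redesign breaks one strategy, the pipeline degrades gracefully instead of failing outright.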
How businesses can implement multi-agent systems

Startups seeking outsized results from these systems can take two impactful approaches.
One option is purchasing existing solutions to replace complex data flows and human-driven processes. This is the most cost-effective choice for many startups, as they can automate and replace complex sales pipelines and make data workflows more robust, reducing reliance on humans for repetitive tasks.
But for startups with unique operational needs, developing a multi-agent system in-house is ideal. Traditional systems require coding for every possible scenario – a rigid and time-consuming approach that is prone to human error. Multi-agent systems, in contrast, adapt dynamically to unforeseen complexity rather than needing every scenario spelled out in advance, making them a more flexible and scalable alternative.
Regardless of whether startups buy or build, multi-agent systems provide a game-changing opportunity to streamline operations, reduce manual workloads and improve scalability.
Overcoming challenges in AI integration

Despite their advantages, integrating multi-agent systems comes with certain challenges. Decision-making by agents within a multi-agent system isn't always transparent, since the systems often rely on large language models (LLMs) with billions of parameters. This makes failures hard to diagnose, especially when a system works in one case but fails in another.
Additionally, multi-agent systems deal with dynamic, unstructured data, meaning they must validate AI-generated outputs across varied input sources – from websites and documents to scanned files and chat or meeting transcripts. This makes balancing robustness to change with accuracy a greater challenge. Beyond this, multi-agent systems can struggle to maintain effectiveness over time, requiring monitoring and updates in response to input source changes, which often break traditional scraping methods.
Startups can overcome these challenges by embracing new tools such as LangFuse, LangSmith, HoneyHive and Phoenix, which are designed to enhance monitoring, debugging and testing in multi-agent environments. Equally important is fostering a workplace culture that embraces AI agents as collaborators, not replacements. Startups should ensure buy-in across stakeholders and educate employees on the value of AI augmentation to enable smooth adoption.
Transparency is also key. Founders must be open with staff about how multi-agent systems will be used to ensure a smooth collaboration between human and AI coworkers.
Achieving outsized results

The AI field is moving fast, making it difficult for experts, let alone everyday users, to keep up to date with each new model or tool that is released. Some small teams may therefore see multi-agent systems as unattainable.
However, the startups that successfully implement them into their workstreams – whether by purchasing or building custom solutions – will gain a competitive edge. Multi-agent systems bridge the gap between AI and human collaboration that can’t be achieved with traditional single-agent systems.
For startups focused on growth, multi-agent systems are the best tool in their arsenal to compete with incumbents who might be stuck with an outdated tech stack. The ability to streamline operations, reduce manual workload, and scale intelligently makes multi-agent systems an invaluable tool in achieving outsized results.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Fujifilm has outdone itself with the new X half – a retro compact camera that packs some of its wackiest and outright funnest ideas yet, all inspired by film photography.
There’s a clue to the X half’s inspiration in the name – it’s a digital reimagining of half-frame film cameras like the Pentax 17. I've already tried the X half, and it was a much needed dose of fun – check out my X half hands-on review.
To facilitate the half-frame format, the X half's 18MP JPEG photos are taken in 3:4 vertical format, recorded onto a vertical 1-inch sensor, and composed using the unique vertical LCD.
Alongside that fixed screen is a secondary screen that mimics the film canister window you see on many film cameras, and there's a fun surprise here – it’s touch sensitive, and allows you to swipe up or down to select one of Fujifilm’s Film Simulations. If this charming feature doesn't make its way into future Fujifilm cameras, I'd be shocked.
Film Simulation color effects are well known – they're inspired by Fujifilm film stock, and have helped to cement Fujifilm’s popularity over the last 10 years through cameras like the X100VI. The X half offers a stripped-back selection of 13 popular Film Simulations, including Provia and Astia.
You'd think all of the above would be enough to secure the X half's unique status, but Fujifilm has really let loose, with even more features for film photography fans to enjoy.
The LCD emulating a film canister window with Velvia Film Simulation, and the vertical LCD (Image credit: Tim Coleman)

Simulating film to another level

Going one step further than that twin-screen combo and vertical shooting, there's a Film Camera mode. This locks in your chosen Film Simulation and camera settings such as ISO, and disables the screen preview, leaving you to compose your shots via the optical viewfinder instead, as if you're shooting with film.
Once your ‘film’ is used up – either 36, 54 or 72 shots – you can exit the mode and view the screen once more, and make changes to settings again.
Film Camera mode is such a fun feature, and for me is the closest experience to film photography that I've had using a digital camera – and it's optional.
The film wind lever tucked in with the camera off. In the on position, the lever sticks out for an easy reach. (Image credit: Tim Coleman)

Then there's what is in effect a film wind lever, which in this case is 'cranked' to create diptychs – that's two vertical shots side by side. These are recorded individually through the vertical 1-inch sensor, but then composited afterwards and displayed just like you'd get with a half-frame film camera on a roll of 35mm film.
Again, you can take or leave the diptych feature. I reckon it's a nice to have – working out how image pairs complement each other stretches your creative muscles.
We also get some completely new picture effects, almost all of which are film photography-inspired and include light leak, expired film and halation.
Full HD video capture is also possible, and the diptych effect can be applied to both photos and videos, which is really neat.
This is all packaged in a palm-sized, premium-feel compact that features a fixed 32mm f/2.8 lens with a mechanical aperture, plus the same battery as used in cameras like the X100VI for an 880-shot life, and which weighs just 240g.
Fujifilm X half in charcoal (left), silver (middle) and black (right). (Image credit: Tim Coleman)

Fujifilm has created a dedicated app for the X half, which can be used to make diptychs, and upload and view images, plus the camera can connect wirelessly to one of Fujifilm's Instax printers for on-the-go printing.
The app wasn't available when I tested the camera, but will be downloadable from early June. Meanwhile, the Fujifilm X half itself will be available globally from June 12 in silver, charcoal and black, and costs $849 / £699 / AU$1,349.
I’ve been reviewing digital cameras for 15 years, and the Fujifilm X half has to be one of the funnest yet – a compact camera with a difference. You can configure it in a way that’s as close to a film camera as you’re going to get with digital, plus it packs the retro look and feel that we’ve come to expect from Fujifilm.
I write about vacuum cleaners for a living, and while performance varies, most new models these days tend to look roughly the same.
So when news of a new addition to the Dyson vacuum lineup landed in my inbox, I expected to see something similar to its existing models: slick and high-quality, but not especially distinctive or surprising.
How wrong I was.
The newly unveiled Dyson PencilVac doesn't just have an unusual name – it's all-round one of the most distinctive vacuums I've seen. This brand knows what it's doing in this marketplace: it makes some of the best cordless vacuums you can buy, and today's very best Dyson vacuums include features you still can't find anywhere else.
So while the PencilVac strays a long way from the tried-and-tested formula of what works for vacuum cleaners, I'm very optimistic about its performance. Here's a rundown of the most intriguing features in this new launch...
1. It's ridiculously thin

The most immediately noticeable thing about the PencilVac is that it's incredibly streamlined. Without the floorhead, the whole thing is 1.5 inches / 3.8cm in diameter. To make that possible, the brand had to develop a tiny new motor – the Dyson Hyperdymium 140k motor is just 1.1 inches / 2.8cm wide, and hidden entirely within the handle.
The PencilVac is also impressively lightweight, clocking in at 4lbs / 1.8kg. For context, the lightest option in our best cordless vacuum roundup right now is 5.7lbs / 2.6kg, and there are a number of models that weigh over 6.6lbs / 3kg.
All the PencilVac's mechanics are shrunk down and fitted inside the handle (Image credit: Dyson)

Generally, when you shrink down a vacuum, you sacrifice power. That's why handheld vacuums tend to be much less 'sucky' than full-sized options. That holds true for the PencilVac – there's 55AW of suction, compared to 115AW for the V8 (the oldest Dyson stick vacuum in the current range) and a massive 280AW for the latest-and-greatest Gen5detect. However, while it's unlikely to be suitable for a truly deep clean, that's still a decent amount of suction for the size and weight.
As a side note, the 1.5-inch / 3.8cm diameter isn't incidental. Brand founder James Dyson says, "I have long wanted to make a vacuum of only 38mm diameter (the same as my latest hair dryer, the Supersonic r)". The Dyson Supersonic r is the pipe-shaped dryer that was originally released for professionals only, but recently joined the main consumer range.
2. There are cones instead of rollers

Moving down to the business end, you'll find the new 'Fluffycones' floorhead. It sounds like a Pokémon, but it's actually a reimagined cleaner head. Vacuums traditionally have one brush roll, maximum two, and they're tube-shaped. The Dyson PencilVac has four brushrolls, and they're all conical.
There's logic to the tapering shape: it helps direct long hair along the roll and into the dust cup, whereas with parallel rollers the hair tends to just wrap around and stay there, until you rip it off or attack it with scissors. Dyson's hair screw tool also has a conical brush roll, and works exactly as it's meant to when it comes to tackling long hair.
Rather than one parallel brushroll, the PencilVac has four tapering rollers (Image credit: Dyson)

The cones project out at the sides so they can clean right to the edges of rooms, and the whole thing can lie flat to the ground, with a clearance of just 3.75 inches / 9.5cm off the floor.
I'm interested in Dyson's description of the rollers as 'fluffy', because in the brand's vocabulary that usually indicates a soft roller for use on hard floors only. In fact, the more I look at this vacuum, the more I'm convinced it's a specialist model just for use on hard floors. It's not specified in the press material I have so far, but it would make sense with the lower suction and smaller dust capacity.
3. There's no visible dust cup

One of the most baffling things about the PencilVac is that it doesn't appear to have a dust cup. Of course, there is one – like the motor, it's hidden away inside the handle.
The capacity is next-to-nothing: just 0.08L. However, Dyson has introduced a dust compression system, which uses air to squish down the particles so they take up as little room as possible. Dyson claims that means it can hold five times the physical volume.
The dust cup is also hidden within the handle (Image credit: Dyson)

The emptying process has also been reimagined, with a push-lever system replaced by an exciting-sounding "syringe, no-touch bin ejection mechanism".
As it pushes out dust and debris, the mechanism simultaneously wipes the 'shroud'. I'm not totally clear what the 'shroud' is in this context, but I do know that keeping the internal mechanisms clean is key to efficient vacuum performance, so this seems like a good thing.
4. The floorhead glows and appears to float

As well as siphoning off hair as you clean, the floorhead cones have another trick up their sleeve. The cones rotate in opposite directions, the aim being that this vacuum cleans just as well when it's pushed forward as when it's pulled back. This is a bit of a weak spot on the regular Fluffy floorhead – it has no trouble sucking things up when moving forwards, but pull it back and debris will pool behind it.
I'm intrigued to see how this new approach works in practice – especially because Dyson describes it as "floating" across the floor. I wonder, too, if it might make this vacuum reversible altogether, given the fact that the handle section looks very symmetrical.
(Image credit: Dyson)

Dyson has also added "laser-like" illumination to both the front and back of the floorhead. This is another feature borrowed from the existing Fluffy floorhead, and helps create big shadows on the tiniest bits of dust, which might otherwise go unnoticed. It only works on hard floors, which is another indication this vac is likely not for carpet.
5. There's a tool that looks like a chimney brush

There's an intriguing addition to the tool lineup in the form of a 'Rotating combi-crevice tool', designed for cleaning in awkward gaps. This seems especially geared towards cleaning high-up, where it can be tricky to get your angles correct. It makes particular sense for an ultra-light vacuum like this one, which is far easier to lift above your head than your average stick vacuum.
As an aside, it looks like the PencilVac is button- rather than trigger-operated. That's dictated by the streamlined shape, but it's also great news for maneuverability and ease of use – the fact that many Dyson vacs still use a trigger to turn on is a perpetual bugbear of mine.
You'll also get a Conical hair screw tool, similar to the one included with the newest Dyson stick vacuums, for tackling long hair on furniture. Both can be stored on the magnetic charging dock.
The Rotating combi-crevice tool looks perfect for cleaning up high (Image credit: Dyson)

6. It's app-connected

I'm much less excited about this feature, but it feels like I should point out that this is the first Dyson cordless vacuum to connect to the MyDyson app. The app will provide more information about battery life and also report on filter status. However, there's also a screen on the vacuum itself showing remaining battery, so I'm hoping the app connection is an optional extra rather than essential.
There's a companion app, but key information is also shown on the vac's screen (Image credit: Dyson)

Price & availability

The PencilVac will arrive in Australia first, with launch scheduled for August 2025. It's due to go on sale in the UK sometime in 2026, and I'm awaiting info as to if/when it will come to the US. As of yet I don't have any pricing info at all – I'll update this article with more details when I have them.
Google has rolled out a new AI-powered shopping feature to help you figure out what the clothes you are interested in buying might look like when you wear them. It's dubbed "try it on" and it's available right now in the US through Google Search Labs.
To get started, you just need to switch it on in the lab. Then, you upload a full-length photo of yourself and start looking for clothes in the Google Shopping tab.
When you click on an image of some outfit from the search results, you'll see a little "try it on" button in the middle of the enlarged version of the outfit in the right-hand panel. One click and about ten seconds later, you'll see yourself wearing the outfit. It may not always be a perfect illusion, but you'll at least get a sense of what it would look like on you.
Google claims the whole thing runs on a model trained to see the relationship between your body and clothing. The AI can, therefore, realistically drape, stretch, and bunch material across a variety of body types.
The feature doesn't work with every piece of clothing you might see, or even every type of outfit. The clothing retailer has to opt into the program, and Google said it only works for shirts, pants, dresses, and skirts.
I did notice that costumes and swimwear both had no usable images, but I could put shorts on myself, and costumes that looked enough like regular clothes were usable. The AI also didn't seem to have an issue with jackets and coats as categories.
Elvis looks

(Image credit: Photo/Google AI)

For instance, on Google Shopping, I found replicas of the outfits Elvis wore for his 1968 comeback and one of his jumpsuits from the 1970s. With a couple of clicks, I could imagine myself dressed as the King in different eras.
It even changed my shoes in the all-black suit. I'd always wondered if I could pull off either look. The images are shareable, and you can save or send them to others from the Google mobile app and see how much of an Elvis your friends think you are.
Super summer

(Image credit: Photo/Google AI)

The details that the AI changes to make the photos work are impressive. I used the AI to try on a fun summer look and the closest to a superhero costume I could try. The original photo is me in a suit and jacket with a bowtie and black dress shoes. But the shoes and socks on both AI-generated images not only match what was in the search result, but they're shaped to my stance and size.
Plus, despite wearing long sleeves and pants, the AI found a way to show some of my arms and legs. The color matches reality, but its imperfections are noticeable to me. My legs look too skinny in both, like the AI thinks I skipped leg day, and my legs in the shorts have not been that hairless since I turned 13.
Imperfections aside, it does feel like this will be a major part of the next era of e-commerce. The awkward guessing of whether a color or cut works for your skin tone and build might be easier to resolve.
I wouldn't say it can make up for trying them on in real life, especially when it comes to sizing and comfort, but as a digital version of holding an outfit up against you while you look in a mirror, it's pretty good.
Ending unnecessary returns

(Image credit: Photo/Google AI)

Uncanny as some of the resulting images are, I think this will be a popular feature for Google Shopping. I'd expect it to be heavily imitated by rivals in AI development and online retail, where it isn't already.
I particularly like how the AI lets you see how you'd look in more outlandish or bold looks you might hesitate to try on at a store. For example, the paisley jacket and striped pants on the left or the swallowtail jacket and waistcoat with Victorian trousers on the right. I'd hesitate to order either look and would almost certainly plan on returning one or both of them even before they arrive.
Returns are a plague on online retailers and waste tons of packaging and other resources. But if Google shows us how we’d look in clothes before we buy them, it could chip away at return rates; retailers will race to sign up for the program.
It could also open the door to more personalized style advice from AI. You could soon have an AI personal dresser, ready to give you a virtual fit check and suggest your next look, even if it isn't something Elvis would have worn.