US organizations alone are wasting $2.2 billion every year on rehiring IT and tech talent due to poor onboarding experiences, new research has claimed.
A report from Nexthink found that substandard onboarding is linked with high turnover rates, meaning more workers are likely to leave their roles and companies are forced to invest in new workers, subjecting them to equally bad experiences.
Of the more than 117,000 IT or tech hires that take place in the US every year, over 29,000 (or around one-quarter) will likely leave their roles due to their initial experiences.
Proper onboarding can decrease turnover rates
Nexthink's findings blamed poor onboarding experiences on the fact that IT teams tend to have only a few days to equip new hires, leaving limited time not only to make the right impression but also to give an indication of the company’s operational efficiency.
The research also points to rushed setups, leading to technology issues and a lack of proper access, which can often be caused by hiring managers failing to communicate the necessary tools and permissions to IT in advance.
High turnover rates among new employees are also having a negative impact on existing workers, reducing morale and making them more likely to want to leave the company, amplifying the effect. The resulting damage to an employer's reputation could also make potential recruits less likely to want to onboard with a company.
The report calls for HR and IT departments to work together more closely, forming a ‘Super Team’ to understand the needs of new starters. The three takeaways highlighted by Nexthink are that an interdepartmental shared understanding should be developed, real user feedback and use data should be analyzed, and that workflows should be automated wherever possible to kickstart recruitment processes and make them more efficient.
From Netscape to Chrome, the consumer browsers we have used since the dawn of the internet were first built for a singular need: accessing information. They worked brilliantly for that purpose, which is why – thirty-five years later – the browser remains one of the most pervasive consumer-grade technologies on the planet.
In the early browsing days, the internet just consisted of websites. Applications, on the other hand, lived outside the browser. Until one day someone had the brilliant idea to deliver apps inside the browser itself. Users have been accessing web apps ever since, from communications platforms to online banking. Today, it is the most pervasive way to engage applications on the planet. Extensions were tacked onto the browser, adding productivity and other features to extend the consumer browser’s capabilities.
However, consumers have not been the browser’s only users. Over the past three decades, enterprises – from banks and manufacturers to hospital systems and universities and beyond – have inextricably integrated the browser (as well as myriad apps and extensions) into their everyday operations.
Here’s where the friction starts to arise.
Unchanged browser, evolving needs
The core functionality of the traditional browser has remained largely unchanged, continuing to serve its original purpose for consumers without evolving to meet the enterprise's specific security and productivity needs. In fact, when you consider that the consumer browser must support billions of users worldwide, it must have a great deal of openness and flexibility to meet a wide array of consumer and advertiser needs. After all, consumer browsers were not designed to be a safe application delivery platform. They were designed for accessing websites and content.
Therefore, it’s no surprise that enterprise IT teams have always fought an uphill battle to place control around consumer browsers, not to mention browser extensions, which number more than 200,000 today. When these consumer browsers are used, security teams must layer on complex stacks of tools to secure their environments. Further, applications teams must commonly bolt on Virtual Desktop Infrastructure (VDI) environments and Virtual Private Networks (VPN) for connectivity. These measures are expensive, inefficient, and ineffective.
It’s not a knock on consumer browsers – these tools were simply never designed for enterprise needs.
It’s like putting a Rolls-Royce in the Daytona 500: the car may be perfect in its own right, but it isn’t built to perform in that environment.
The browser designed for the enterprise
That’s where the enterprise browser comes in.
The enterprise browser uses its native mechanics to deliver corporate applications while embedding security, control, and productivity features directly into the browser itself – retaining the same experience users have enjoyed for decades while eliminating the need for complex add-on application delivery technologies and security stacks to keep them safe.
In an enterprise browser, security teams have full visibility into what employees can see and do at the appropriate times. Security features are native, from zero-trust and data loss prevention to session isolation and encryption. Workforce enablement is seamless, nearly eliminating the need for VPNs or virtual desktops for secure access to corporate resources. And access to cloud applications is fine-tuned without extra security tools.
For example, the enterprise browser can empower the user to freely engage personal applications such as ChatGPT, personal email, etc., while preventing users from copying sensitive data from corporate applications into such personal applications. It can enforce role-based access, ensuring that users only see and interact with resources that are appropriate for their role. And it can log and monitor all browser activity for security and compliance purposes where needed, providing visibility into who accessed what, from where, and how.
It can govern who uses what extensions under what circumstances and shut down high-risk extensions and access to unsanctioned GenAI websites in real-time. All while preserving privacy and without adding user delays or disruptions.
It’s a win for CIOs, CTOs, CISOs, and users. Employees enjoy faster, more efficient workflows in a familiar browser-based experience, while leadership gains visibility, compliance, security, and cost savings.
Secure browsers are not enterprise browsers
And what about secure browsers? Are they enough to address enterprise needs and issues? In a word, no. It’s a lingering misconception that an enterprise browser equals a secure browser. They are fundamentally different. Sure, an enterprise browser is a highly secure environment to operate within, but the concept of an enterprise browser is so much more than the old-school secure browser approaches.
Secure browsers were built primarily to prevent security breaches, relying on clunky and restrictive measures that can interfere with necessary work tasks. These browsers are often virtualization engines wrapped around consumer browsers. They degrade the user experience while offering little to no enterprise-level control.
The bottom line is that security is table stakes for an enterprise browser. But the enterprise browser is designed from the ground up as an application delivery platform, built to secure and optimize the entire enterprise IT environment while giving the user a natural and familiar environment to operate within. It is an optimistic landscape where the user has freedom and comfort while the organization can rest easy knowing its applications and underlying data are secure. Indeed, that’s why we coined the term “enterprise browser.”
Different cars for different tracks
Today’s consumer browser has evolved beyond a window to the web to an unwitting participant for application consumption. It helps billions of people communicate across borders, learn new skills, watch their favorite sports, manage their money, and more – and drive trillions of dollars in ecommerce every year. Like the Rolls-Royce, it’s an engineering marvel that has stood the test of time.
But at the end of the day, the enterprise requires a different vehicle. The enterprise browser finally delivers on challenges that have thus far been out of reach for its consumer counterpart – empowering organizations to safeguard data, enforce policies, gather app and user insights, and more, all without compromising performance. Both have value, but enterprise demands require a different approach that can change everything.
We've listed the best mobile app development software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
In the datacenter world, one of the biggest challenges – and one of the greatest opportunities – is heat. When server components operate, all the electrical energy they consume is ultimately converted into heat, which must then be removed. Traditionally, organizations have relied on air cooling, typically using fans to manage this heat. In fact, according to sources, 80% of datacenters still primarily use air cooling to get rid of heat from server components.
However, air cooling is both inefficient and energy-intensive. According to a recent McKinsey study, air cooling can account for 40% of all energy consumption in the datacenter – it’s no surprise, then, that 40% of datacenter operators are exploring alternative cooling methods in search of eco-friendly, cost-effective options to support sustainable datacenter operations.
At a fundamental level, the shift to liquid makes sense. As anyone who has flown a kite knows, air can be tricky to direct and manage. Fluid is much easier to control – as anyone who’s squirted a water pistol (or been on the receiving end of one) will also know.
However, fluid cooling techniques are changing, especially as we encounter server loads that tend to be more energy-intensive, and as a result, run hotter.
What is direct-to-chip (DTC) cooling?
DTC is the most common form of water cooling and has been used by datacenters and gamers for decades. A metal plate sits on top of the processor (CPU or GPU) with a conducting material between the two – usually a Thermal Interface Material (TIM). Liquid flowing through pipes in the plate then moves the heat away from the chip, and a dissipation mechanism disperses it.
This dissipation mechanism can be something as simple as a single fan, although in industrial settings, you tend to see dry coolers equipped with evaporative cooling mechanisms. Dry coolers are units that sit outside the datacenter. They feature radiator-like, finned heat exchangers through which the heated liquid is circulated. Fans on the outside of the units draw cool air in and pass it around the fins, cooling the liquid. This liquid is then fed back into the system and the process is repeated.
In warmer months – or locations – evaporative cooling is used alongside dry coolers. Hot air is drawn through wet pads, evaporating water and cooling the air. This air is then used to cool the liquid from the datacenter through the dry coolers.
These dissipation methods can be used across most types of liquid cooling. DTC is more focused than air cooling, largely because liquid in pipes is easier to direct to specific components than air. With air cooling, although you can direct air currents via fan orientation, DTC allows you to be very precise. It’s also more efficient, thanks to physics, because liquids typically possess higher thermal conductivity than gases.
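To put rough numbers on that physics, here is a hedged back-of-envelope sketch. The property values are approximate textbook figures for room-temperature air and water, chosen for illustration rather than taken from the article:

```python
# Back-of-envelope: why a litre of liquid carries far more heat than a litre of air.
# Property values are approximate textbook figures at roughly room temperature
# (illustrative assumptions, not measurements from the article).

AIR = {"density": 1.2, "specific_heat": 1005}      # kg/m^3, J/(kg*K)
WATER = {"density": 997, "specific_heat": 4186}    # kg/m^3, J/(kg*K)

def heat_absorbed_per_litre(fluid: dict, delta_t: float) -> float:
    """Heat in joules absorbed by one litre of fluid warming by delta_t kelvin: Q = m * c * dT."""
    mass_kg = fluid["density"] * 0.001  # one litre is 0.001 m^3
    return mass_kg * fluid["specific_heat"] * delta_t

# Same 10 K temperature rise for both fluids:
q_air = heat_absorbed_per_litre(AIR, 10)      # ~12 J
q_water = heat_absorbed_per_litre(WATER, 10)  # ~41,700 J

print(f"air:   {q_air:.0f} J per litre")
print(f"water: {q_water:.0f} J per litre")
print(f"ratio: ~{q_water / q_air:.0f}x")  # volume for volume, water carries thousands of times more heat
```

The exact ratio depends on temperatures and flow rates, but the orders of magnitude explain why a modest liquid loop can replace a wall of fans.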
But even with DTC, some air cooling is usually necessary because of the challenges in adapting multiple cold plate designs to accommodate all the IT equipment that generates heat. GPUs and CPUs do generate the majority of the heat in servers, but RAM and hard drives also get hot, so some air cooling is often used. The ratio of air- to water-cooling is usually in the region of 30%/70%.
However, the world of cooling is constantly moving, and there is another form of cooling which can cool all the components at once.
Immersion cooling
In immersion cooling, the entire server is immersed in fluid. There are several benefits to this: all the components can be cooled at once, and higher heat loads can be handled. Furthermore, because the entire server is immersed in liquid, dust cannot enter the system, substantially enhancing product lifespan. However, immersion cooling is a lot more complex than DTC, and maintenance is a more involved process.
There are two forms of immersion cooling – single phase and dual phase. In single phase cooling, the liquid stays liquid throughout the cycle (i.e. it keeps its phase constant). In dual phase cooling, it does not.
Single phase immersion cooling
In single-phase immersion cooling, a cool fluid enters at the base of the immersion unit to cool the server, while the heated fluid leaves at the top – and as with DTC, a dry cooler is used to cool this fluid after it passes through a plate heat exchanger. A separate coolant in a loop is then used to dissipate the heat.
Dual phase immersion cooling
In a dual phase cooling system, servers are immersed in fluid, but the fluid has a low boiling point. When server components heat up, the liquid boils and is directed to a condenser unit, where the gas (vapor) is cooled and re-condensed into a liquid. The fluid can then flow back down into the system to be re-used.
However, not only are the coolants in dual-phase systems generally more expensive, but maintenance is even more difficult, partially because the liquid boils into vapor, which is much harder to manage than the liquid in a single-phase system.
Waste heat
There’s also another significant part of the equation: although we can use dry coolers and evaporative cooling to get rid of waste heat, isn’t there something better we could do with it?
This is a big challenge for datacenters, particularly ones that have been around for a while. A lot of datacenters are on industrial parks, well away from areas where heat can easily be re-used. We’ve all seen the stories about swimming pools being heated using datacenter heat, but this isn’t always practical: water must be physically transported via pipes to other locations, and then heat exchangers used to transfer the heat to the destination water, which results in energy losses, because no energy exchange process is completely efficient.
Additionally, some datacenters face limitations that hinder their ability to reuse waste heat. For instance, in some of our datacenters, the heated water only reaches around 45 degrees Celsius; you can safely put your hand on the ‘hot’ pipe. But this also means the resulting heat is less useful – it would take a considerable amount of time to heat a swimming pool with a 45-degree heat source.
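A quick hypothetical calculation gives a feel for the scale involved. Every input below is an illustrative assumption (pool size, temperature rise, recovered power), not a figure from any real facility:

```python
# Rough estimate of how long low-grade datacenter waste heat takes to warm a pool.
# Every input here is an illustrative assumption, not a figure from any real facility.

POOL_VOLUME_M3 = 400         # a modest 25 m pool
WATER_DENSITY = 997          # kg/m^3
SPECIFIC_HEAT = 4186         # J/(kg*K)
TEMP_RISE_K = 10             # e.g. warming the pool from 18 C to 28 C
RECOVERED_POWER_W = 100_000  # assume 100 kW usefully transferred after exchanger losses

# Energy required: Q = m * c * dT, with m = volume * density.
energy_needed_j = POOL_VOLUME_M3 * WATER_DENSITY * SPECIFIC_HEAT * TEMP_RISE_K
hours = energy_needed_j / RECOVERED_POWER_W / 3600

print(f"energy needed: {energy_needed_j / 1e9:.1f} GJ")
print(f"time at 100 kW: {hours:.0f} hours (~{hours / 24:.1f} days)")
# Note: this ignores heat losses from the pool itself and the fact that the
# transfer rate falls as the pool temperature approaches the 45 C source.
```

Even under these generous assumptions it takes the better part of two days, which is why low-grade waste heat is so much harder to monetize than it first appears.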
However, it’s important to keep pushing forward. In our German datacenters, for example, we don’t use gas boilers to heat our offices; instead, we use the waste heat from the datacenters because it’s nearby. As an industry, we must continue to advance this innovation, and when new datacenter sites are built, organizations should consider heat reuse from the outset.
Over the past two decades, liquid cooling technology has made significant strides and is now capable of managing increasingly high power and heat loads in both personal and industrial settings. Although we are still in the early stages of immersion technology’s evolution, it holds great promise for addressing components that operate at very high temperatures. However, like all technology, it’s not a one-size-fits-all solution, and we’ll almost certainly continue to see a mix of DTC and immersion cooling – not to mention a little air – across datacenter estates for years to come.
We've listed the best bare metal hosting.
For most laptop manufacturers, the height of innovation is rolling out devices with powerful AI processors and a Copilot button, but Lenovo seems to be relishing pushing boundaries and offering users something different, and frankly, I’m here for that.
At MWC 2025, the firm has given us a range of futuristic concepts, including a physical AI personal assistant and multiple screens for its ThinkBook 16p Gen 6, and a frighteningly fragile-looking laptop with a foldable screen. I would buy all of these tomorrow if a) they were available and b) I had the money.
Not all of Lenovo’s ideas can be winners, of course, and the ThinkBook 3D Laptop might be one that doesn’t find its mark, which is a shame because it’s gorgeous. Although I’ve yet to go hands-on with it, I’m fairly certain that its key feature will be jaw-dropping.
Glasses-free
The ThinkBook 3D Laptop concept brings immersive 3D computing to business and creative professionals through a glasses-free hybrid display. Lenovo explains it achieves this through the use of a Directional Backlight 3D solution that allows users to quickly and seamlessly switch between 2D and 3D modes, providing realistic depth and precision for digital modeling, content creation, and virtual collaboration.
Lenovo tells us the 3.2K resolution display (3200×2000, 100% DCI-P3) delivers “stunning clarity and color accuracy,” potentially making it an ideal tool for designers, engineers, and media professionals working on complex visual projects.
The ThinkBook 3D Laptop is far from Lenovo’s first attempt at delivering glasses-free 3D. We reviewed the ThinkVision 27 3D in 2024 and came away seriously impressed. You didn’t need to do anything clever to see objects in the third dimension - it was just a matter of sitting down in front of the monitor at a normal viewing distance and watching as the magic happened.
At the time, we said it was “expensive and niche, but this glasses-free 3D monitor opens up a host of exciting possibilities,” and it seems as if those promises could be fulfilled in the form of this new laptop.
As with the numerous other concepts Lenovo showed off at MWC, there’s no word on pricing or availability, but I’d definitely be interested in seeing how the ThinkBook 3D Laptop performs when it does arrive.
Although Lenovo is unveiling a number of new devices at MWC 2025, that’s not all the tech manufacturer is showcasing.
One of its more unusual offerings is Tiko, which the firm is describing as a "compact AI emotional interaction companion."
Think of it a bit like a physical Microsoft Bob for the 21st century - Tiko is part of Lenovo’s Magic Bay ecosystem of proof-of-concept devices, which the company has developed for professionals using the ThinkBook 16p Gen 6. That laptop, which is currently not available in North America, is built for expandability and modularity, and Lenovo has gone all out with a series of attachable accessories.
Lenovo seems to have a thing for expandable displays at the moment. It unveiled the ThinkBook Plus Gen 6 Rollable at CES, and at MWC, the firm showed off its ThinkBook “codename Flip” laptop, which combines two 13-inch OLED displays into one giant 18.1-inch screen. For the ThinkBook 16p Gen 6, there’s the Magic Bay Dual Display Concept. This is a dual 13.3-inch attachable secondary screen that turns the ThinkBook 16p into a multi-screen workstation.
Lenovo says this will be ideal for “data visualization, content editing, and collaborative projects” and allows users to view multiple applications simultaneously without needing an external monitor.
In addition, there’s the Magic Bay 2nd Display Concept. This is a compact 8-inch screen intended to function as an AI dashboard for on-the-go professionals. It will provide quick access to productivity tools, messaging apps, and AI-generated insights.
Say hello to your little friend
Getting back to the Magic Bay “codename Tiko” concept, Lenovo describes it as a “compact AI emotional interaction companion that displays real-time emoji-style status, provides interactive gesture-based responses, and offers personalized emoji notifications.” It has an expressive AI interface, because of course it does, to help users stay informed and engaged throughout their workday.
If that seems a bit childish for you, there’s the Magic Bay “codename Tiko Pro,” a more serious alternative that offers a real-time widget interface and Lenovo AI Now integration, acting as an always-on assistant to help streamline information.
This year, we’ve had two big releases from Garmin so far: the Garmin Instinct 3 and the Garmin Fenix 8. Both scored very highly in their respective reviews, and both have made their way onto our best Garmin watches list for 2025.
They also share another common trait: both Instinct 2 and Fenix 7 watches previously only came with memory-in-pixel (MIP) screens, a duller display than most watches, and one that conserves energy. When Garmin introduced a version of the Fenix with a vibrant AMOLED screen, which is less power-efficient but brighter, like a proper smartwatch instead of a fitness tool, it called it something else – the Garmin Epix Pro.
As Garmin moves to streamline its watches, it’s gotten rid of the Epix line. Both the Instinct 3 and Fenix 8 arrived with three screen options for the user to pick at the point of purchase: a Fenix or Instinct E, a cheaper watch with a MIP screen that only comes in one size; a Solar option, which uses a low-power MIP screen in conjunction with Garmin’s Power Glass solar technology to extend battery life on long outdoor excursions; and a bright AMOLED screen.
Both watches now have AMOLED options, and looking at the promotional material, Garmin has gone heavy on this as a selling point. In 2023, the Garmin Forerunner watches also moved from MIP screens to AMOLED screens, with the release of the Garmin Forerunner 265 and Garmin Forerunner 965. These did not get MIP solar-powered options: for that, you’d need to go back a generation and get the Garmin Forerunner 955 Solar. Its Venu and Vivoactive watches also bear AMOLED screens with no MIP options.
It’s clear there’s a trend happening, with Garmin slowly shifting its range over to AMOLED screens, possibly in order to compete with other smartwatch manufacturers such as Apple and Samsung – both of which are making rugged outdoor-focused Ultra watches to encroach on Garmin’s turf. The biggest barrier to making Garmin’s entire range AMOLED at the moment seems to be its Power Glass technology, which is only used with MIP screens at present, likely due to their low power output being offset by the solar power technology when used in bright light.
Due to the general shift that Garmin has taken over the last couple of years, I believe that once Garmin’s technology gets to the point where its Power Glass can offset the power consumption of its AMOLED screens, we’ll never see another Garmin watch with an MIP screen again. And that would be a real shame: the low power screen technology once symbolized, to me at least, everything Garmin watches were really about.
The best Apple watches and best Android smartwatches always place health and fitness highly amongst their features, but they’re really extensions of phones: they’re designed to answer messages and take calls on-wrist, load on third-party apps, use maps and so on. I’m not knocking them: they’re incredibly useful, the sort of super-spy gadget I would have wished for growing up in the 90s, which only seemed possible on the wrist of James Bond. Now we’ve all got them. But with all these features, coupled with sleek black-screen looks, comes a short battery life.
Garmin watches are everything proper smartwatches aren’t. They are big, chunky things with raised bezels like G-Shocks. Most of them are covered with buttons, eschewing the slick teardrop look of the Google Pixel Watch, which can’t be used wearing gloves, in favor of rugged utilitarianism. Until recently, they didn’t have touchscreens, and they had dull MIP displays that reminded me of digital watches or old Nintendo Game Boys, two gadgets very close to my heart.
These low-power MIP screens were part of the reason that older Garmin models lasted so long, but as battery technology improves, the MIP screens are being phased out. I get it: it’s easier to see an AMOLED screen in the dark, and people looking for smartwatches are now more likely to spring for a Garmin over an Apple Watch. However, part of the reason I loved utilitarian Garmins is that I have enough bright, flashing screens in my life, and just want something dull and dark and visible in bright sunlight to capture my training metrics.
If an Apple Watch is the Tim Burton Batmobile, a Garmin watch is the Christopher Nolan one: lumpy and military and eminently useful, able to take a few knocks in the line of duty. The MIP screen contributed to the anti-flashiness of it all, and even though I loved a lot of the AMOLED Garmin watches during testing, I hope Garmin doesn’t completely wipe the MIP from, er, memory.
We’ve finally seen our first glimpse of Alexa+, Amazon’s new subscription-based, AI-bolstered voice assistant, and it has me eating my hat.
No, not because of its new, exciting features, nor because of Alexa’s new, more personable nature, but because just a few days ago, I was dunking on the Echo Show 15.
I can separate my personal feelings from my ability to review a piece of tech, which is why the Echo Show 15 scored a respectable four out of five stars in my review. Still, as I noted then, I couldn’t fathom why Amazon decided it needed a refresh when so little had changed from the original Echo Show 15.
Now, I understand entirely, and it’s all to do with Alexa+.
A display-first Alexa
At the demo we attended in New York City this week, pretty much all of the demonstrations for Alexa+ were run on an Echo Show 21, which immediately struck my colleague Jake Krol as an interesting indicator for the future of Echo Show devices. Not a single one of Amazon’s best smart speakers was on display, and we’ve got little to no idea how Alexa+ may interface as a voice-only smart assistant.
However, for me personally, it served as another reminder that sometimes, these big brands have more in store for their devices than we can imagine.
While testing the second-generation Echo Show 15, I was pleased with many of the upgrades but couldn't quite fathom why Amazon felt the need to update a device so minimally. The audio has been improved and the now-13MP camera has a wider field of view, but generally speaking, it's a very iterative update.
For a long time, Echo Show devices have sat slightly on the periphery of Amazon's Echo smart speaker and display lineup, especially the larger 15-inch variant. Following my review process, my overwhelming feeling was that Amazon still didn’t know what to do with the Echo Show 15; it supports the Fire TV interface and now comes with an included Fire TV remote, but its audio chops and display mean the device can’t replace the best small TVs.
Add to that the fact that you can’t swap out the standard Echo Show user interface for the snazzy new smart-home first interface introduced on the Echo Hub, and I felt pretty justified in my criticisms of the fence-sitting feature set – how wrong I was.
A smarter Show future
From what we've seen so far, Alexa+ isn't just an AI-based improvement upon the original smart home voice assistant; it's actually a complete rethink of how we interact with Amazon’s voice assistant.
In addition to a litany of new features and improved smarts, Alexa+ relies heavily on touch-based interactions with the display to respond to Alexa's suggestions and interact with different widgets on the home screen. You can use Alexa+ for improved media searching, pull up important home documentation and feeds from compatible home security devices, and even use Alexa for booking reservations, cabs, and tickets through third-party services. All around, Alexa takes a more agentic role in the home now, which is more easily delivered through a screen than voice alone.
Add to that the fact that Amazon will be rolling out Alexa+ to users who have an Echo Show 8, 10, 15, or 21 in their home (though it will be compatible with a wider range of products), and you begin to build a picture of why Amazon might be prioritizing its smart displays over its smart speakers. That, and the fact that the brand hasn’t quite been able to monetize the fundamental interactions between customers and their smart speakers - reportedly losing to the tune of $25 billion between 2017 and 2021.
I can admit when I’m wrong (but I still think I’m right)
All this is to say that I underestimated the new second-generation Echo Show 15 and the all-new Echo Show 21; with Alexa+, these devices can work well both as media centers and smart displays… if you have Alexa+.
However, a lot of my criticism still stands, and I’m never best pleased by standalone devices where added subscription costs dictate value. Alexa+ costs $19.99 a month, or comes free as part of an Amazon Prime membership - at least for now - and the Echo Show 15 and 21 aren’t cheap devices at $299 / £299 and $399 / £399, respectively.
As standalone devices without an Alexa+ subscription, these bigger Show displays still feel a little out on a limb compared to the well-rounded, smaller smart displays we've seen from Amazon and some of its competition.
That leaves me thinking that, really, the target audience Amazon is trying to carve out for its larger displays is those who are most interested in Alexa+, which is a slightly frustrating predicament when we’ve got little to no insight into, or control over, the long-term pricing strategy. If Amazon rolls out a similar approach to its Ring subscription plan, which has seen several controversial rounds of changes in recent years, Alexa+ enthusiasts who do invest in a larger Echo Show device might find themselves frustrated if they can no longer afford or use Alexa+ and the device isn’t quite as useful as it once was.
However, I’d be quite surprised if we see any major changes to the value proposition of Alexa+ or, indeed, Amazon’s larger Echo Shows for a good few years, so it might pay to be an early adopter.
Time will tell; perhaps when we get our hands on Alexa+, we can just ask it for answers.
As is always the case, Lenovo has been showing off a lot of new products at MWC 2025. In addition to introducing a wealth of concept products - which we love - the firm has also rolled out updates to its existing laptop lineups, including the ThinkPad range.
Probably the most attractive of these is the ThinkPad X13 Gen 6, which has been made even lighter. The previous-generation model, with the 41Wh battery and CFRP (carbon fiber-reinforced plastic) top cover, weighed in at just 1.12 kg (2.47 lbs), but the latest model is even lighter, starting at 0.933 kg (2.05 lbs), making it approximately 0.187 kg (0.42 lbs) lighter than the Gen 5.
Powered by Intel Core Ultra chips with Intel vPro or AMD Ryzen AI PRO processors, the ThinkPad X13 Gen 6 can be configured with up to 64GB LPDDR5x RAM, allowing it to handle demanding AI-driven tasks efficiently.
ThinkPad T14s
Lenovo says the new generation laptop has been optimized for modern hybrid work, aided by the Communication Bar, which features a 5MP + IR camera for improved clarity in virtual meetings.
The ThinkPad X13 Gen 6 supports Wi-Fi 7 and optional 5G connectivity and offers 41Wh or 54.7Wh CRU battery options. The device scores highly on sustainability, with a bio-based carbon fiber chassis, 90% recycled magnesium C cover and 55% recycled aluminum D cover.
It will be available for purchase starting June 2025, with prices from $1,139.00.
Lenovo also unveiled the ThinkPad T14s 2-in-1 at MWC, the first convertible laptop in the ThinkPad T series. Designed like the ThinkPad X13 Gen 6 for hybrid work, its 360° dual-hinge design allows the device to transition between laptop, tent, stand, and tablet modes.
Powered by Intel Core Ultra processors with Intel vPro, the new 2-in-1 sports a 500-nit low-power touch display or a 400-nit WUXGA touch option. It supports Wi-Fi 7 and optional 5G connectivity, while its 58Wh CRU battery promises long-lasting performance with improved repairability. The Lenovo ThinkPad T14s 2-in-1 will be available in June, priced from $1,719.00.
Lenovo is also introducing the ThinkPad T14s Gen 6 laptop, priced from $1,674.00, featuring Intel Core Ultra chips with Intel vPro or AMD Ryzen AI PRO processors.
It’s clear Lenovo loves to "Think Different", just as Apple once did. At CES 2025, it took the wraps off the ThinkBook Plus Gen 6 Rollable, the world’s first laptop with a rollable display, and now at MWC 2025 it’s showing off the ThinkBook “codename Flip” laptop, which combines two 13-inch OLED displays into one giant 18.1-inch screen.
It looks amazing, and I definitely want one, but I’m concerned it might be rather too easy to break in real life.
We actually wrote about the ThinkBook Flip in January 2025, but at that stage, many of the details regarding the laptop had yet to be revealed. Even though Lenovo is describing the device as a key highlight of its MWC 2025 plans, some questions remain, although we do at least have more information on it than before.
A choice of modes (Image credit: Lenovo)
Designed to support AI-powered workflows and adaptable workspaces, the 18.1-inch OLED display folds outward, allowing users to switch between a compact 13-inch laptop form and a vertically expanded workspace for multitasking and collaboration.
Lenovo says the Flip will offer five distinct modes. Clamshell Mode functions as a traditional laptop setup, while Vertical Mode is designed for reviewing documents. Share Mode is for dual-display collaboration, Tablet Mode is for creative tasks, and Read Mode provides a distraction-free reading experience.
Powered by an Intel Core Ultra 7 processor, with 32GB LPDDR5X memory and PCIe SSD storage, the device includes Thunderbolt 4 ports for fast data transfer and a fingerprint reader for business-class security.
Lenovo says the device will offer AI-powered multitasking features such as Workspace Split Screen, which lets users run multiple applications side by side without needing external monitors. The Smart ForcePad introduces a three-layer illuminated dashboard, adding numeric keys and media controls for intuitive operation.
ThinkBook “codename Flip” is being described as a preview of the future of AI-powered hybrid work environments, blending flexible design with AI-enhanced productivity tools. Because it’s still a concept, there’s no word on availability, pricing, or whether screen insurance will be offered as peace of mind for the terminally butter-fingered - hopefully more details will be revealed in the coming months.
You can’t have failed to notice, but AI is everywhere these days. It’s being embedded in hardware, software, and services, making it a key selling point for new devices. If you're looking to buy a high-end business laptop, chances are it will feature the latest Intel or AMD chip with a built-in NPU and likely include a Copilot button for AI-powered assistance as well.
At MWC 2025, Lenovo introduced a number of new laptops, including upgrades to its ThinkPad and ThinkBook lineups, and of course they have all been optimized to handle AI workflows. If you’re in the market for a new laptop and want to use it for AI tasks, you’re spoiled for choice.
But what if you don’t want or need a new laptop, or can’t afford the latest model, but still want to benefit from on-device AI? The answer might be to purchase a new monitor. Yes, that sounds ridiculous, but one of Lenovo’s proof-of-concepts unveiled at MWC 2025 is an AI screen which can transfer its smart powers to a connected laptop or desktop system.
Adding AI smarts to your PC
Called the AI Display, Lenovo’s concept comes with a discrete NPU inside the screen, not that you’d be able to tell by looking at it. Lenovo says this is another demonstration of its commitment to “smarter technology for all.”
The dNPU not only expands the monitor’s capabilities - automatically rotating, elevating, and tilting the screen to give users the best viewing angle based on their seating position - but also adds intelligent functionality to non-AI PCs.
Lenovo says, “With the AI Display with NPU inside, non-AI PCs will be able to use Large Language Models, receiving commands from the user, analyzing and recognizing the intent, and allowing the Assistant to execute the request.”
It’s only a concept at the moment, as is the AI Stick (also unveiled at MWC), which brings artificial intelligence to non-AI PCs without requiring you to buy a new screen.
There’s no hint of when (or if) Lenovo plans to bring the AI Display to market, or how much it might cost. It’s a genius idea, though, and one that could be a welcome game-changer, provided it launches soon enough to capitalize on the AI boom.