As AI tools become increasingly embedded in our everyday work, new research suggests that the challenge of getting the best out of them may not lie solely with the technology.
A report from Multiverse has identified thirteen core human skillsets which could determine whether companies fully realize AI’s potential.
The study warns that, without deliberate attention to these capabilities, investment in AI writer systems, LLM applications, and other AI tools could fall short of expectations.
Critical thinking under pressure

The Multiverse study draws from observation of AI users at varying experience levels, from beginners to experts, employing methods such as Think Aloud Protocol Analysis.
Participants verbalised their thought processes while using AI to complete real-world tasks.
From this, researchers built a framework grouping the identified skills into four categories: cognitive skills, responsible AI skills, self-management, and communication skills.
Among the cognitive abilities, analytical reasoning, creativity, and systems thinking were found to be essential for evaluating AI outputs, pushing innovation, and predicting AI responses.
Responsible AI skills included ethics, such as spotting bias in outputs, and cultural sensitivity to address geographic or social context gaps.
Self-management covered adaptability, curiosity, detail orientation, and determination, traits that influence how people refine their AI interactions.
Communication skills included tailoring AI-generated outputs for audience expectations, engaging empathetically with AI as a thought partner, and exchanging feedback to improve performance.
Reports from academic institutions, including MIT, have raised concerns that reliance on generative AI can reduce critical thinking, a phenomenon linked to “cognitive offloading.”
This is the process where people delegate mental effort to machines, risking erosion of analytical habits.
While AI tools can process vast amounts of information at speed, the research suggests they cannot replace the nuanced reasoning and ethical judgement that humans contribute.
The Multiverse researchers note that companies focusing solely on technical training may overlook the “soft skills” required for effective collaboration with AI.
Leaders may assume their AI tool investments address a technology gap when in reality, they face a combined human-technology challenge.
The study refrains from claiming AI inevitably weakens human cognition; instead, it argues that the nature of cognitive work is shifting, with less emphasis on memorising facts and more on knowing how to access, interpret, and verify information.
While the new ‘Liquid Glass’ look and a way more powerful Spotlight might be the leading features of macOS Tahoe 26, I’ve found that bringing over a much-loved iPhone feature has proven to be the highlight after weeks of testing.
Live Activities steal the show on the iPhone, thanks to their glanceability and effortless way of highlighting key info, whether it’s from a first- or third-party app. Some of my favorites include tracking a flight and following a live score.
Now, all of this is arriving on the Mac – right in the menu bar, near the right-hand side. Live Activities appear there when your iPhone is nearby and signed into the same Apple Account, mirroring the same ones you’d see on your phone. It’s a simple but powerful addition.
Considering Apple brought iPhone Mirroring to the Mac in 2024, this 2025 follow-up isn’t surprising. But it’s exactly the kind of small feature that makes a big difference. I’ve loved being able to check a score, track a flight, or see my live position on a plane – without fishing for my phone.
(Image credit: Future/Jacob Krol)

I’ve used it plenty at my desk, but to me, it truly shines in Economy class. If you’ve ever tried balancing an iPhone and a MacBook Pro – or even a MacBook Air – on a tray table, you know the awkward overlap. I usually end up propping the iPhone against my screen, hanging it off the palm rest, or just tossing it in my lap. With Live Activities on the Mac, I can stick to one device and keep the tray table clutter-free.
Considering notifications already sync and iPhone Mirroring arrived last year, Live Activities were ultimately the missing piece. On macOS Tahoe, they sit neatly collapsed in the menu bar, just like the Dynamic Island on iPhone, and you can click on one to expand it and see the full Live Activity. Another click quickly opens the corresponding app on your iPhone via the Mirroring app – it all works together pretty seamlessly.
(Image credit: Future/Jacob Krol)

You can also easily dismiss them; since they automatically expand for major updates, collapsing them saves screen real estate on your Mac. If you already have a Live Activity you enjoy on your iPhone, there’s no extra work needed from the developer – it will automatically carry over to the Mac.
All in all, it’s a small but super helpful tool that really excels in cramped spaces. So, if you’ve ever struggled with the same balancing act I have – a tray table, an iPhone, and a MacBook – know that relief is on the way.
It's arriving in the fall (September or October) with the release of macOS Tahoe 26. If you want it sooner, the public beta of macOS Tahoe 26 is out now, but you'll need to be okay with some bugs and slowdowns.
Huawei has announced plans to make its CANN software toolkit for Ascend AI GPUs open source, a move aimed squarely at challenging Nvidia’s long-standing CUDA dominance.
CUDA, often described as a closed-off “moat” or even a “swamp,” has for years been viewed by some as a barrier for developers seeking cross-platform compatibility.
Its tight integration with Nvidia hardware has locked developers into a single vendor ecosystem for nearly two decades, with all efforts to bring CUDA functionality to other GPU architectures through translation layers blocked by the company.
Opening up CANN to developers

CANN, short for Compute Architecture for Neural Networks, is Huawei’s heterogeneous computing framework designed to help developers create AI applications for its Ascend AI GPUs.
The architecture offers multiple programming layers, giving developers options for building both high-level and performance-intensive applications.
In many ways, it is Huawei’s equivalent to CUDA, but the decision to open its source code signals an intent to grow an alternative ecosystem without the restrictions of a proprietary model.
Huawei has reportedly already begun discussions with major Chinese AI players, universities, research institutions, and business partners about contributing to an open-sourced Ascend development community.
This outreach could help accelerate the creation of optimized tools, libraries, and AI frameworks for Huawei’s GPUs, potentially making them more attractive to developers who currently rely on Nvidia hardware.
Huawei’s AI hardware performance has been improving steadily, with claims that certain Ascend chips can outperform Nvidia processors under specific conditions.
Benchmarks such as CloudMatrix 384’s results against Nvidia hardware running DeepSeek R1 suggest that Huawei is closing the performance gap.
However, raw performance alone will not guarantee developer migration without equivalent software stability and support.
While open-sourcing CANN could be exciting for developers, its ecosystem is still in its early stages and may not come close to matching CUDA’s, which has been refined for nearly 20 years.
Even with open-source status, adoption may depend on how well CANN supports existing AI frameworks, particularly for emerging workloads such as large language models (LLMs) and AI writer tools.
Huawei’s decision could have broader implications beyond developer convenience, as open-sourcing CANN aligns with China’s broader push for technological self-sufficiency in AI computing, reducing dependence on Western chipmakers.
In the current environment, where U.S. restrictions target Huawei’s hardware exports, building a robust domestic software stack for AI tools becomes as critical as improving chip performance.
If Huawei can successfully foster a vibrant open-source community around CANN, it could present the first serious alternative to CUDA in years.
Still, the challenge lies not just in code availability, but in building trust, documentation, and compatibility at the scale Nvidia has achieved.
Via Tom's Hardware