Monster Hunter Wilds developer Capcom has now confirmed that the game's next major content patch - Free Title Update 2 - is set to arrive at the end of June.
While no specific release date has been given yet, the official Monster Hunter X / Twitter account made the announcement alongside a teaser image of one of the update's highly anticipated returning monsters: Lagiacrus.
Aside from Lagiacrus - who debuted in Monster Hunter 3 and hasn't been seen since Monster Hunter Generations Ultimate - there are a few things we know are coming in Free Title Update 2 thanks to Capcom's Director's Letter.
Posted to the official Monster Hunter website, the letter (written by game director Yuya Tokuda) confirms the second major update will bring a new high-difficulty Arch-tempered monster. Some weapons are also set to receive improvements, such as the Hammer and Dual Blades.
Several quality-of-life updates are also on the way, including improved navigation in the Grand Hub, "improved Seikret usability", photo mode adjustments, and, perhaps best of all, layered weapons.
That last one, similar to layered armor, will let you cast a different appearance onto your equipped weapons. That's great news for players locked into a particular build who don't like how their weapon looks by default.
Additionally, Capcom has announced a new event quest will be arriving on June 17. Completion of the quest will earn you a Wudwud equipment set for your Palico companion, allowing you to dress them up as one of the adorable Scarlet Forest denizens.
Will your AI confidently deliver the right answers, or stumble through outdated knowledge while your customers grow increasingly frustrated?
Artificial intelligence (AI) may be changing how businesses interact with customers but there's a critical element that often gets overlooked: the knowledge that powers it. The quality of AI responses directly depends on the information it can access – a relationship that becomes increasingly important as more organizations deploy AI for customer service.
AI is really good at accessing unstructured and structured data and collating it into a well-packaged natural language response. Unlike a Google search, which returns multiple results ranked in part by advertising or other sponsorship, AI looks at the body of knowledge that supports the question being asked.
So, when talking about knowledge-driven AI for customer experience, the idea is that AI draws not on the full scope of available information but on a well-structured knowledge base. This means companies must carefully choose what information AI can leverage, especially when dealing with decades' worth of data.
For example, a customer asking how to make a payment might receive outdated instructions about writing a cheque if the knowledge base contains too much legacy content. By providing a well-structured database which is rich enough to give as many answers as possible but also limiting AI to that particular knowledge base, you can really focus on giving AI the right information to deliver the answers you want customers to receive.
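The curation idea above can be sketched in a few lines of code. This is a deliberately minimal illustration, not a description of any real product: the knowledge base, the function name, and the matching logic are all hypothetical, standing in for a full retrieval pipeline.

```python
# Hypothetical sketch: constrain an assistant's answers to a curated
# knowledge base rather than every document the company has ever produced.
# Legacy content (e.g. cheque instructions) simply isn't in the base,
# so it can never surface in a response.

KNOWLEDGE_BASE = {
    "make a payment": "Pay online via the customer portal or the mobile app.",
    "reset password": "Use the 'Forgot password' link on the sign-in page.",
}

def answer_from_kb(question: str) -> str:
    """Return a curated answer, or a safe fallback if nothing matches."""
    q = question.lower()
    for topic, answer in KNOWLEDGE_BASE.items():
        if topic in q:
            return answer
    return "I don't have an approved answer for that; let me connect you to an agent."

print(answer_from_kb("How do I make a payment?"))
```

The point is the boundary, not the matching: a production system would use semantic retrieval rather than substring checks, but the fallback path, escalating to a human instead of guessing, is the part that keeps outdated answers out of customers' hands.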
The specificity advantage

When building AI knowledge bases, starting small and narrow before expanding works better than beginning with everything and trying to narrow down. Companies often make the mistake of giving AI access to their entire information universe.
This approach typically creates more problems than it solves. Contact centers especially struggle with AI accuracy when the knowledge base contains outdated information or when AI draws from too many different sources at once. This limitation becomes obvious when you consider AI-generated images. When AI attempts to create images of people, it often produces noticeable errors – too many fingers, oddly positioned hands, or unnatural facial features. AI conversations follow the same pattern.
They appear fine at first glance, but closer inspection reveals gaps in understanding, inappropriate tone, and mechanical empathy. The information provided might be technically correct but lacks the nuance and specificity that customers need. Just as with images, these conversation models improve over time, but the fundamental challenge remains – AI needs well-structured information to avoid these pitfalls.
Experiential learning over algorithms

Ultimately, AI delivers its most reliable performance when confined to specific knowledge and topics. Unlike human agents, AI performs best when it follows a script. This creates an interesting contrast with what we've learned in the BPO industry. Our experience shows that human agents excel when given freedom to go off-script and apply their natural problem-solving abilities.
The best human interactions happen when agents bring their full selves to the conversation. AI, however, functions more like a trainee who needs clear boundaries. You want to keep AI narrowly focused on approved scripts and content until it develops more sophistication. Human agents can provide answers beyond their formal training.
They navigate complex systems, find creative solutions and interpret customer needs in ways that aren't documented. These skills develop through experience and remain challenging for AI to replicate. Today's AI systems can't navigate through interfaces like humans can. They can't click through multiple screens, follow complex processes or interact with CRM systems the way human agents do. AI only knows what exists in its knowledge base.
This limitation highlights why incorporating the lived experience of human agents into AI knowledge bases delivers such dramatic improvements. AI also differs from humans in its approach to uncertainty. It never lacks confidence, even when wrong. AI will state incorrect information with complete certainty if its algorithms determine that's the optimal response.
Human agents learn differently. When customers express frustration or correct a mistake, human agents experience that uncomfortable "oh my gosh" moment that embeds the learning in their conversational memory. Even with limited information, humans adapt quickly. Most AI systems lack this emotional feedback loop, which raises an important question: how do we configure AI to incorporate negative feedback into its knowledge in a meaningful way?
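One plausible answer to that question, sketched below with entirely illustrative names, scores, and thresholds, is to attach a confidence value to each knowledge-base entry, penalize it on negative feedback, and route low-confidence entries back to humans instead of serving them. This is an assumption about how such a loop could work, not a feature of any specific platform.

```python
# Illustrative feedback loop: each knowledge-base entry carries a
# confidence score that rises on positive feedback and drops on negative
# feedback; entries below a threshold are escalated for human review
# rather than served to customers.

REVIEW_THRESHOLD = 0.5

kb = {"payment": {"answer": "Mail a cheque to head office.", "confidence": 0.6}}

def record_feedback(entry_id: str, positive: bool) -> None:
    """Adjust an entry's confidence; penalize misses harder than hits reward."""
    entry = kb[entry_id]
    entry["confidence"] += 0.1 if positive else -0.2
    entry["confidence"] = max(0.0, min(1.0, entry["confidence"]))

def serve(entry_id: str) -> str:
    """Serve the answer only while confidence stays above the threshold."""
    entry = kb[entry_id]
    if entry["confidence"] < REVIEW_THRESHOLD:
        return "ESCALATE_TO_HUMAN"
    return entry["answer"]

record_feedback("payment", positive=False)  # a customer flags the cheque answer
print(serve("payment"))
```

The asymmetric penalty is the crude stand-in for the human agent's "oh my gosh" moment: one bad interaction should outweigh several routine ones, so stale answers get pulled quickly rather than eroding trust one customer at a time.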
Information architecture is an investment

Creating effective AI knowledge bases requires ongoing attention across several dimensions. The foundation must be structured, current content that accurately reflects your products and services. This isn't a one-time effort but a continuous commitment to maintenance and accuracy. Equally important is establishing appropriate boundaries – giving AI enough knowledge to be helpful while limiting its ability to access irrelevant or outdated information. Improvement must be continuous rather than occasional.
By monitoring where AI struggles and systematically addressing those gaps, organizations keep their systems relevant and effective. Integrating successful human agent interactions represents another critical factor. When you capture what works in human conversations and incorporate those patterns into your AI knowledge base, performance improves significantly. Finally, robust feedback mechanisms allow AI to learn from customer responses without being susceptible to manipulation, creating a system that improves over time.
AI technology will continue evolving, but its effectiveness will always depend on the quality of its knowledge foundation. Organizations that invest in properly structured, well-maintained knowledge systems will see better results from their AI implementations. The future isn't just about deploying more sophisticated AI technologies but building better knowledge ecosystems these technologies can leverage. Your AI is only as good as the knowledge base it's built upon, and getting that foundation right is essential for delivering the customer experience you actually want.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
The Browser Company has a new way to travel the web using AI. Best known for its Arc browser, the company has introduced a new browser called Dia, which was first teased at the end of last year. This release follows an announcement last month that active development on Arc was winding down and the company would place its full weight behind Dia.
Unlike traditional browsers that send users searching across tabs or toggling between tools to get things done, Dia places an AI assistant directly into the browser’s address bar.
The idea is that instead of opening ChatGPT in another tab or copying content into a separate tool to summarize or rewrite, you just type your question where you’d usually enter a URL. From there, the assistant can search the web, answer questions about the page you’re on, compare tabs, or even draft content in the tone of a specific site.
Dia is built on Chromium and resembles a standard browser at first glance, but the key differences are found in the way AI suffuses its structure. The AI is omnipresent and customizable, plus there is no need to log in to a separate service. You stay on the page, talk to the browser, and it responds.
In many ways, Dia's AI behaves similarly to most other AI chatbots. You can ask it to summarize an article you're reading, help write an email based on your calendar and browser activity, or generate code in your preferred programming language. You can also personalize how the assistant writes for you in terms of style.
One of the more distinctive features is the browser’s ability to take on the “voice” of a given webpage. If you’re reading a corporate blog or product page and want to generate a document in a similar tone, Dia can adapt its output to match the site’s style.
Dia AI

The features are designed to blend seamlessly with the browser and your other online activities. The AI not only sees your current tabs but also remembers previous interactions, allowing it to use context in its responses. The more you interact with it, the more personalized the AI is supposed to become.
Eventually, it will remember your writing preferences and know which tasks you ask for often and surface those options. Dia is currently in an invite-only beta for Mac, though you can sign up for a waiting list to gain access.
Dia is arriving as browsers race to incorporate AI, and many AI developers are working on browsers. Google Chrome is testing Gemini-powered overlays and sidebars, Opera has its Neon browser promising a full AI agent experience, and Perplexity has its new Comet browser with AI features.
For the many people understandably concerned about privacy when the AI is this clever, The Browser Company claims that Dia handles user context locally where possible and does not send browsing data to third-party providers unless required by the task.
Notably, Dia is centering AI as the main way to engage with the browser. The experience is meant to be rooted in user prompts and direct interaction, not automation. It's also worth noting that Dia means The Browser Company no longer sees Arc as worth spending resources on, despite praise for its design and rethinking of tab management. Dia is less about reinventing browser layouts and more about making AI a core function.
With AI rapidly becoming embedded in everything you touch online, Dia represents a very direct approach to making generative AI central to going online rather than treating AI as a bolt-on feature. The Browser Company is betting that it can be the primary interface for how users browse the web.
A new computing system modeled after the architecture of the human brain has been activated at Sandia National Laboratories in the US state of New Mexico.
Developed by Germany-based SpiNNcloud, the SpiNNaker 2 stands out not only for its neuromorphic design, but also for its radical absence of an operating system or internal storage.
Backed by the National Nuclear Security Administration’s Advanced Simulation and Computing program, the system marks a noteworthy development in the effort to use brain-inspired machines for national security applications.
SpiNNaker 2 differs from conventional supercomputers

Unlike conventional supercomputers that rely on GPUs and centralized disk storage, the SpiNNaker 2 architecture is designed to function more like the human brain, using event-driven computation and parallel processing.
Each SpiNNaker 2 chip carries 152 cores and specialized accelerators, with 48 chips per server board. One fully configured system contains up to 1,440 boards, 69,120 chips, and 138,240 GB of DRAM.
These figures point to a system that is not just large but built for a very different kind of performance, one that hinges on speed in DRAM rather than traditional disk-based I/O.
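Those headline figures can be sanity-checked with simple arithmetic. The board, chip, and core counts below come from the article itself; the per-chip DRAM figure (2 GB) is an assumption based on commonly cited SpiNNaker2 chip specifications, not a number stated here.

```python
# Reproduce the published SpiNNaker 2 totals from the per-unit figures.
BOARDS = 1_440            # boards in a fully configured system (from article)
CHIPS_PER_BOARD = 48      # from article
CORES_PER_CHIP = 152      # from article
DRAM_GB_PER_CHIP = 2      # assumed per-chip LPDDR4 capacity (not from article)

chips = BOARDS * CHIPS_PER_BOARD      # 1,440 * 48
cores = chips * CORES_PER_CHIP        # total processor cores
dram_gb = chips * DRAM_GB_PER_CHIP    # total DRAM in GB under the assumption

print(chips)    # 69120 -- matches the article's chip count
print(cores)    # 10506240 -- over 10.5 million cores
print(dram_gb)  # 138240 -- consistent with the 138,240 total being gigabytes
```

Under that assumed per-chip figure, the 138,240 total works out to gigabytes (roughly 138 TB system-wide), which fits the design's emphasis on distributed on-package memory over centralized disk storage.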
In this design, the system’s speed is attributed to data being retained entirely in SRAM and DRAM, a feature SpiNNcloud insists is crucial, stating, “the supercomputer is hooked into existing HPC systems and does not contain any OS or disks. The speed is generated by keeping data in the SRAM and DRAM.”
SpiNNcloud further claims that standard parallel Ethernet ports are “sufficient for loading/saving the data,” suggesting minimal need for the elaborate storage frameworks typically found in high-performance computing.
Still, the real implications remain speculative. The SpiNNaker 2 system simulates between 150 and 180 million neurons: impressive, yet modest compared to the human brain's estimated 100 billion neurons.
The original SpiNNaker concept was developed by Steve Furber, a key figure in Arm’s history, and this latest iteration appears to be a commercial culmination of that idea.
Yet, the true performance and utility of the system in real-world, high-stakes applications remain to be demonstrated.
“The SpiNNaker 2’s efficiency gains make it particularly well-suited for the demanding computational needs of national security applications,” said Hector A. Gonzalez, co-founder and CEO of SpiNNcloud, emphasizing its potential use in “next-generation defense and beyond.”
Despite such statements, whether neuromorphic systems like SpiNNaker 2 can deliver on their promises outside specialized contexts remains an open question.
For now, Sandia’s activation of the system marks a quiet but potentially important step in the evolving intersection of neuroscience and supercomputing.
Via Blocks & Files
Diamonds have emerged as a critical material in the development of quantum technologies due to their unique atomic properties, and Quantum Brilliance, a company based in Germany and Australia, has outlined an ambitious plan to develop portable quantum computers using diamond-based quantum processing units (QPUs).
These devices are being designed to operate at room temperature and may eventually be integrated alongside GPUs and high-end CPUs in servers or vehicles.
But while the company’s vision promises a future where quantum computing is as seamless as plugging in a GPU for AI inference, several technical and commercial hurdles remain.
Rethinking quantum computing with diamonds

Over the past decade, researchers have increasingly focused on engineering high-purity synthetic diamonds to minimize interference from impurities.
Notably, a 2022 collaboration between a Japanese jewelry firm and academic researchers led to a new method for producing ultra-pure 2-inch diamond wafers.
In 2023, Amazon joined the effort through its Center for Quantum Networking, partnering with De Beers’ Element Six to grow lab-made diamonds for use in quantum communication systems.
Now, Quantum Brilliance aims to utilize nitrogen vacancies in diamond to create qubits, offering a more compact and power-efficient alternative to cryogenic quantum systems.
“We do have a roadmap to fault tolerance, but we are not worrying about that at the moment,” said Andrew Dunn, COO of Quantum Brilliance.
“People think of millions of qubits, but that will be very expensive and power hungry. I think getting an understanding of having 100 qubits in a car cheaply and simply - the use cases are very different."
This signals a departure from the prevailing trend in quantum computing, which focuses on building systems with millions of qubits.
The company is instead targeting inexpensive and practical use cases, particularly in applications such as AI inference and sparse data processing.
Quantum Brilliance is already collaborating with research institutions like the Fraunhofer Institute for Applied Solid State Physics (IAF).
IAF is currently evaluating the company’s second-generation Quantum Development Kit, QB-QDK2.0, which integrates classical processors like Nvidia GPUs and CPUs with the QPU in a single box.
In parallel, Oak Ridge National Laboratory in the US has acquired three systems to study scalability and parallel processing for applications like molecular modeling.
“The reason they are buying three systems is that they want to investigate parallelisation of systems,” Dunn added.
Quantum Brilliance is also working closely with imec to integrate diamond processes into standard chip manufacturing.
Beyond computation, the company sees potential in quantum sensing, and the technology may also be repurposed for defense and industrial sensors.
Ultimately, the company wants quantum computing to become as ordinary as any other chip in a server.
“Personally, I want to make quantum really boring and invisible, just another chip doing its job,” said Dunn.
Via eeNewsEurope
Google’s Find Hub – previously Find My Device – has been a fairly proficient Android alternative to the always useful Apple Find My service, with both the Android and iOS options helping you locate your missing tech. But until now, Google’s service has lacked a key feature: ultra-wideband finding.
Find Hub can help you locate your phone, headphones, compatible Bluetooth trackers, and even close friends and family, all from one app. If you’ve not used the service (admittedly, it can feel a little hidden behind Google’s better known Android apps) it’s a useful one-stop finding shop that you’ll want to add to your home screen.
However, it has lacked one of the core benefits of Apple's Find My service: ultra-wideband tracking.
This upgraded form of tracking allows your phone to pinpoint the location of a tag far more accurately. Rather than simply telling you whether you're getting closer to or further from the missing tag, the app can give you much more precise directions and distances thanks to UWB. But until now, no Find Hub devices offered UWB as an option.
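The precision gap comes down to how distance is measured: Bluetooth trackers infer range from signal strength, which varies with obstacles and interference, while UWB measures radio time of flight directly. A minimal sketch of the underlying conversion, with purely illustrative numbers:

```python
# Why UWB ranging is precise: distance is derived from radio time of
# flight, which UWB's very short pulses can resolve at roughly the
# nanosecond scale. Figures here are illustrative, not from any tracker.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_m(time_of_flight_ns: float) -> float:
    """Convert a one-way time of flight in nanoseconds to metres."""
    return SPEED_OF_LIGHT_M_PER_S * time_of_flight_ns * 1e-9

# A 10 ns flight time corresponds to roughly 3 metres; since light covers
# about 30 cm per nanosecond, nanosecond-scale timing resolution keeps
# range error in the tens of centimetres.
print(round(distance_m(10), 2))
```

That centimetre-scale error budget is what lets a UWB-equipped phone show an arrow and a distance rather than a vague "warmer/colder" signal-strength estimate.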
Now, finally, the Moto Tag does, thanks to a firmware update spotted by Android Police. Once installed via the Moto Tag app (currently rolling out through the Play Store), you can launch the Find Hub app, and the updated tracker will be discoverable via UWB.
You’ll also need a high-end smartphone. While some devices from a few years back support UWB, the feature is exclusive to premium models like the Google Pixel 6 Pro and the Samsung Galaxy S21 Plus and Ultra. The standard flagships, unfortunately, lack the feature for now.
Hopefully, as other UWB trackers arrive for Android, there will be more reason for budget-friendly devices to support it. For now, Moto’s Tag appears to be the only UWB device supported by Find Hub.
Beyond UWB, Google’s Find Hub is also set to gain support for tracking some devices using satellites “later this year” (via Google’s blog), making the service even more useful than it currently is. That would let the service not just catch up to Apple, but effectively take the lead.