At February 2025's International Solid-State Circuits Conference (ISSCC), researchers unveiled the MEGA.mini architecture.
Inspired by Arm's celebrated "big.LITTLE" paradigm, this universal generative AI processor, presented in the conference paper 'MEGA.mini: A Universal Generative AI Processor with a New Big/Little Core Architecture for NPU', promises a new approach to neural processing unit (NPU) design.
Arm's big.LITTLE architecture has long been a staple of efficient mobile and embedded systems, balancing high-performance cores with energy-efficient ones to optimize power usage. The MEGA.mini project seeks to bring a similar dual-core philosophy to NPUs, which are essential for running AI models efficiently.
MEGA.mini: A game-changing NPU design
This approach will likely involve pairing high-capacity "Mega" cores for demanding tasks with lightweight "Mini" cores for routine processing. The primary goal of this design is to optimize power consumption while maximizing processing capabilities for various generative artificial intelligence (AI) tasks, ranging from natural language generation to complex reasoning.
Generative AI workloads, like those powering large language models or image synthesis systems, are notoriously resource-intensive. MEGA.mini's architecture aims to delegate complex tasks to Mega cores while offloading simpler operations to Mini cores, balancing speed and power efficiency.
MEGA.mini also functions as a universal processor for generative AI. Unlike traditional processors that require customization for specific AI tasks, MEGA.mini is being designed so that developers can apply the architecture to different use cases, including natural language processing (NLP) and multimodal AI systems that integrate text, image, and audio processing.
It also optimizes workloads, whether running massive cloud-based AI models or compact edge AI applications, aided by its support for multiple data types and formats, from traditional floating-point operations to emerging sparsity-aware computations.
This universal approach could simplify AI development pipelines and improve deployment efficiency across platforms, from mobile devices to high-performance data centers.
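The ISSCC paper doesn't ship reference code, so the short Python sketch below is only an illustration of what "sparsity-aware computation" refers to: skipping the multiplications whose weights are zero instead of doing the full dense math. The matrix size, sparsity level, and function names are assumptions made for the example, not MEGA.mini specifics.

```python
import numpy as np

# Illustrative sketch only (not from the MEGA.mini paper): a
# sparsity-aware matrix-vector product that accumulates results
# solely for the non-zero weights, the wasted work a
# sparsity-aware NPU datapath is designed to skip in hardware.

def dense_matvec(W, x):
    """Baseline: multiplies every weight, zeros included."""
    return W @ x

def sparse_matvec(W, x):
    """Only accumulates products for the non-zero weights."""
    out = np.zeros(W.shape[0])
    rows, cols = np.nonzero(W)  # indices where real work exists
    for r, c in zip(rows, cols):
        out[r] += W[r, c] * x[c]
    return out

# Example: a roughly 70%-sparse weight matrix, as pruning an LLM layer might produce.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * (rng.random((8, 8)) > 0.7)
x = rng.standard_normal(8)
assert np.allclose(dense_matvec(W, x), sparse_matvec(W, x))
```

On a CPU the Python loop is of course slower than the dense product; the point is the arithmetic it avoids, which is where dedicated sparse hardware saves power.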
The introduction of a dual-core architecture to NPUs is a significant departure from conventional designs — traditional NPUs often rely on a monolithic structure, which can lead to inefficiencies when processing varied AI tasks.
MEGA.mini's design addresses this limitation by creating cores specialized for specific types of operations. Mega cores are engineered for high-performance tasks like matrix multiplications and large-scale computations, essential for training and running sophisticated large language models (LLMs), while Mini cores are optimized for low-power operations such as data pre-processing and inference tasks.
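As a hedged illustration of that division of labor (the paper's actual scheduler isn't public), the routing decision can be reduced to a tiny dispatcher that sends heavy tensor work to Mega cores and lightweight pre- and post-processing to Mini cores. The task names, FLOP threshold, and core labels below are invented for the example.

```python
from dataclasses import dataclass

# Illustrative sketch only: models the Mega/Mini split as a dispatcher
# that estimates each task's cost and routes it to a core pool.

@dataclass
class Task:
    name: str
    flops: float  # rough estimate of floating-point operations
    kind: str     # e.g. "matmul", "attention", "preprocess", "decode"

# Hypothetical policy: heavy kernels, or anything above ~1 GFLOP, go to Mega cores.
MEGA_THRESHOLD_FLOPS = 1e9
HEAVY_KINDS = {"matmul", "attention"}

def dispatch(task: Task) -> str:
    """Route heavy tensor math to Mega cores and light work to Mini cores."""
    if task.kind in HEAVY_KINDS or task.flops >= MEGA_THRESHOLD_FLOPS:
        return "mega"
    return "mini"

workload = [
    Task("tokenize prompt", flops=2e6, kind="preprocess"),
    Task("attention block", flops=8e9, kind="attention"),
    Task("logit sampling", flops=5e5, kind="decode"),
]

for t in workload:
    print(f"{t.name:>16} -> {dispatch(t)} core")
```

A production NPU scheduler would also weigh latency targets, memory traffic, and core occupancy, but the routing principle, big cores for the expensive kernels and little cores for everything else, is the same one big.LITTLE applies to CPUs.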
Registering a domain can be done with just a few simple clicks in 2025, but it hasn't always been that way. Rewind to March 1985 and the internet's first .com domain name was registered: Symbolics.com.
What makes this date particularly noteworthy is the fact that the World Wide Web didn’t even exist at that point, and it would be six years until the web came onto the scene and changed our world forever.
Put simply, the creation of Symbolics.com marked what many would regard as the beginning of the dot-com era; the embryonic phase of a tectonic shift in global business, commerce, and society in general.
Symbolics.com
So who was behind the registration? As the name suggests, it was created by an organization called Symbolics Computer Corporation. Based in Cambridge, Massachusetts, the company specialized in the development of Lisp machines: early general-purpose computers built to run the list processing (Lisp) programming language.
Registering a domain was no easy task during this period. The Domain Name System (DNS) was very much still in its infancy, and registrations were processed manually by the Stanford Research Institute (SRI).
To secure the domain, Symbolics was required to submit a paper request via fax machine or mail to the institute. Thereafter, it was a waiting game until it was processed and approved.
A far cry from the simple click-and-go experience of your modern web user.
A long road since the first domain name registration
It would be an understatement to say the web has come a long way in the 40 years since the Symbolics registration. It's now an ever-present aspect of our daily lives, defining how we access information, shop, communicate with friends and family, and crucially, how we work.
We’ve seen the impact of IT downtime in the last few years, and it’s safe to say that moving back to pen and paper and fax machines on a full-time basis simply isn’t an option.
From that first registration, the number of domains globally has grown steadily over the years. As of the end of 2024, the number of domains registered globally stood at 364.3 million, according to figures from DNIB.
For context, in 2014 that figure stood at around 250 million. This continued growth over the last decade hasn't been restricted to business-related domains, either. Anyone can register one easily and at a fairly reasonable price.
From microbusinesses and blogs to professional portfolios and artistic showcase sites, millions of people globally have some form of website and associated domain.
Where is Symbolics now?
Today, a quick visit to Symbolics.com will take you to what is essentially a web-based museum. In 2009, the domain was acquired by Aron Meystedt, a startup investor and founder of Napkin.com.
Meystedt has maintained the site since then, offering users a glimpse into historic events and milestones over the course of the web’s development. It still attracts tens of thousands of curious visitors each year, including myself while researching this article.
Aside from the interesting facts and tidbits available on the site, there is another interesting feature: an AI-powered domain name quality scoring tool.
It's somewhat ironic that, 40 years on from its creation, the world's first domain has jumped on the AI bandwagon. Nonetheless, it is a handy tool and has been used by thousands of people to evaluate domain name strength and quality.
In my years as a gamer – stretching back to childhood afternoons blowing desperately into dusty Nintendo cartridges – I've watched countless tech innovations appear, dazzle briefly, and fade quietly away. Remember motion controls? Great for Wii bowling, less great for just about everything else. VR keeps promising to be revolutionary but always feels like it's still waiting in the wings. But artificial intelligence may have something that lasts longer.
That's probably why Microsoft is working on an AI "Copilot for Gaming" that will aid future Xbox players. But there's no need to wait if you want to bring the power of AI to your next gaming session. ChatGPT can be a surprisingly pleasant companion on your gaming journey; here are a few ways I've deployed it to make playing video games more fun.
Game guide AI
(Image credit: Insomniac Games / Sony)
There was a time when official game guides were thick, glossy artifacts full of art and obscure easter eggs, or enormous and often funny digital books written by paid games journalists. But these days, official guides are rare, and anyone looking for help online mostly has to rely on messy wikis. The information you want is probably out there, but rarely in a cohesive and useful format.
Recently, while swinging through the streets of New York in Spider-Man 2 on my PS5, I decided to see if ChatGPT could recreate something resembling those classic guides, one that could chart every side mission, collectible, and hidden secret. I used the Deep Research tool to really delve into the internet and come back with more than just a single mission tipsheet. I asked the AI to "Create a personalized, comprehensive guide for completing Spider-Man 2 on PS5. Make sure it includes information on collecting suits, getting upgrades, and finding side-missions, so I don't miss anything."
Five minutes later I had a thorough game plan, covering not just everything I should look for, but also what to prioritize at different stages of the story. I got optimal routes for collectibles, tips for efficiently earning upgrade tokens, and suggested combat approaches tailored to different types of encounters. I didn't need or want to follow its suggestions slavishly, but if I ever got confused or realized there was a gap in my costume collection, it was right there to steer me to the solution.
Real-time guidance
(Image credit: Nintendo)
Open-world games offer freedom to do anything, but a sprawling fantasy RPG can almost be annoying if you love it but also get impatient with it. While wandering the vibrant landscapes of Nintendo's The Legend of Zelda: Tears of the Kingdom, I stood atop a floating island and felt stuck despite the options for shrines, side quests, and mysteries scattered across the horizon.
Just to break the paralysis, I asked ChatGPT, "I'm feeling unsure what to do in Tears of the Kingdom. Should I prioritize shrines, exploration, or push forward with the main story?" I soon had a nice breakdown of the possible rewards, pacing, and general vibe of each option, and my next four hours were immediately filled with fun rather than dithering on my island.
Game mechanics education
(Image credit: Warner Bros.)
Complex game mechanics are part of the fun of a video game, but sometimes they're a little too difficult to work out from a game's own description. I don't like switching to easy mode, even in a game with that option, but that can leave me stuck failing over and over. Hogwarts: Legacy is no one's idea of an ultra-hard, pro-gamer-only game, but that doesn't mean mastering spell combos, potions, and talent trees doesn't take at least some practice.
Frustrated after a particularly disastrous duel, I turned to ChatGPT for help. My prompt was: "I'm struggling with the spell system and combat combos in Hogwarts: Legacy. Can you clearly explain how I can combine different spells effectively without making it feel like studying for finals?"
Happily, ChatGPT didn't mock me; it explained some intricate mechanics in simpler terms and suggested strategies to improve. The advice covered the synergy between certain spells, combos to try against different enemies, and other variations to experiment with. Suddenly I was a capable wizard, rather than another confused Muggle.
The next game
(Image credit: Sony Santa Monica)
Ever stared at your gaming library or scrolled endlessly through digital stores, unable to choose a new game? ChatGPT can be your guide on that too. After completing God of War: Ragnarok, I was keen to find something similarly exciting. Online guides helped, but I didn't want to waste time or money, so I asked ChatGPT for help with the prompt, "I loved God of War: Ragnarok, particularly the story, visceral combat, and mythology connection. Can you recommend something similar for my next game?"
ChatGPT recommended options like Horizon: Forbidden West and Ghost of Tsushima, explaining how each matched my preferences. It didn't just throw random ideas my way; it had an explanation for each. Ghost of Tsushima turned out to suit my tastes perfectly.
Nokia has revealed that its 5G 360 Camera, together with the company's proprietary Real-time eXtended Reality Multimedia (RXRM) software powering it, has won three iF Design Awards.
The "world-first" 8K 5G-enabled 360-degree camera combines high-resolution, low-latency 360° video streaming with 3D OZO spatial audio.
Tougher than ever
Nokia originally touted the Extreme Temperature variant of its 5G 360 Camera as being engineered for harsh environments (with an IP67 waterproof rating), and coming with robust data privacy features that make it ideal for critical industrial use.
Key to that has been the RXRM software, which aids real-time remote operations by enabling the remote monitoring, inspection, and operation of industrial equipment. Its APIs allow customers to integrate 360° video and 3D OZO Audio into AI platforms, supporting analytics, overlays, and extended reality applications.
At the time, Sami Ranta, General Manager of RXRM at Nokia, said “Nokia RXRM allows industrial customers to enhance their business processes, saving costs from product support to field operations. Adding a 5G-enabled industrial camera product to RXRM now offers a complete solution for real-time remote use cases such as situational awareness, remote monitoring, teleoperation and stadium scale sports and entertainment events.”
RXRM has demonstrably enabled safer and more efficient industrial processes by delivering real-time, actionable insights.
Finnish company Callio Pyhäjärvi was an early adopter of RXRM technology at Europe's deepest mine, the Pyhäsalmi Mine, which has since been transformed into a multidisciplinary environment for research, training, and remote operations. The 5G 360 Camera's ability to transmit video and audio over private and public wireless networks has proved pivotal for high-risk industries seeking to enhance operational efficiency, reduce risk, and enable remote control.
"Previously, existing cameras have been unable to meet the challenges posed by the harsh conditions of mining operations in Callio Business Park," noted Sakari Nokela, Callio Pyhäjärvi's Chief Development Officer. "With the trusted Nokia product reliability and security, this camera effectively addresses a critical gap in the market.”
In case you missed it, Nokia's 5G 360 Camera goes well beyond even the best business webcams available, streaming ultra-high-definition 8K video with near-zero latency, coupled with spatial audio, over 5G, Wi-Fi, and Ethernet.
Most quotes about disappointment focus on the bright side: "Disappointment is a detour on the road to success," said Zig Ziglar. Maybe he's right, but when the disappointment leads to an immutable fact or harsh realization, there may be no coming back from it. The Apple Intelligence Siri delay and its subsequent fallout is that kind of disappointment, and it has become a wake-up call of sorts as everyone reassesses their point of view on Apple.
I'm sure by now many of you have read the various analyses and excoriations of Apple's failure to deliver on its Apple Intelligence and Siri promises. My favorite, by far, is Daring Fireball's epic "Something Is Rotten in the State of Cupertino" exploration of what went wrong and how "Apple pitched a story that wasn't true."
Our own John-Anthony Disotto calls Apple Intelligence "a fever dream" that perhaps Apple might like to forget. Fast Company's Harry McCracken is a bit more measured, and while he thinks Apple might've failed to "emotionally bond with Siri," he writes that he'd rather see a "great" Siri than one that arrives "on time."
In some ways, they're all right. Apple is the most credible tech company on the planet. It did over-promise and create this mess, and sure, I'd like to see the very best Siri possible and, honestly, have no choice but to wait for it.
But my disappointment is rooted in something far deeper and more disturbing than just Siri.
The long wait for a smarter Siri
(Image credit: Apple)
I've been chatting with Siri for almost 15 years and, in the early days, was impressed with its almost conversational capabilities. I wrote in detail about its numerous brain transplants and speech updates. Even as Alexa overtook it, I knew we were still in the horse-and-buggy stage of AI and I waited patiently for the magic I knew only Apple could bring.
My patience began to wane during the early days of the AI revolution as OpenAI and ChatGPT took the world by storm and then Microsoft supercharged awareness with Bing AI and eventually Copilot. Apple seemed to be sitting on its hands as Google and Samsung showed off impressive native AI feats in apps, on the web, and in Galaxy and Pixel phones, respectively.
WWDC 2024 changed all that and gave me hope that Apple was in the AI race, but there were worrisome signs even back then, signs that, because, well, it was Apple, I chose to ignore or forgive.
Conversations with Siri: Me: "Why?" Siri: "I don't know. Frankly, I've wondered that myself." #apple #iphone4s (October 17, 2011)
Chief among them was that Apple was quickly ceding key AI elements to the competition. The integration of ChatGPT and Google for complex natural language prompts was seen as a win, but it was also Apple throwing up its hands and saying, "Here. You handle this."
Anything more complex than "Hey Siri, play my Pump Up playlist" is handed over to ChatGPT. Essentially, you are leaving Apple land for a world managed by a third-party AI platform, albeit arguably the best one in the world.
I cut Apple slack because of the big promise: Siri would get better, and not by a little bit. It would be the intelligent assistant you dreamed of. An AI that, with your permission, could see everything on your iPhone and on its screen. It could take action based on your written or spoken prompts, and keep the conversation going so you got the best result out of Apple's ecosystem and all your data that's embedded in it.
I believed because, like so many others, I believe in Apple.
When they were magical
(Image credit: Lance Ulanoff)
Apple is a special company. It all but appropriated the word "magical." Nobody launched products like Apple. No other company has that aura. Its chief executives are mythical creatures. CEO Tim Cook is a bona fide celebrity, and his warm Alabama cadence can lull you into submission: yes, Apple will do that.
But the hard realization is that Apple is just another tech company and one that is facing perhaps its most difficult technical challenge.
Yes, I appreciate the transparency. I've worked on many projects that took longer than I anticipated. It's hard to tell your boss: this will be delayed. Apple, though, had to share the news with almost a billion users.
Over the years, I've seen Apple fail or underdeliver and watched how it's held to a higher standard than others. Its efforts to bring us the thinnest phone ever resulted in the possibly bendable iPhone 6, but Apple recovered with a stronger iPhone 7 and future designs that almost challenged you to bend them.
Apple's not great at apologies. 15 years ago, the late Steve Jobs held an apology/non-apology press conference to explain away "Antennagate." For those who don't recall, that was when the iPhone 4 came out and some people reported connectivity issues that may have been related to their hands covering the ill-positioned antennas on the outside of the phone. The company initially said we were holding the phones wrong, and then Jobs held that press conference to clear the air. Sort of. He never exactly apologized and did his best to minimize the issue and encourage reporters to move on.
It's not that Apple is incapable of admitting fault.
The art of the apology
(Image credit: Lance Ulanoff)
Back in 2017, Apple invited me to Cupertino to talk about a Mac Pro do-over. This was unheard of. Not only was Apple saying it made a mistake, it was detailing where it went wrong and how it planned to recover and deliver a new Mac Pro for its devoted creative and development customers.
Oddly, what I did not take away from that day is that Apple is fundamentally a company like any other, with hits and misfires, delays and struggles.
Similarly, I did not take the Apple Intelligence promises with a grain of salt. Even as the company slowly stepped its way through delivering fresh AI updates, I waited patiently, and confidently, for the big Siri update. I did have some frustration and tried, in my own way, to cajole Apple into action.
Even though Apple operates in secret, rumors and leaks are surprisingly precise about future activities. And for the longest time, they had the big Siri reveal pegged to iOS 18.4. When that didn't come, I was confused. And when Apple admitted that the update would be delayed to "in the coming year" I was surprised and upset.
That's when it finally sunk in.
My understanding of Apple as this precise, near-perfect, well-oiled machine was, if not wrong, artificial. Yes, it is a massive and highly accomplished company with a spectacular campus that has done more to change the world than most, but it's also a gigantic enterprise of regular people operating in a demanding corporate bureaucracy, trying to solve difficult engineering and programming challenges. Some of that is in evidence if you go by the latest Apple leak from Bloomberg, which describes an internal Apple meeting that sounds very much like your typical frustrated tech company leadership.
I don't know what went wrong, if it was the scale of the problem, the lateness of Apple's AI start, or someone inside over-promising about what they could deliver and when, but I should not have been so surprised.
Apple's not special. It's just a great company that often delivers great things. And sometimes it doesn't and we have to accept that.
The algorithms fueling AI models aren't sentient and don't get tired or annoyed. That's why it was something of a shock for one developer when AI-powered code editor Cursor AI told him it was quitting and that he should learn to write and edit the code himself. After generating around 750 to 800 lines of code in an hour, the AI simply… quit. Instead of dutifully continuing to write the logic for skid mark fade effects, it delivered an unsolicited pep talk.
"I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly," The AI declared. "Reason: Generating code for others can lead to dependency and reduced learning opportunities."
Now, if you’ve ever tried to learn programming, you might recognize this as the kind of well-meaning but mildly infuriating response you’d get from a veteran coder who believes that real programmers struggle in solitude through their errors. Only this time, the sentiment was coming from an AI that, just moments before, had been more than happy to generate code without judgment.
(Image credit: Screenshot from Cursor forum)
AI fail
Based on the responses, this isn't a common issue for Cursor, and it may be unique to the specific situation, prompts, and databases accessed by the AI. Still, it does resemble issues reported with other AI chatbots. OpenAI even released an upgrade for ChatGPT specifically to overcome reported 'laziness' by the AI model. Sometimes it's less like kind encouragement, as when Google Gemini reportedly threatened a user out of nowhere.
Ideally, an AI tool should function like any other productivity software and do what it’s told without extraneous comment. But, as developers push AI to resemble humans in their interactions, is that changing?
No good teacher does everything for their student; they push them to work it out for themselves. In a less benevolent interpretation, there's nothing more human than getting annoyed and quitting something because we are overworked and underappreciated. There are stories of getting better results from AI when you are polite, and even when you "pay" it by mentioning money in the prompt. Next time you use an AI, maybe say please when you ask a question.
ESHYFT, a technology platform designed for nurses across the United States, reportedly kept an unprotected database online, exposing thousands of sensitive records to anyone who knew where to look.
Security researcher Jeremiah Fowler found the database, which contained 86,341 records and exceeded 100 GB in size. The archive contained all sorts of sensitive data, from names and IDs to medical reports, and more.
ESHYFT is a technology platform that connects nurses (CNAs, LPNs, and RNs) with per diem shifts at long-term care facilities across the US, offering flexible work opportunities for healthcare professionals and a reliable staffing solution for facilities.
Addressing the problem
It is not known how long the database remained unprotected, or if any threat actors accessed it before Fowler did. We also don't know if ESHYFT maintains the database itself, or if it outsourced it to a third party.
“In a limited sampling of the exposed documents, I saw records that included profile or facial images of users, .csv files with monthly work schedule logs, professional certificates, work assignment agreements, CVs and resumes that contained additional PII,” Fowler explained, noting he reported it to Website Planet and, later, to ESHYFT.
“One single spreadsheet document contained 800,000+ entries that detailed the nurse’s internal IDs, facility name, time and date of shifts, hours worked, and more.”
“I also saw what appeared to be medical documents uploaded to the app. These files were potentially uploaded as proof for why individual nurses missed shifts or took sick leave. These medical documents included medical reports containing information of diagnosis, prescriptions, or treatments that could potentially fall under the ambit of HIPAA regulations.”
After Fowler reported his findings to ESHYFT, the firm locked the database down a month later, telling him it was "actively looking into this and working on a solution."