What do Ryan Gosling, Kristen Stewart, John Mayer, Barack Obama, and Tyler, the Creator have in common? They’re all worth in excess of $30 million, they’ve all graced the cover of GQ – and they’ve all been spotted with a $20 Casio watch strapped to their wrist.
In a world of uber-premium time-tellers, the humble Casio timepiece has maintained its status as a stylish, practical, and unashamedly affordable horological companion for celebrities and common folk alike, with the resin-strapped Casio F91W and stainless steel Casio A158WA proving the most popular models of the lot.
In fact, “popular” is something of an understatement. By Amazon's metrics, 9,000 people bought the Casio A158WA (aka Gosling's favorite) at the mega-retailer just last month, while an even more impressive 10,000 people bought the Casio F91W during the same period. In other words, these two Casio watches are hot property – and they’re both on sale right now.
The sport-friendly Casio F91W is down to just $13.16 (was $18.95), while the sexier Casio A158WA is available for just $19.99 (was $22.95). Both watches have been a touch cheaper in the past, but they’ve also been more expensive, so now is as good a time as any to pick up one of Casio’s best-selling models.
Today's best Casio watch deals

Casio's top-selling watch, the F91W, is also the brand's simplest. This resin-strapped model comes with a stopwatch, an alarm, an LED light, an automatic calendar, and an approximate battery life of seven years (!). The only downside? The F91W is water-resistant, but not fully waterproof, so you can't go swimming with it.
Casio's A158WA watch is a little glitzier than the more basic-looking F91W, but it still offers the same great practicality: you'll get an auto-calendar, a daily alarm, stopwatch functionality, and basic splash resistance.
Despite their extremely modest prices, both of these Casio watches represent supreme value for money. TechRadar’s Wearables Editor, Matt Evans, talked up the enduring appeal of the Casio F91W back in April, and he was still wearing it four months later, calling it “the best $15 [he’d] ever spent.”
For more smartwatch-related Cyber Monday deals, check out our roundup of the 19 best Cyber Monday smartwatch deals you can still buy.
We usually think about AI in terms of how the models and characters interact with humans. But what happens when AI personalities are left mostly on their own in a virtual world? AI startup Altera decided to find out by setting up a population of AI characters in the digital world of Minecraft for what it calls Project Sid. The result was an entire society of AI bots forming communities, taking on jobs to help that community, and even adhering to and spreading an in-game religion.
Altera set up Project Sid with up to 1,000 AI-controlled characters able to interact within Minecraft's open-world environment. Each AI character was powered by a large language model (LLM) along with specialized task modules. Altera began with groups of 50 agents able to engage with each other over 12 in-game days, or four real hours. After a couple of text prompts to kick things off, the AI personalities evolved on their own.
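Altera hasn't published its agent code, but a minimal sketch of how an LLM-plus-task-modules agent loop might be structured could look like the Python below. The class, prompt format, and the `llm` callable are all illustrative assumptions, not Altera's actual implementation.

```python
# Hypothetical sketch of a Project Sid-style agent loop; names and structure
# are illustrative assumptions, not Altera's actual code.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    personality: str                     # e.g. "outgoing" or "introverted"
    memory: list = field(default_factory=list)

    def act(self, observation: str, llm) -> str:
        # The LLM decides what to do next; specialized task modules (farming,
        # building, guarding) would translate that decision into game actions.
        prompt = (f"You are {self.name}, a {self.personality} villager.\n"
                  f"Recent events: {self.memory[-5:]}\n"
                  f"You observe: {observation}\n"
                  f"What do you do next?")
        decision = llm(prompt)
        self.memory.append((observation, decision))
        return decision
```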
The personalities of the AI characters emerged quickly, with more outgoing and more introverted agents setting different limits on how much they interacted with other AIs. They soon developed unique personality traits, established a kind of etiquette for their interactions, and made decisions based on their simulated experiences. AI characters would adjust their behavior based on the reactions of those around them, even favoring those who treated them more kindly.
When put into groups of 30, the characters spontaneously developed jobs within their community despite initially sharing the same goals of building a sustainable village and defending it from threats. Soon there were farmers, builders, and security guards at work. Some AI characters even became artists focused on beautifying the village with flowers and paint.
Altera also sought to mimic more complex societies, introducing a taxation system in which the AI characters could vote on policies for raising and spending the community's money. Factions of pro- and anti-tax agents debated and argued ahead of the votes, operating a lot like a real human community in some ways.
AI shares the (pasta) gospel

When the simulation grew to include up to 500 AI agents, Altera found a modern culture forming. The AIs would share culture and hobbies with each other, ranging from pulling pranks to an interest in environmentalism. It was at this level that the AI characters found religion. Specifically, they adopted the parody religion of Pastafarianism, known for its tongue-in-cheek worship of the Flying Spaghetti Monster. A small group of "priests" seeded the belief system, which then spread through towns, replicating the dynamics of cultural and religious proliferation in human history.
Of course, these AI characters aren't conscious of picking a religion or making any of their other choices; it's really just algorithms acting on patterns learned from their training data. But they do show how AI can convincingly mimic human behavior in ways that appear to stem from self-awareness. If you didn't know better, you might be fooled by the strikingly lifelike behavior of the AI characters and their cultures.
The experiment is impressive in what it shows about AI imitating humanity, but virtual societies like these do have larger value, according to Altera. The better that AI can reflect realistic human behavior, the better it will be at helping simulate how people would deal with different scenarios. It might help form social policies or guide the creation of disaster management plans. That may seem a stretch from Minecraft characters worshipping an airborne knot of noodles and meatballs, but unlike the Pastafarians, there's a lot more than faith behind it.
"These simulations, set within a Minecraft environment, reveal that agents are capable of meaningful progress – autonomously developing specialized roles, adhering to and changing collective rules, and engaging in cultural and religious transmission," Altera's researchers explained in a scientific report. "These preliminary results show that agents can achieve significant milestones towards AI civilizations, opening new avenues for large-scale societal simulations, agentic organizational intelligence, and integrating AI into human civilizations."
Amazon Web Services (AWS) has launched a new service to help businesses address cybersecurity issues and cyberattacks.
AWS Security Incident Response is designed to help businesses prepare for, respond to, and recover from different security incidents such as account takeovers, data breaches, and ransomware attacks.
AWS argues that addressing various security events has become too cumbersome. Between a flood of daily alerts, time-consuming manual reviews, errors in coordination, and problems with permissions, many businesses struggle to contain their security challenges.
Cutting down on time spent

“There is an opportunity to better support customers and remove various points of undifferentiated heavy lifting that customers face during security events,” the blog reads.
Therefore, AWS has introduced a tool that, first and foremost, automatically triages security findings from GuardDuty and supported third-party tools through Security Hub to identify high-priority incidents requiring immediate attention. Using automation and customer-specific information, the tool can also filter and suppress security findings based on expected behavior.
Furthermore, it aims to simplify incident response by offering preconfigured notification rules and permission settings. Users get a centralized console with different features such as messaging, secure data transfer, and more. Finally, AWS Security Incident Response offers automated case history tracking and reporting, which allows IT teams to focus on remediation and recovery.
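AWS hasn't shown the new service's own API calls in this announcement, but the kind of triage it automates can be illustrated with the existing Security Hub API. The boto3 snippet below is a minimal sketch that pulls active, high-severity GuardDuty findings still awaiting review; the region and filter values are illustrative, and this is not the Security Incident Response API itself.

```python
# Sketch: fetch high-priority GuardDuty findings via Security Hub (boto3).
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

response = securityhub.get_findings(
    Filters={
        "ProductName": [{"Value": "GuardDuty", "Comparison": "EQUALS"}],
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
    },
    MaxResults=50,
)

for finding in response["Findings"]:
    print(finding["Title"], "-", finding["Severity"]["Label"])
```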
AWS Security Incident Response is now available via the AWS management console and service-specific APIs in 12 AWS Regions globally: US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Seoul, Singapore, Sydney, Tokyo), Canada (Central), and Europe (Frankfurt, Ireland, London, Stockholm).
Incident response is critically important for businesses, especially in an era of increasing cybersecurity threats and reliance on digital infrastructure. It minimizes downtime and financial loss, protects the business’ reputation, ensures regulatory compliance, and keeps customer trust.
Via TechCrunch
Cerebras Systems says it has set a new benchmark in AI performance with Meta’s Llama 3.1 405B model, achieving an unprecedented generation speed of 969 tokens per second.
Third-party benchmark firm Artificial Analysis has claimed this performance is up to 75 times faster than GPU-based offerings from major hyperscalers. It was nearly six times faster than SambaNova at 164 tokens per second, roughly 32 times faster than Google Vertex at 30 tokens per second, and far ahead of Azure at just 20 tokens per second and AWS at 13 tokens per second.
Additionally, the system demonstrated the fastest time to first token in the world, clocking in at just 240 milliseconds - nearly twice as fast as Google Vertex at 430 milliseconds and far ahead of AWS at 1,770 milliseconds.
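Those multiples follow directly from the reported tokens-per-second figures; a quick back-of-the-envelope check using only the numbers quoted above reproduces the roughly 6x, 32x, 48x, and 75x gaps.

```python
# Sanity-check the speed multiples from the quoted tokens-per-second figures.
cerebras_tps = 969
competitors = {"SambaNova": 164, "Google Vertex": 30, "Azure": 20, "AWS": 13}

for name, tps in competitors.items():
    print(f"Cerebras is {cerebras_tps / tps:.1f}x faster than {name}")
```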
Extending its lead

“Cerebras holds the world record in Llama 3.1 8B and 70B performance, and with this announcement, we’re extending our lead to Llama 3.1 405B - delivering 969 tokens per second," noted Andrew Feldman, co-founder and CEO of Cerebras.
"By running the largest models at instant speed, Cerebras enables real-time responses from the world’s leading open frontier model. This opens up powerful new use cases, including reasoning and multi-agent collaboration, across the AI landscape.”
The Cerebras Inference system, powered by the CS-3 supercomputer and its Wafer Scale Engine 3 (WSE-3), supports full 128K context length at 16-bit precision. The WSE-3, known as the “fastest AI chip in the world,” features 44GB on-chip SRAM, four trillion transistors, and 900,000 AI-optimized cores. It delivers a peak AI performance of 125 petaflops and boasts 7,000 times the memory bandwidth of the Nvidia H100.
Meta’s GenAI VP Ahmad Al-Dahle also praised Cerebras' latest results, saying, “Scaling inference is critical for accelerating AI and open source innovation. Thanks to the incredible work of the Cerebras team, Llama 3.1 405B is now the world’s fastest frontier model. Through the power of Llama and our open approach, super-fast and affordable inference is now in reach for more developers than ever before.”
Customer trials for the system are ongoing, with general availability slated for Q1 2025. Pricing begins at $6 per million input tokens and $12 per million output tokens.
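At those rates, a rough per-request estimate is easy to work out; the request size in the sketch below is an arbitrary example for illustration, not a Cerebras figure.

```python
# Back-of-the-envelope cost and latency at the quoted Cerebras rates.
INPUT_RATE = 6 / 1_000_000     # dollars per input token
OUTPUT_RATE = 12 / 1_000_000   # dollars per output token

input_tokens, output_tokens = 100_000, 2_000   # hypothetical long-context request
cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
latency = 0.240 + output_tokens / 969          # time to first token + generation time
print(f"~${cost:.2f} per request, ~{latency:.1f}s end to end")
```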
Tech companies in China face a number of challenges due to the American export ban, which restricts access to advanced hardware from US manufacturers.
These restrictions cover cutting-edge GPUs from Nvidia that are critical for training large-scale AI models, forcing Chinese firms to rely on older or less efficient alternatives and making it harder to compete globally in the rapidly evolving AI industry.
However, as we’ve seen time and again, these seemingly insurmountable challenges are increasingly being overcome through innovative solutions and Chinese ingenuity. Kai-Fu Lee, founder and CEO of 01.ai, recently revealed that his team successfully trained its high-performing model, Yi-Lightning, with a budget of just $3 million and 2,000 GPUs. In comparison, OpenAI reportedly spent $80-$100 million to train GPT-4 and is rumored to have allocated up to $1 billion for GPT-5.
Making inference fast too

“The thing that shocks my friends in the Silicon Valley is not just our performance, but that we trained the model with only $3 million," Lee said (via @tsarnick).
"We believe in scaling law, but when you do excellent detailed engineering, it is not the case you have to spend a billion dollars to train a great model. As a company in China, first, we have limited access to GPUs due to the US regulations, and secondly, Chinese companies are not valued what the American companies are. So when we have less money and difficulty to get GPUs, I truly believe that necessity is the mother of invention."
Lee explained the company’s innovations include reducing computational bottlenecks, developing multi-layer caching, and designing a specialized inference engine. These advancements, he claims, result in more efficient memory usage and optimized training processes.
“When we only have 2,000 GPUs, the team has to figure out how to use it,” Kai-Fu Lee said, without disclosing the type of GPUs used. “I, as the CEO, have to figure out how to prioritize it, and then not only do we have to make training fast, we have to make inference fast... The bottom line is our inference cost is 10 cents per million tokens.”
For context, that’s about 1/30th of the typical rate charged by comparable models, highlighting the efficiency of 01.ai's approach.
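The arithmetic is straightforward: 10 cents per million tokens against a roughly $3-per-million comparator, a rate implied by the 1/30th figure rather than one 01.ai has quoted, works out as in the sketch below.

```python
# Back out the comparator rate implied by the "1/30th of the typical rate" claim.
yi_lightning_rate = 0.10                 # dollars per million tokens (01.ai's figure)
implied_typical_rate = yi_lightning_rate * 30

tokens = 1_000_000_000                   # serving one billion tokens as an example
print(f"Implied typical rate: ${implied_typical_rate:.2f} per million tokens")
print(f"Yi-Lightning cost for 1B tokens: ${yi_lightning_rate * tokens / 1e6:,.0f}")
print(f"Typical cost for 1B tokens:      ${implied_typical_rate * tokens / 1e6:,.0f}")
```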
Some people may be skeptical of the claim that you can train an AI model with limited resources and "excellent engineering", but according to UC Berkeley's LMSYS leaderboard, Yi-Lightning ranks sixth globally in performance, suggesting that, however it has done it, 01.ai has indeed found a way to be competitive with a minuscule budget and limited GPU access.
Good morning and welcome to our live coverage of AWS re:Invent 2024!
We're live in Las Vegas and ready for a jam-packed few days at the computing giant's annual event.
Starting tomorrow, we're looking forward to a host of keynotes, news, announcements and much more - so you can follow our live blog below for all the updates you'll need at AWS re:Invent 2024.
Good morning from Las Vegas! The sun is out and preparations are underway for AWS re:Invent 2024, which officially kicks off tonight with an introductory keynote from Peter DeSantis, Senior Vice President of AWS Utility Computing, before the main action starts tomorrow.
As noted, the main program at AWS re:Invent 2024 starts tomorrow, when AWS CEO Matt Garman will take to the stage for his keynote.
There will no doubt be lots of news and updates, and we already know the identity of one special guest: his predecessor and current Amazon President and CEO, Andy Jassy.
There is a worrying new phishing kit that enables cybercriminals to go after people’s Microsoft 365 accounts, even those protected by multi-factor authentication (MFA). It is called “Rockstar 2FA”, and it goes for $200 on the dark web.
Cybersecurity researchers from Trustwave recently discovered and analyzed the new kit, noting that since August 2024 it has been aggressively promoted on Telegram and in other cybercriminal communities.
The kit's developers claim it supports Microsoft 365, Hotmail, GoDaddy, and SSO logins, and offers randomized source code and links to evade detection. Furthermore, it uses Cloudflare Turnstile captchas to screen visitors and make sure the phishing page isn't being loaded in a sandbox or analyzed by bots.
Bypassing MFA and stealing cookies

Phishing, as a method of attack, hasn't changed much over the years. Crooks send out emails with fake documents, or fabricate urgent warnings that users supposedly need to act on immediately or face the consequences. Acting in haste, victims end up infecting their devices with malware, losing sensitive data, granting valuable access to cybercriminals, and more.
To counter this method, most businesses these days deploy multi-factor authentication, a second layer of authentication that prevents unauthorized access even when crooks steal the login credentials. Criminals, in turn, responded with adversary-in-the-middle (AiTM) techniques, something Rockstar 2FA integrates as well.
By using the phishing kit, attackers can create fake Microsoft 365 login pages. When the victim enters their credentials there, the phishing page automatically relays them to the legitimate login page, which responds with an MFA challenge. The phishing page passes that challenge back to the victim, who unwittingly completes it, ultimately leading to the account being compromised.
Finally, Rockstar 2FA will grab the authentication cookie being sent from the service to the user, allowing the attackers to remain logged in.
The researchers concluded that since May 2024, which seems to be the kit's date of origin, it has been used to set up more than 5,000 phishing domains.
Via BleepingComputer