With businesses increasingly relying on AI tools for key processes and tasks, hallucinations are proving to be a growing challenge.
In response, Amazon Web Services (AWS) has announced a new tool designed to address them.
Revealed at its AWS re:Invent 2024 event, the new Automated Reasoning checks system looks to cut down on potentially damaging errors caused by hallucinations, which could see businesses face security risks or financial losses.
An end to AI hallucinations?
At its simplest level, a hallucination is when an AI system generates incorrect or fabricated output and presents it as fact, often because of errors or gaps in the data it was trained on.
Described by the company as "the first and only generative AI safeguard that helps prevent factual errors due to model hallucinations", AWS' Automated Reasoning checks look to solve this by cross-checking the responses generated by a model against information provided by the customer. If it can't determine if the answer matches up exactly, the response gets sent back to the model for checking.
Available as part of Amazon Bedrock Guardrails, the company's system for keeping AI models accurate and reliable, the new checks will also attempt to trace how the model came up with its answer and, if it deems the answer erroneous, will compare it to the customer's information.
It will then present its answer alongside the initial response from the model, meaning customers can see the possible gap between the truth and the response, and tweak their model accordingly.
AWS gave the example of a healthcare provider using the tool to make sure customer enquiries about specific policies are given accurate answers.
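Developers reach these checks through Bedrock Guardrails rather than a standalone service. As a rough illustration of how a response check might be wired up, here is a minimal Python sketch using boto3's ApplyGuardrail API – the guardrail ID, version, and region are placeholders, and the guardrail itself would need an Automated Reasoning policy configured against the customer's own policy documents:

```python
import boto3

# Placeholder identifiers - a real guardrail with an Automated Reasoning
# policy must be created in Amazon Bedrock beforehand.
GUARDRAIL_ID = "my-guardrail-id"
GUARDRAIL_VERSION = "1"

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def response_passes_checks(model_answer: str) -> bool:
    """Validate a model's answer against the configured guardrail."""
    result = client.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source="OUTPUT",  # checking model output, not user input
        content=[{"text": {"text": model_answer}}],
    )
    # "GUARDRAIL_INTERVENED" means a check flagged the response, for
    # example as inconsistent with the customer-supplied policies.
    return result["action"] != "GUARDRAIL_INTERVENED"

answer = "Your plan covers out-of-network emergency care in full."
if not response_passes_checks(answer):
    print("Response flagged - send it back to the model for revision.")
```

In a production flow, a flagged response would be regenerated or corrected before it reaches the customer, matching the send-it-back loop AWS describes.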
"Over time, as generative AI transforms more companies and customer experiences, inference will become a core part of every application," said Dr. Swami Sivasubramanian, vice president of AI and Data at AWS.
"With the launch of these new capabilities, we are innovating on behalf of customers to solve some of the top challenges, like hallucinations and cost, that the entire industry is facing when moving generative AI applications to production.”
If you’ve recently received an SMS about your Netflix account being suspended, chances are it’s a scam. Fraudsters are targeting phone numbers in 23 countries with a new text message campaign, trying to swindle Netflix users out of their account credentials and payment information.
With more than 280 million paid subscribers worldwide, it’s no surprise that scammers would use Netflix as the hook in a phishing scheme. Even if the fraudulent message is sent out with a scattergun approach, there’s a good chance that many of the recipients will have a Netflix account – and potentially be tricked into parting with their personal information.
According to screenshots we’ve seen, there are a few variations of the fake SMS, including versions in several different languages. Each text message has the same basic structure: it claims to be from Netflix and states that there’s been an issue processing the subscription payment for your account. It then asks you to update your details and shares a URL.
If you click on the link, it will take you to a fake sign-in page designed to look convincingly like the real Netflix website. Enter your details here and you’ll be handing them over to fraudsters, who can use them to access your Netflix account. With these details, some scammers may try to sell your account on the dark web.
The scam also goes a step further. The next screen shows a warning message claiming that your account is temporarily suspended due to a payment issue. It then asks you to make a payment using a credit or debit card. Do this and the scammers will have your card details. Alternatively, the scam gives you the option to pay using a Netflix gift card, which would hand the value of that card to the fraudsters.
How to keep your Netflix account secure
Like most phishing scams, the Netflix SMS con relies on a few factors to trick you into parting with your personal information. The text message and website look real enough that people might give them their attention. On top of that, the account suspension alert is designed to create a sense of urgency.
For many users, losing Netflix account access due to a missed payment would be a serious issue. A sense of panic, as well as a desire to resolve the problem quickly, could cause people to act without thinking twice, handing their sensitive personal information to fraudsters even when alarm bells should be ringing.
Netflix accounts are particularly vulnerable to phishing attacks as Netflix doesn’t offer two-factor authentication. That means anyone with your username and password will be able to sign in to your account. Because of this, you need to be vigilant about messages claiming to be from Netflix.
Netflix has a dedicated article about phishing emails and texts on its website. It says, “If you get an email or text message (SMS) asking for your Netflix account email, phone, password, or payment method it probably didn't come from Netflix.”
Users need to be especially careful when it comes to clicking links. Netflix says, “If the text or email links to a URL that you don't recognize, don't tap or click it. If you did already, do not enter any information on the website that opened.”
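That advice is worth unpacking: what matters is the link’s actual hostname, not whether “netflix” appears somewhere in the address. A minimal Python sketch of the idea – the allowlist here is illustrative, not an official list of Netflix domains:

```python
from urllib.parse import urlparse

# Illustrative allowlist of legitimate Netflix hostnames (not exhaustive).
TRUSTED_HOSTS = {"netflix.com", "www.netflix.com", "help.netflix.com"}

def looks_legitimate(url: str) -> bool:
    """Check whether a link actually points at a known Netflix hostname."""
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_HOSTS

# A lookalike domain fails the check even though it contains "netflix".
print(looks_legitimate("https://netflix.com/account"))             # True
print(looks_legitimate("https://netflix.account-verify.example"))  # False
```

The second URL would fool a quick glance, which is exactly why Netflix advises against tapping unrecognized links in the first place.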
If you’re ever in doubt, the safest thing to do is navigate directly to Netflix.com and sign in there to check the status of your account. If you believe your account has been compromised, you should change your password. You can also sign out of unrecognized devices. Netflix has more information about how to keep your account secure on its website.
If you do receive an SMS or email which you believe to be a scam, you should forward it to phishing@netflix.com, then delete the message.
The many AI-powered chatbots emulating famous fictional personalities are leaving the digital halls of Character.AI as the company seeks to crack down on AI imitations of intellectual property. The oft-ignored specter of copyright law hovers over the ranks of virtual companions imitating your favorite fictional personalities, taking a scythe to the names you recognize from films, books, TV shows, and comics.
Character.AI confirmed in a statement to Futurism that it is seeking to comply with copyright law, which seems like an obvious choice at first glance. No company wants to be vulnerable to legal attacks by giant corporations with infinite lawyers. That said, a huge amount of discourse on Character.AI involves users engaging with AI versions of fictitious people (or cartoon rabbits, hobbits, and more). Fans roleplaying friendship with the AI simulacra are upset, but Character.AI's larger goals demand a bit more fidelity to intellectual property laws.
Character.AI's statement just restates the relevant part of a blog post explaining how the company recently updated its terms and conditions. The changes emphasized making the platform safer for children and upping content moderation as well as boosting copyright law enforcement.
"We conduct proactive detection and moderation of user-created Characters, including using industry-standard and custom blocklists that are regularly updated. We proactively, and in response to user reports, remove Characters that violate our Terms of Service," Character.AI explained in its post. "Users may notice that we’ve recently removed a group of Characters that have been flagged as violative, and these will be added to our custom blocklists moving forward."
Harold Putter and the Magician's Rock
The effort appears incomplete so far. Characters bearing a figure's exact name are mostly gone, but more elaborate or silly variants survive. There is no more Harry Potter or Daenerys Targaryen, but Harold Putter and Dany Dragonlady live to talk for at least a little longer.
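Character.AI hasn't said how its blocklists are implemented, but an exact-match filter over normalized names would produce exactly this pattern. A hypothetical Python sketch of why the parody variants slip through – the blocklist and normalization are assumptions, not the company's actual code:

```python
# Hypothetical blocklist filter - Character.AI hasn't published its approach,
# so this only illustrates why exact-name matching leaves loopholes.
BLOCKLIST = {"harry potter", "daenerys targaryen"}

def is_blocked(character_name: str) -> bool:
    """Flag a character whose normalized name matches a blocklist entry."""
    normalized = " ".join(character_name.lower().split())
    return normalized in BLOCKLIST

print(is_blocked("Harry Potter"))   # True - exact name, removed
print(is_blocked("Harold Putter"))  # False - parody variant slips through
```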
Even with the loopholes in place, it's questionable whether Character.AI's popularity will survive the cull. Talking to figures from favorite films and books, even just an AI imitation of them, entices plenty of people otherwise uninterested in AI. Features like audible voices and the two-way voice conversations available with Character Calls might not have the same draw on their own. Will users stick around for original characters, or do they prefer those based on historical celebrities?
I like Tim Cook. The now long-time Apple CEO is gracious, smart, and as close to a human sphinx as you can imagine. He rarely drops major news, either casually or when the media are grilling him.
Cook did not disappoint in his latest wide-ranging interview with Wired's Steven Levy. One of the best in the business, Levy peppered Cook with questions about everything from the iPhone 16's new Camera Control button to Apple Intelligence, the company, and his own legacy. Cook didn't exactly break news, but there were areas where he revealed a bit more about himself and some of Apple's strategic decisions relating to AI, mixed reality, and what comes next for Cook himself.
Apple Intelligence, Apple's brand of AI that Cook insists is not a pun, has been slowly rolling out to supported iPhones, iPads, and Macs, with each iteration getting a bit closer to what Apple promised during its June WWDC 2024 keynote. Cook didn't walk through any new features, though he does have a point of view on the fine line between utility and taking over. Cook tends to believe that AI is an assistant (like a copilot, I guess), not something that straight-up does things for you.
However, Cook's perspective on charging for additional, and maybe more powerful, Apple Intelligence features was more interesting. It's not a discussion they've been having on the Apple campus.
"We never talked about charging for it," Cook told Levy. Now, that doesn't mean it's off the table, but since Apple and Cook view Apple Intelligence as similar to multitouch on the iPhone, AI is likely a feature that adds value to all the other products and services Apple charges for. Apple could simply raise the prices on them to cover the cost of building and supporting Apple intelligence features.
Vision Pro realities
Apple has been mum on Vision Pro sales. The powerful VR and mixed-reality headset is undoubtedly the apex of Apple's consumer electronics capabilities and the company makes you pay dearly for it – $3,500 – which may account for consumer apathy.
Cook didn't speak directly about sales performance, but he's still bullish about the headset. I think, though, he may have acknowledged that the pricey wearable is not for everyone. Here's how Cook characterized it to Levy:
"It’s an early adopter product, for people who want tomorrow’s technology today."
Cook insisted that the ecosystem is flourishing, which may be a sign of product category health, but then he added one encouraging bit of almost news about what might come next.
Levy asked about Meta Orion and Snap AR glasses. These lighter and more glasses-like wearables focus on AR experiences, and I wondered if Vision Pro's next iteration could be headed in that direction.
"Yes," Cook told Levy, "It’s a progression over time in terms of what happens with form factors."
I think the market cannot wait to try out those next form factors.
After Cook
Some believe that Apple hardware lead John Ternus is the next likely Apple CEO, but for Ternus to step in, Cook would have to step away. The current Apple CEO, however, did not paint a picture of someone running out of steam or becoming less engaged with the brand.
The Apple-Tim Cook love affair is still very much alive. Cook is not planning his exit and told Levy that he would not "do it until the voice in my head says, 'It's time.'"
Cook said he loves the job and can't imagine his life without it. Put another way, Tim Cook will be steering the Apple ship and building upon his legacy for a while yet – a legacy he wants to center on health. "We have research going on. We’re pouring all of ourselves in here, and we work on things that are years in the making," Cook told Levy.
I think that legacy will more likely be Apple Silicon for many years to come, though.
The subject of whether Macs can get viruses – and if they do, whether you should install antivirus software – is a contentious one among Apple fans.
On one side are those who believe antivirus apps are more hassle than they’re worth, slowing down your computer in the face of a minimal threat level. On the other, there are people who urge caution against a changing world of hackers and virus creators. Adding fuel to the fire, a new report from Mac security firm Moonlock suggests the threat from AI-powered malware is on the rise.
It's all a bit of a mess, and it can often be hard to know which side to believe. But with this new report shedding light on some of the tactics hackers are using to victimize Mac users, could all of that be about to change? Here's our verdict.
The myth: Macs don’t get viruses
There’s a long-held belief that Macs don’t get viruses, with adherents claiming that a mixture of common sense (don’t download torrents and pirated software, for instance) and built-in macOS tools like Gatekeeper are sufficient to keep you protected from anything that’s thrown your way.
There’s some weight to those claims – Macs certainly get far less malware than Windows PCs thanks to a combination of macOS’s sturdy antivirus tools and Apple’s much lower market share being less attractive to would-be attackers. But the idea that Macs are totally invulnerable to spyware, trojans, and other digital nasties is wide of the mark.
In fact, we’ve seen reports of Mac virus threats increasing at a rapid rate over the last few years, with malware writers honing their skills in order to target Apple fans. Even North Korean hackers are getting in on the act, such is the growing importance of macOS to threat actors.
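If you want to confirm that Gatekeeper, one of those built-in protections, is actually switched on, its status can be queried from the Terminal. A minimal sketch wrapping Apple's spctl tool in Python (macOS only):

```python
import subprocess

# Ask macOS whether Gatekeeper's security assessments are enabled.
# spctl is Apple's built-in policy control tool; this call only reads status.
result = subprocess.run(
    ["spctl", "--status"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # "assessments enabled" means Gatekeeper is on
```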
The reality: They can – but the threat of AI tools may be overblown
With the simultaneous rise of artificial intelligence (AI) chatbots, there’s been notable concern among some that tools like ChatGPT will empower even novice hackers to create devastating malware strains that can get around the most robust of Mac defenses.
Now, a new report from Mac security firm Moonlock seems to confirm some of those fears. It cites cases of hackers creating working malware just by prompting an AI chatbot to start coding.
For instance, Moonlock’s report includes messages posted by a hacker known as 'barboris', who shared code produced by ChatGPT on a malware forum. There, barboris explained that they had little coding experience, but were still able to get ChatGPT to do their bidding with a little creative prompting.
However, before we get too panicked, ChatGPT is not quite the all-powerful malware-creation tool that it seems. As with any other experience of using an AI chatbot, it can be prone to mistakes and garbled nonsense, which has the potential to ruin any would-be hacker’s day. If someone with no malware experience were to use ChatGPT to create a virus, they might struggle to troubleshoot it and forge something workable.
I previously spoke to a range of security experts on this very subject, and they were skeptical about ChatGPT’s ability to create effective malware. Chatbots have built-in guardrails to prevent people from crafting malware code, and for Martin Zugec, the Technical Solutions Director at Bitdefender, if a person is relying on ChatGPT to create code for them, they probably don’t possess the skills to bypass these guardrails.
Due to that, Zugec says, “the risk posed by chatbot-generated malware remains relatively low at this time.” What’s more, Zugec adds that “the quality of malware code produced by chatbots tends to be low, making it a less attractive option for experienced malware writers who can find better examples in public code repositories.”
In other words, while barboris may have been able to put together a virus using ChatGPT despite their limited hacking knowledge, a more experienced coder would likely get better results and more effective malware from public repositories and their own honed skills.
Still, clearly it is possible for inexperienced hackers to code up working viruses with little more than ChatGPT, a handful of effective prompts, and plenty of patience. This is something we’ll have to keep a close eye on over the coming years.
Chinese electronics manufacturer Teclast has launched its latest tablet, the P50AI. Powered by an Allwinner A733 octa-core processor with a 3 TOPS NPU, it can handle a number of AI-driven tasks such as video upscaling, color optimization, hands-free gesture control, and text extraction. It also offers an AI posture awareness feature, which reminds users to sit up straight if it spots them slouching.
The device’s 11-inch IPS display offers a 90Hz refresh rate for smooth visuals, and the tablet comes with up to 16GB of memory (6GB LPDDR5 RAM plus 10GB of virtual memory) and 128GB of UFS 3.0 storage, expandable by an additional 1TB via microSD.
Running on Android 15 with TeclastOS, the tablet offers a handy PC Mode, which lets users switch to a desktop-like experience. They can open multiple windows or apps, resize and rearrange them, and pair the tablet with a keyboard and mouse. Multi-window management and single-app screen recording features are also on offer.
Dual USB-C ports
A standout feature of the P50AI, for me, is its dual USB-C ports, which can be used simultaneously. The primary port supports 10Gbps data transfer and video output via DisplayPort (DP), allowing the tablet to connect to external displays. A low-cost tablet like this with video-in capability, enabling it to function as a portable monitor, remains on my wish list.
Connectivity options are provided through Wi-Fi 6 and Bluetooth 5.4. Other features include a 7000mAh battery, dual speakers, a 3.5mm headphone jack, and dual-mic noise reduction. The camera setup includes a 13MP rear camera with an AI secondary lens for quicker focus and shooting, and a 5MP front camera for selfies and video calls.
The tablet has a sleek metal body with rounded edges and comes in a Guava Teal finish. It measures 258mm x 170mm x 8.3mm and weighs 530g. Priced at around $145 on Temu and just $125.32 on AliExpress, the P50AI should be available on Amazon soon.
Some of the Windows 11 testers trying out the Recall feature (which recently went live for Windows Insiders) ran into a baffling issue where it didn’t work at all, and Microsoft has just explained the problem – but failed to provide a fix for those affected.
As we saw last week, after Recall was finally deployed in the Dev channel for Windows 11, it was immediately hit by some bugs. Indeed, some testers complained that it refused to save any snapshots at all (those being the regular screenshots Recall takes to facilitate its AI-supercharged search functionality).
According to an update on Microsoft’s blog post for the preview build in question, the glitch happens to Windows 11 users who first install patch KB5046740 – which is the preview update for November – and then go on to install build 26120.2415.
Essentially, something in the KB5046740 optional update for Windows 11 clashes with the Dev channel build, and throws a serious spanner in the works for Recall.
Microsoft’s advice is: “We recommend you not install this preview update before joining the Dev channel until we fix the issue in a future update.”
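If you haven’t yet joined the Dev channel, it’s worth checking first whether the November preview update is already on your system. One way is via PowerShell’s Get-HotFix cmdlet; here’s a rough Python sketch – the detection approach is our suggestion, not Microsoft’s:

```python
import subprocess

# List installed Windows updates via PowerShell's Get-HotFix cmdlet.
# Note: Get-HotFix doesn't always surface every cumulative update, so
# treat a miss as "probably not installed" rather than a guarantee.
result = subprocess.run(
    ["powershell", "-Command",
     "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
    capture_output=True,
    text=True,
)
installed = set(result.stdout.split())

if "KB5046740" in installed:
    print("KB5046740 found - joining the Dev channel now may break Recall.")
else:
    print("KB5046740 not found - per Microsoft, it should be safe to join.")
```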
Analysis: Already in this pickle? You’re out of luck, it seems
Of course, Microsoft doesn’t tell us what to do if you’re already in this pickle, and you’ve installed the preview update before deciding to join the Dev channel in order to test Recall. So, we can only presume that you’re going to need to reinstall Windows 11 to fix this (or just put up with Recall not working – and maybe never working, going forward with Dev test builds).
Recall is, of course, a feature for Copilot+ PCs only – and just Arm-based Snapdragon devices to begin with, too – so there will be a limited pool of testers anyway. And an even smaller subset who went this route before installing the Dev build.
Even so, that’s still a bunch of users who are left in the lurch, but such are the perils of being a Windows 11 tester. Especially in the earlier testing channels, Canary and Dev, where the changes brought in are fresher and more likely to suffer from bugs.
Via The Register