While it didn’t start on time, once Meta CEO Mark Zuckerberg took the stage for the opening keynote at Meta Connect 2024, it’s safe to say the event was fast-moving and news-making. Zuck and co showed off a ton of AI tools – including new celebrity voices and live translation for Meta AI – plus new hardware in the shape of the Quest 3S, a futuristic-looking AR glasses prototype, and AI updates for the Ray-Ban Meta Smart Glasses.
You can catch up with everything that was announced at our Meta Connect live blog, including some hot takes from our Editor-at-Large Lance Ulanoff, who was at the event, but below, we’re sharing the six biggest things that Meta unveiled at its Connect keynote.
1. A first glimpse at the prototype Orion AR glasses

(Image credit: Future/Jacob Krol)

The honor of biggest surprise at Meta Connect goes to Mark Zuckerberg showing off Meta's first fully holographic AR glasses, dubbed Orion. These take the familiar shape of glasses, much like the Ray-Ban Meta Smart Glasses, albeit a bit thicker.
There's a lot of technology packed inside, including a new display architecture. These glasses are fully see-through, and you’ll control them with hand gestures read by a neural wristband.
In several demos, we saw the Orion glasses being used for games and opening windows overlaid onto whatever you’re looking at. We got a quick snippet of video calling using personal avatars as well.
Suffice it to say, the Orion prototypes were impressive, even from the pre-recorded demos and sizzle reel of folks trying them out. Testers mentioned how light they are, and Mark Zuckerberg waxed lyrical about the work that's gone into them. A magnesium frame keeps the glasses from feeling heavy – at least at this stage of development – and rather than glass, the lenses are silicon carbide, through which light is projected to create the displays. They're still a few years away, and Meta noted that it needs to bring the price down, although it will be releasing early developer kits to a select few.
2. The Meta Quest 3S is official

(Image credit: Lance Ulanoff / Future)

It’s been rumored for quite some time, but Meta has made the Quest 3S officially official. It’s now the entry point to the Quest mixed-reality headset lineup, and it has an affordable price to match, starting at just $299.99 / £289.99 for the 128GB model.
Facing the world, the Quest 3S has a similar overall build to the Quest 3 with a redesigned front, notably the camera stack, but the goal is to deliver “high-quality mixed reality” to the masses at a cheaper price point. In our early hands-on, the onboard Qualcomm processor was plenty powerful to run Horizon OS, making it a breeze to play games like Batman: Arkham Shadow and multitask with multiple windows.
We can’t wait to spend more time with the Quest 3S, but in the meantime, check out our hands-on Meta Quest 3S review for what’s shaping up to be a compelling and affordable headset.
3. You can now chat with Meta AI, and even 'chat' with celebrities

(Image credit: Future / Lance Ulanoff)

Meta announced some huge Meta AI updates at Meta Connect 2024, including the introduction of voice to the AI chatbot – and not just any voices, but those of celebrities like John Cena, Keegan-Michael Key, and Kristen Bell. Now you can have realistic, human-like conversations with your smartphone in apps like Facebook, Instagram DM, WhatsApp, and Messenger. This brings Meta AI right up to date with other big-name AI companies like OpenAI, Google, and Apple, which are all introducing voice modes to their AI products.
4. Major updates for Ray-Ban Meta Smart Glasses, but no new model

(Image credit: Ray-Ban / Meta)

While it’s not as monumental as the prototype Orion glasses, Meta announced a laundry list of updates for its Ray-Ban Smart Glasses. First, these will now offer hands-free access to your favorite songs, audiobooks, and podcasts through services like Spotify and Audible. What's more, instead of asking for a specific song, you could ask for a pop tune by Taylor Swift or maybe a classic folk song. Again, Meta AI is pushing towards more natural conversations.
Beyond song requests, these smart glasses will be able to handle some language translation queries by the end of the year. You can look at something written in Spanish, French, or Italian and ask Meta AI to translate it. We gave this a go, and it worked in a demo space. The Ray-Ban Meta Smart Glasses will also score visual reminders, lenses that can transition into sunglasses faster, and Meta AI for video. All of these are due to arrive before 2025.
5. Meta AI translation for Instagram Reels

(Image credit: Facebook)

An incredible new AI translation tool for Instagram and Facebook Reels allows content creators to reach a wider audience thanks to automatic dubbing and lip-syncing. The AI feature is currently in testing in the US and Latin America, but Meta hopes to expand these translation tools to more languages in the future.
Imagine a world where you can watch portrait video content from around the world and hear it in your language without the need for subtitles - this could be a revolutionary new AI feature for anyone who consumes video content on Instagram or Facebook.
6. AI image generation on Facebook and Instagram

(Image credit: Lance Ulanoff / Future)

Meta is adding its Imagine features to Facebook and Instagram to compete with some of the best AI image generators on the market, similar to Apple’s Image Playground or X’s Grok, which let you create images of anything you can dream of.
Built into Meta’s social media platforms, you’ll be able to easily share these AI-generated images with your friends as well as create posts for your feeds and stories to add a new dimension to your Facebook and Instagram accounts.
Fitbit's app is changing. Last week, the Google-owned health and fitness app (which is one of the best fitness apps overall) altered the way its heart rate page was laid out. This time, it's the turn of Stress Management, Body Response, and Mindfulness.
Spotted by 9to5Google, these changes are reportedly made with Google's unified app-design language, Material 3, in mind for cross-app consistency. As Fitbit.com gets shuttered and migrates to the Google store, Google is presumably keen for Fitbit's app to be consistent with the rest of its Android and Pixel portfolio.
The changes, which will come into effect with update version 4.26.1, will involve toolbar tabs replacing chronological timelines. Stress Management and Body Response get "Day" and "Week" tabs to better monitor your last seven days, while Mindfulness gets "Week", "Month" and "Year" tabs to better record meditation streaks.
A few features also appear to have been renamed, with the "Responsiveness" metric under your activity score renamed "Physical Calmness", and "Exertion Balance" rechristened to "Activity Balance". The changes can also be seen in posts by eagle-eyed users on the Fitbit subreddit.
It's unclear if these renamings imply a change in the ways in which your scores are calculated, but we don't believe this is currently the case.
To get the updated version of the app without waiting for an automatic update, you'll need to head to the Play Store app. Under the menu icon, tap the "Update" button next to the Fitbit app.
At Meta Connect 2024, Mark Zuckerberg gave us our first look at Meta's first fully functioning holographic AR glasses, codenamed Orion. And they look pretty damn cool.
As a 'one more thing' moment to end his section of the presentation – which also introduced us to the Meta Quest 3S, new Ray-Ban Meta Smart Glasses features, and Meta AI updates including voice – Zuckerberg gave us a first look at his company's AR glasses efforts: the culmination of what he says is a decade of work.
Even at this prototype stage, Orion looks pretty damn impressive. Acting just like you might expect, these glasses can provide a virtual overlay to augment what you see in the real world; this includes highlighting certain objects, letting you video chat with another person's life-like persona, and browsing the web.
And all of this comes in specs that look pretty normal and are lightweight, coming in at under 100g, though Zuckerberg himself admitted the design isn't yet finalized, as the glasses aren't as fashionable as he wants them to be (and I'll be the first to agree that fashion is vital for tech wearables).
This is a breaking story; we'll update this article as we learn more.
Meta isn't redesigning its increasingly popular Ray-Ban Meta Smart Glasses, but it is pouring attention on them in the form of new colors, faster transition lenses, and a lot more Meta AI capabilities.
At Meta Connect 2024 at the Meta Headquarters in Menlo Park, California, I got a chance to try out the smart glasses' latest capabilities, and it's clear to me that this little brain transplant will help raise Ray-Ban Meta's wearable companion capabilities.
Most of my experiences were in a sample Ray-Ban Meta Pop-up store, the first of which will soon appear in Los Angeles. It's a place where you can experience all of the smart glasses' various features, including Meta AI-powered ones, buy glasses, and even get them personalized with a custom engraving.
Some of Meta AI's updated capabilities are thanks to better third-party app integrations. Meta AI can now play Spotify Music, and the requests can be based on artists, tracks, or even moods.
Sing a song

I donned a tortoiseshell pair of Ray-Ban Metas and asked Meta AI to play me a happy song by Taylor Swift. Granted, that was a bit of a fumbled request, so I guess I wasn't surprised when Meta started playing the Happy Birthday song through the smart glasses' speakers, which are positioned in the stems, near my ears.
I regrouped and asked for a Taylor Swift song (I know, shockingly not Hot to Go by Chappell Roan), and after a second delay, it played Shake It Off.
Meta AI now has vision capabilities, so there were places throughout the demo store where I could look at things and ask Meta AI what I was looking at.
(Image credit: Future)

There was an LA city street diorama in the middle of the store. I looked at it and asked Meta AI, using the prompt "Hey, Meta," what I was looking at. It took a few moments for Meta AI to reply, and I thought it hadn't heard me, but then it finally said, “You’re looking at a diorama featuring various cars and palm trees,” which was accurate.
With Meta AI, the Ray-Ban Meta Smart Glasses also have some basic translation capabilities. I looked at a Spanish-language sign in the pop-up store and asked Meta AI to “translate this sign.” I heard the smart glasses take a picture (it sounds like a camera shutter), and then Meta AI said, “Flower market, Mercado De Flores.” I thought the Spanish pronunciation at the end was a nice touch.
(Image credit: Future)

You can also use Meta AI to set visual reminders, though it's a kind of clumsy process where you look at something and then tell Meta AI to remind you to do something later. I asked it to remind me to buy a book in five minutes. It might be just as quick to take out your phone and put in a reminder, but if you're in the habit of doing this a lot, using Ray-Ban Meta and Meta AI might be a time saver.
(Image credit: Future)

We also used Meta AI to read a QR code and send the link to a phone, and I spotted some people in the corner of the room getting their glasses engraved. One cool little demo was Ray-Ban Meta's improved transition lenses. I watched as a Meta employee used a blue light to quickly turn the lenses a nice, dark shade. I didn't even know blue light could do that.
(Image credit: Future)

Ray-Ban Meta Wayfarer Smart Glasses still start at $329. Remember that not all of Meta AI's features (like Vision) are available worldwide, so check before you buy this wearable companion.
The Ray-Ban Meta Smart Glasses might be the most popular smart glasses around, and that’s likely thanks to their feature set and housing. Meta partnered with Ray-Ban to bring the iconic Wayfarers into the technological age with slightly thicker frames, two cameras, some speaker tech, microphones, and connectivity.
These smart glasses started out as a unique way to capture photos or videos, at times even more 'in the moment', given that you didn’t need to take your phone out and launch the camera. In recent months, Meta has infused these smart glasses with Meta AI, enabling you to look at something and simply say, “Hey Meta, what’s this?” and let it look, analyze it, and then provide you an answer. It’s pretty neat.
Now, though, at Meta Connect 2024, the team working on the Smart Glasses wants to make these even smarter – and if you guessed that they're doing that with AI, you’d be correct.
Language translations and visual reminders

Kicking things off with what might be the most helpful new feature: the Ray-Ban Meta Smart Glasses will get live language translation later this year. Similar to what Samsung accomplished with the Galaxy Buds 2 Pro or Google with the Pixel Buds Pro 2, the Ray-Ban Metas will be able to translate languages near-live, initially between English and Spanish, Italian, and French.
This could prove pretty helpful, and more natural than attempting to do this with earbuds in, as it’s baked into the smart glasses, which you might already be wearing daily if you’ve opted to install prescription lenses.
Furthermore, beyond asking Meta to set a reminder verbally, you can now set up reminders based on things that you see – and that the glasses can therefore see too. It could be as you’re getting milk out of the fridge and realize you’re almost out, or maybe a package that you left near your front door that you need to be sure you take with you. This feature should be rolling out sooner rather than later.
Similarly, you’ll now be able to scan QR codes for events, phone numbers, and even full contact information. If a QR code is visible, you’ll be able to ask Meta via the Ray-Ban Meta Smart Glasses to scan it – we imagine the information will then appear in the Android or iOS companion app.
An ambitious video step

Likely the most ambitious forthcoming feature, also set to arrive later this year, is Meta AI for video, meaning that Meta can view what you’re looking at in real time – not just an image snapshot – and provide clarity or answer questions. This could be helpful for navigating around a city, cooking a meal, completing a math problem, or helping you finish a Lego set.
This is likely a big step and would raise some privacy concerns, as it’s a live view from your glasses that's being processed immediately. You’ll also need the Ray-Ban Meta Smart Glasses to be connected to the internet via an iPhone or Android for this to work, as it would need to process the information in real time.
Still, though, it does likely give us an idea of where Meta is headed with the smart glasses category, and it’s great to see that Meta is continuing to roll out new features to the smart glasses. And that’s the good news here – these updates don’t, so far as we know, require new hardware. These are set to arrive as over-the-air updates for folks who already have the Ray-Ban Meta Smart Glasses, or who purchase them.
Another update that's on the way is integration with Amazon Music, Audible, iHeart, and Spotify, which will give you easier hands-free access to your favorite songs, artists, podcasts, and books. You’ll also see new Transitions lens options arriving from EssilorLuxottica, the eyewear giant behind brands ranging from Dolce & Gabbana to Oakley.
So, if you haven’t loved the available looks enough to get a pair yet, or want to freshen yours up, once those hit the scene it’ll be a good time to consider them again. We’ll be going hands-on with these new features, from language translation to Meta AI for video, as soon as we can, so stay tuned to TechRadar.
Bringing AI agents into the workforce will soon be as common as onboarding human employees, as they work together to make businesses smarter and more efficient, Nvidia CEO Jensen Huang has predicted.
Speaking at the recent Dreamforce 2024 event in San Francisco, Huang noted how integrating AI into the workplace will soon feel a lot more natural, as agents and assistants become a common sight in businesses everywhere.
In a fireside chat at the event, Huang told Salesforce CEO Marc Benioff that AI agents won’t replace human workers, but instead augment them, boosting productivity by removing time-consuming tasks such as data entry, freeing employees up for more productive work.
Onboarding AI

“It’s going to be a lot more like onboarding an employee than writing software,” Huang said. “It’s going to be a lot more like introducing and welcoming a team member, who will help you do something…and you’ll communicate with that person and with that agent, and explain what is the mission, show it examples and what the output would look like.”
AI agents were a major theme at Dreamforce 2024, as Salesforce took the wraps off its new Agentforce platform, a collection of AI-powered offerings looking to help businesses across the board.
The event also saw Nvidia and Salesforce announce a partnership that will bring together the Nvidia AI platform with Agentforce to provide even more powerful and useful insights and productivity boosts for workers, with Nvidia’s NIM microservices and NeMo offering customized models alongside the Salesforce platform AI tools.
This could be particularly useful in high-stakes situations such as crisis management, where a company might need to rapidly scale customer service interactions in case of a product recall or service outage, or allow for real-time adjustment of delivery dates in the case of extreme weather situations.
Elsewhere, Benioff and Huang joked about their respective usage of AI, with the Salesforce CEO saying he sometimes uses ChatGPT as a therapist, with the Nvidia chief replying, “It must be working - you look pretty chill.”
Quizzed by Benioff on what keeps him motivated, Huang replied that the opportunity offered by this period of rapid AI innovation was all he needed.
“I can’t really tell whether I’m running for food or running away from being food, but I’m running all the time, and I don’t know where that comes from,” he said.
“I realize that our company is in a once-in-a-lifetime position to be able to make a real contribution,” Huang added. “We now have the instruments, the tools, this capability called artificial intelligence, that will solve all of those other problems that we’ve been excited about ever since we were kids.”
“Nobody should miss the next decade…you’re not going to want to miss this movie."
Samsung says it has begun the mass production of its 1Tb quad-level cell (QLC) 9th-generation V-NAND, an industry first. The development follows the company's production of triple-level cell (TLC) 9th-generation V-NAND earlier in 2024, and reflects a broader industry shift from TLC to QLC technology to meet the growing demand for higher-capacity and more efficient storage solutions.
The new QLC V-NAND incorporates innovative technologies such as Channel Hole Etching, which enables a higher layer count through a double stack structure, resulting in approximately 86% higher bit density compared to the previous generation of QLC V-NAND. Samsung says the increase in density allows for greater storage capacity in the same physical space, addressing the needs of applications that require large amounts of data storage.
The company's Designed Mold, Predictive Program, and Low-Power Design technologies boost reliability by 20%, double write performance, improve input/output speed by 60%, and reduce power consumption during read and write operations by up to 50%.
A shift from TLC to QLC

The industry is experiencing a shift from TLC to QLC NAND flash memory, driven by the need for higher storage capacities and cost efficiencies. QLC NAND stores four bits per cell instead of three, effectively increasing storage density. This transition enables manufacturers to produce SSDs with larger capacities at potentially lower costs, which is particularly important for data centers and AI applications that require vast amounts of storage.
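To put that bits-per-cell jump in rough numbers, here's a minimal back-of-the-envelope sketch in Python. The cell count is made up for illustration, and note that Samsung's quoted 86% density gain also comes from higher layer counts and a tighter cell layout, not bits-per-cell alone:

```python
# Illustrative arithmetic only: how bits-per-cell affects raw NAND density.

def raw_bits(cells: int, bits_per_cell: int) -> int:
    """Total raw bits stored by `cells` NAND cells."""
    return cells * bits_per_cell

cells = 1_000_000          # hypothetical cell count, same die for both
tlc = raw_bits(cells, 3)   # TLC: three bits per cell
qlc = raw_bits(cells, 4)   # QLC: four bits per cell

gain = qlc / tlc - 1       # 4/3 - 1 ≈ 0.33, i.e. ~33% more raw bits
print(f"QLC stores {gain:.0%} more raw bits than TLC from the same cells")
```

The gap between that ~33% and the reported 86% is the contribution of the new generation's structural changes, such as the double-stack channel hole etching mentioned above.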
“Kicking off the successful mass production of QLC 9th-generation V-NAND just four months after the TLC version allows us to offer a full lineup of advanced SSD solutions that address the needs for the AI era,” said SungHoi Hur, Executive Vice President and Head of Flash Product & Technology at Samsung Electronics. “As the enterprise SSD market shows rapid growth with stronger demand for AI applications, we will continue to solidify our leadership in the segment through our QLC and TLC 9th-generation V-NAND.”
Applications of the QLC V-NAND will start with branded consumer products before extending into mobile Universal Flash Storage (UFS), PCs and server SSDs for customers, including cloud service providers.
Face it, no AI is complete until it has a voice, and now Meta AI has one and is ready to engage with you on your favorite Meta platform, including WhatsApp, Facebook, Instagram, and Messenger.
I had a chance to try Meta's new Llama-powered chatty capabilities at Meta Connect 2024, which is taking place September 24-26 at the iconic 1 Hacker Way Meta headquarters.
Admittedly, the conditions were suboptimal. Meta AI was on a phone out on the Meta campus, where people milled about and helicopters buzzed overhead. A Meta employee told me the phone's mic was on and Meta AI was listening, so I stood before it and talked.
"So I can just talk to it?" I asked.
Meta piped up, "Yes you can. I'm here to help. Just say what's on your mind, and I'll do my best to assist you. Go ahead."
Suddenly flummoxed, I couldn't think of a question, so I asked Meta AI if it knew where New York City is, and it immediately gave me a detailed answer.
As Meta AI talked, I interrupted and told it I was thinking of moving there, but I didn't know the best place.
"That's exciting," Meta AI responded, and began outlining the city's five boroughs. I interrupted again and told Meta AI I was considering Manhattan.
Without missing a beat, Meta AI told me about Manhattan's diversity.
(Image credit: Future / Lance Ulanoff)

Everything Meta AI said also appeared as text on the screen.
I asked Meta AI if it thought I could get a condo in Harlem for under $500,000. To my surprise, it said yes and gave me detailed examples.
At this point, there was a bit too much sound interference, and Meta AI did not hear me when I asked about a moving company or when I asked it to stop responding. It really seemed to enjoy going through Harlem condo opportunities.
By turning off the speaker for a second, we were able to regain control of Meta AI, which quickly gave me some moving company suggestions.
Even with that glitch at the end, this was an impressive little demo. Meta AI's speech capabilities are smart, understand context, and can pivot if you interrupt.
Meta AI's speech capabilities are rolling out now in the US, Canada, Australia, and New Zealand. It can chat, tell stories, and figure things out, like where to move and how to find a home within your price range.
Mark Zuckerberg has announced a ton of cool AI features at Meta Connect 2024 which apply to Meta AI, the chatbot found inside its popular social media apps like Facebook, Instagram DM, WhatsApp, and Messenger. One of the coolest new features he revealed is the ability to ask questions about photos and edit them in Meta AI.
Meta AI is now multimodal, which means it can ‘see’ photos in your chats and answer questions about them. So, you can simply post an image of a bird in your chat with Meta AI and ask it what kind of bird you’re looking at and you’ll get the right answer. But it goes beyond simply identifying animals and plants. Show it a photo of a meal and ask it how you’d make it and you’ll get a list of instructions as well as ingredients.
Intelligent backgrounds

One of the most fun things you can do with AI is manipulate your photos. This doesn’t just mean adding an Instagram filter or changing brightness levels. With AI you can do things with your photos that wouldn’t otherwise be possible, like adding or removing elements or changing the background.
Even more impressively, Meta AI can now edit your photos, too. That means that if you want to remove somebody from a photo you can get Meta AI to do it for you. Or maybe you just want to change the background to something else? No problem, Meta AI can do that too. It can even add things to your photos, so if you want to be standing next to a lion, you can make it happen without risking your life. If you want to see what you’d look like in a different outfit then Meta AI is your new fashion consultant.
Plus, if you want to reshare a photo from your Instagram feed to your Instagram Story, Meta AI’s new backgrounds feature is on hand to intelligently pick a background that will go with the image for your story.
We expect the ability to see and edit photos to roll out to Meta AI users in the US, Canada, Australia, and New Zealand over the next month.
Meta Connect 2024 is up and running, and Mark Zuckerberg has announced some impressive updates to Meta AI, the chatbot you can find inside its popular social media apps like Facebook and Instagram. One of the most impressive is the addition of a voice mode to Meta AI that works across all its social media apps.
Right now it seems like every big tech company is adding voice mode to their AI, and Meta AI isn’t being left behind. Voice mode means that you can hold a conversation with your smartphone and it will answer back in a human-like manner, using AI to generate the responses. While it's talking you can see a circular image on the screen that looks similar to Siri or ChatGPT's new Advanced Voice Mode.
John Cena is there, but you can't see him

(Image credit: Meta AI)

Thanks to the latest Meta AI update you’ll be able to talk to your smartphone in Messenger, Facebook, WhatsApp, and Instagram DM. What’s more, Meta AI will respond to you in the voice of one of your favorite celebrities! The selection of voices available includes Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell. So, you’ll be able to keep John Cena in your pocket, if you like.
Meta AI is used by 400 million people, with 185 million of them using it across Meta’s social media apps each week, so this move is putting voice mode in the hands of an incredible number of people. Meta AI’s voice mode rolls out in the US, Canada, Australia, and New Zealand over the next month.
Meta AI’s Imagine features are coming to Facebook, Instagram, and Messenger, so you can generate images right in your Facebook feed, Instagram Stories, or profile pictures.
Announced alongside a whole host of Meta AI updates including a new voice mode at Meta Connect 2024, Meta AI’s Imagine tools are set to take over your most beloved social media platforms. Straight from Instagram and Facebook, you’ll be able to generate an image of anything you can imagine (see what I did there?) and then share it with your friends.
Mark Zuckerberg also announced that Meta AI will suggest captions for your Stories on Facebook and Instagram, allowing you to completely generate posts with the power of Imagine.
Meta AI has more to offer

(Image credit: Meta)

Image generation isn’t the only new AI feature coming to Facebook, Instagram, and Messenger, however. New AI theme generation for your Messenger and Instagram DMs will allow you to create a personalized look for your apps with just the tap of a button.
And that’s not all: Meta is also testing AI-generated content in your Facebook and Instagram feeds, where you’ll see images created by Meta AI just for you, which the company says is curated based on your interests and current trends.
Meta is going all in on AI across its social media platforms, with loads of AI-powered features coming over the next few months. With so many new quality-of-life improvements across Facebook and Instagram – such as an AI translation tool, currently in testing, that will let you watch Reels filmed in other languages with ease – there’s lots to get excited about.
The competition is heating up among the best AI image generators, so we’re excited to see how Imagine built into Facebook compares to some of the heavy hitters such as Midjourney, Adobe Firefly, and even Image Playground when the latter launches on Apple devices later this year.
Meta just announced a new AI translation tool that will allow you to watch Facebook and Instagram Reels recorded in different languages, translated into your own.
Taking to the stage at Meta Connect 2024, Mark Zuckerberg revealed a whole host of new Meta AI features and one of the coolest is a work-in-progress that could change the way we all consume portrait video on social media.
Meta says the AI translation tool will automatically translate the audio of Reels, so you can enjoy content that’s been recorded in a different language. The tool uses automatic dubbing and lip syncing to ‘simulate the speaker’s voice in another language and sync their lips to match.’
The company is currently conducting small tests on its social media platforms where it’s translating Reels from English and Spanish for creators based in Latin America and the US — Meta plans to expand to more languages and creators in the future.
Language barrier? What language barrier?

Meta put a lot of focus on AI at Meta Connect 2024, introducing loads of new AI features across its social media platforms, but the new AI translation tool is one of the most exciting, and could completely change the way we consume Instagram and Facebook Reels. By automatically translating content and making it appear natural with dubbing and lip syncing, there could be a future where your Instagram algorithm surfaces content from other countries that’s easily watchable without subtitles.
As Meta tries to emphasize Instagram’s prowess as a portrait video platform against its rivals like TikTok, this new AI translation tool could prompt creators to opt for Meta’s platform due to its wider reach across multiple regions and languages.
Meta also announced a new voice mode for Meta AI, which will roll out in the US, Canada, Australia, and New Zealand over the next month.
Meta has launched the Meta Quest 3S (you can check out our hands-on Meta Quest 3S review for our thoughts on it), introducing us to a new affordable member of the Quest family – and also killing off three others. It’s time to officially say goodbye to the Meta Quest 2, Meta Quest Pro, and the 128GB model of the Meta Quest 3.
The Meta Quest 2 has been sold out everywhere since July, so this announcement isn’t a surprise – it already happened. It’s sad, but with the 3S here, there’s no reason to pick up a Quest 2, frankly. It was fantastic, and I loved it, but it’s time for the next generation of XR hardware.
The Meta Quest Pro has been hanging on, but its demise shouldn’t be a shocker. It’s never a good sign when a product drops one-third of its price in just six months on sale, and since the Quest 3 launch – a way cheaper headset that even performs better in a few key areas – the Quest Pro has very much faded into the background.
Meta says these devices will still be on sale until the end of the year or until its remaining stock runs dry – whichever comes first. So if you want one for your collection, act fast, but if I were you, I’d pay attention to a different Quest that’s being killed off. My favorite of the trio: the Meta Quest 3 128GB.
So long, Meta Quest Pro (Image credit: Meta)

Buy the 128GB Quest 3 while you can

As leakers reported the day before Meta Connect, the Quest 3 128GB model is also being discontinued. The 512GB Meta Quest 3 will remain on sale, however, at a reduced price of $499.99 / £469.99 (Australian pricing to be confirmed).
From now until the stock runs out, the 128GB Meta Quest 3 will be on sale for $429.99 (UK and Australian pricing to be confirmed), and it looks like this could be the best headset to buy right now, as it has all the upgrades of the full Quest 3 while still being fairly affordable. Plus, given that VR games and apps aren’t big files, 128GB is more than enough – especially as it’s so easy to delete and later redownload titles if you run out of storage.
The only downside of this deal is you’ll miss out on the Batman: Arkham Shadow purchase bonus included with the Meta Quest 3S models and 512GB Meta Quest 3. This is a slight incentive to opt for the 512GB model instead, but even if you buy it separately the Meta Quest 3 128GB model is still a solid deal.
Just make sure you decide quickly, as it’s not clear how soon Meta and third-party retailers will run out of stock.
Microsoft has just issued a fresh update for Windows 11 in testing, bringing some handy improvements, including changes to the Start menu and the lock screen, alongside a new file-sharing button.
The new features are currently being tried out by testers in the Release Preview channel for Windows 11 (23H2 and 22H2 builds), which is the last round of user testing that preview builds go through before final release. This means that we can expect these features to arrive pretty soon.
One move I’m pleased to see is Microsoft revisiting one of its recent changes to the Start menu - namely, the new Microsoft Account-related settings.
You may recall that the introduction of the new account manager section to the Start menu caused controversy because it buried a useful option under another layer of the menu. This was the option to sign out, which was shifted from being right there in the Start menu’s profile panel, to being hidden behind the three-dot menu (top-right) in the new account manager panel (that replaced the previous profile UI).
The good news is that Microsoft has been listening to the folks complaining that they don’t want to be forced to perform another click to sign out of their account, so in this preview build, the option is back where it was before, with no need for any extra clicking to reach it. Also, Microsoft notes that you can switch user profiles by clicking the three-dot icon and choosing one from the menu that appears.
Another part of Windows 11 that’s been modified is the lock screen, which now has media controls towards the bottom of the screen whenever media playback is underway on the PC.
Also, you will soon be able to share files stored on your device when they come up in Windows search results via a new share button.
These are not the only changes on the menu here, and you can see the full list of additions with more details on Microsoft’s Windows Insider blog post.
(Image credit: Future)
A minor update, yes - but a useful one nonetheless
While this isn’t the most dynamic update, it delivers some handy tweaks and bug fixes that should make the Windows 11 experience a little smoother, and as noted, I’m really pleased to see the change to the Start menu.
As for the new sharing function for files in Windows search, it might be of more use if Microsoft spent some time further improving and fine-tuning the core of the search experience - the process and the results. Let’s hope for more work in that respect.
The features mentioned here are also being tested for Windows 11 24H2, ahead of its launch which should be coming pretty soon.
FiiO – a newer player in the audio game that’s in the pleasing habit of delivering superb products at low prices – has just announced the first USB-C hi-res Bluetooth add-on that supports aptX Adaptive (including aptX Lossless), Snapdragon Sound, and Sony’s LDAC wireless tech. It works with the iPhone, as well as all kinds of other USB-C devices.
So if you've got an iPhone 15 or iPhone 16, plus a pair of the best wireless headphones that support these higher-quality Bluetooth options (up to 24-bit/96kHz) – which is, at the premium level, basically all of them except the AirPods Max – then you can finally unlock the extra dynamic range and detail of your lossless Apple Music or Tidal subscription.
We've seen USB-C dongles add this kind of tech before, but four key things make the FiiO immediately stand out: the small size compared to the best wireless DACs, so you can attach it to your phone and forget about it; the support for aptX, Snapdragon Sound and LDAC, covering 99% of the biggest headphones options (and FiiO says Bluetooth LE Audio support for the LC3 codec is planned in an update); support for multi-point pairing; and the price of just £42 / $44 (about AU$82).
(Image credit: FiiO)We've seen an even tinier and cheaper option than this in the form of the Creative BT-W3, but it's less elegant-looking and doesn't support aptX in its Adaptive flavor or LDAC.
Inside the FiiO BT11 is a "flagship" Qualcomm chip, which FiiO says offers "a quad-core processor architecture and two 240MHz Qualcomm Kalimba audio DSPs." You can use an app to control the device and to enable multi-point pairing, and the app will be able to update the BT11 in the future. FiiO says you can expect "the LC3 Codec, volume adjustment, indicator light control, channel balance adjustment, and pairing lists" to be added down the line – though I always recommend buying something based on the features it has now, not on promises.
The reason we're so hot on FiiO here on TR should be obvious if you've read our five-star FiiO FT5 review, our five-star FiiO R9 review, our 4.5-star FiiO M11S review, or our 4.5-star FiiO M23 review. They're all as good as products that cost a ton more when it comes to audio quality – the bang for your buck is just outrageous.
So it's great to have that in a dongle that can connect to the best phones, laptops, the PS5, the Nintendo Switch, and all kinds of other devices, bringing the best wireless audio quality you can get. iPhone-loving wireless audiophiles, your long nightmare is probably over.
FiiO BT11 dongle attached to an iPhone with some earbuds nearby (Image credit: FiiO)
The US Department of Justice has filed an antitrust lawsuit against payment service giant Visa for its alleged monopoly over the debit card network.
The DOJ says Visa has been engaging in anticompetitive business practices by suppressing competition and innovation, in turn violating Sections 1 and 2 of the Sherman Act.
It claims Visa controls over three in five US debit transactions, resulting in $7 billion in processing fees annually.
Visa accused of monopolistic practices
The US government’s lawsuit argues Visa has maintained its dominance of the payments market by striking up exclusionary agreements that penalize merchants and banks for using its competitors.
In doing so, US officials say, consumers have had to deal with inflated costs, while smaller payment processors and fintech companies have been unable to grow. The DOJ also accuses Visa of partnering up with would-be competitors, offering generous monetary incentives in order to “insulate itself” from competition.
Attorney General Merrick B. Garland confirmed: “We allege that Visa has unlawfully amassed the power to extract fees that far exceed what it could charge in a competitive market.”
Principal Deputy Associate Attorney General Benjamin C. Mizer added: “Today’s action against Visa reminds those who would stifle competition rather than competing on price or investing in innovation that the Justice Department will never hesitate to enforce the law on behalf of the American people.”
This isn’t the first time that Visa has come under fire in the US. In 2020, the DOJ set out to stop the company from acquiring fintech app technology company Plaid. The proposed $5.3 billion deal was indeed called off.
TechRadar Pro has asked Visa to respond to the DOJ’s announcement, but did not immediately hear back.
Google’s attempt to block infostealer malware from grabbing data stored in its Chrome browser seems to have been short-lived, with multiple variants claiming to have already successfully bypassed it.
In late July 2024, Google released Chrome 127, which introduced App-Bound Encryption, a feature which looked to ensure sensitive data stored by websites or web apps was only accessible to a specific app on a device. It works by encrypting data in such a way that only the app that created it can decrypt it, and was advertised as particularly useful for protecting information like authentication tokens or personal data.
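To make the idea of app-bound encryption concrete, here’s a minimal Python sketch of deriving a per-app key so that only the app that sealed the data can unseal it. This is a toy illustration only: the function names (derive_app_key, seal, open_sealed) are invented for the example, the "encryption" is deliberately simplified and not secure, and Chrome’s real mechanism is far more involved, validating the calling app rather than trusting a self-declared identity string.

```python
import hashlib
import hmac
import secrets

def derive_app_key(master_secret: bytes, app_id: str) -> bytes:
    """Derive a per-app key by mixing the app's identity into the KDF."""
    return hmac.new(master_secret, app_id.encode(), hashlib.sha256).digest()

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Toy authenticated 'encryption': SHA-256 keystream plus an HMAC tag."""
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(key + nonce).digest()
    # Repeat/truncate the keystream to the plaintext length (toy only).
    ks = (stream * (len(plaintext) // len(stream) + 1))[:len(plaintext)]
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key: bytes, blob: bytes) -> bytes:
    """Verify the tag with the caller's key, then decrypt."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong app identity: cannot decrypt")
    stream = hashlib.sha256(key + nonce).digest()
    ks = (stream * (len(ct) // len(stream) + 1))[:len(ct)]
    return bytes(c ^ k for c, k in zip(ct, ks))

master = secrets.token_bytes(32)
chrome_key = derive_app_key(master, "chrome.exe")
blob = seal(chrome_key, b"session-cookie=abc123")

# The app that created the data derives the same key and can decrypt it...
assert open_sealed(chrome_key, blob) == b"session-cookie=abc123"
# ...but a different app derives a different key and the unseal fails.
other_key = derive_app_key(master, "stealer.exe")
try:
    open_sealed(other_key, blob)
except ValueError:
    print("decryption blocked for other app")
```

The weak point the bypasses exploit isn’t the cryptography itself but the surrounding trust model – if malware can run with enough privileges, or impersonate or piggyback on the browser, the key derivation no longer protects anything.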
Now, mere months after it was introduced, the protection mechanism has already been cracked by some of the most popular infostealers out there, BleepingComputer reports, claiming the likes of MeduzaStealer, Whitesnake, Lumma Stealer, Lumar, Vidar, and StealC have all introduced some form of bypass.
Prioritizing impact
Some of the upgrades are also confirmed to be working with Chrome 129, the newest version of the browser available at press time. TechRadar Pro has reached out to Google for comment, and will update this article if we hear back.
“Added a new method of collecting Chrome cookies,” Lumma’s developers allegedly told its customers recently. “The new method does not require admin rights and/or restart, which simplifies the crypt build and reduces the chances of detection, and thus increase the knock rate.”
Exfiltrating information from browsers is a key feature for most prominent infostealers out there. Many people save things like passwords or payment data in their browsers for convenience and quick access, and many use cryptocurrency wallet add-ons for their browsers as well. By stealing cookies, crooks can even log into services protected by multi-factor authentication (MFA). All of this makes browsers one of the most important targets during data theft.
Windows 11 could get another feature to develop a tighter level of integration between your PC and smartphone, and it’s called ‘Hand Off.’
Windows Latest noticed that a Windows tester on X, @techosarusrex, pointed out the feature in a tweet, having discovered the hidden capability in a recent Beta channel preview of Windows 11 (build 22635).
Windows 11 will be getting some new continue from device/handoff capabilities (seen in build 22635.4225) https://t.co/xxD7GiQz5J pic.twitter.com/kOMNYUJGDm – September 21, 2024
Windows Latest observes that the Hand Off feature looks to be related to the ‘Continue from Phone’ functionality the tech site previously wrote about a few months back (also in testing).
Whatever the case, the screenshots provided on X give us more clues as to how it might work, as there are settings to control which apps can use Hand Off (only OneDrive right now), as well as a toggle to turn on (or off) the actual Hand Off feature itself.
The overall idea is that when a Hand Off is available - meaning you can carry on what you were doing on your phone when you return to sit at your PC - an icon will appear in the taskbar to notify you. As shown, once you’re back at your computer, it’ll let you pick up work on, for example, a document you were previously editing on your phone.
As far as we can tell, this ability won’t be limited to Android phones, and you should be able to carry on your activity from an iPhone as well (we don’t see why not). While OneDrive is the only option to use with Hand Off currently, this is very early (still hidden) work in testing - so broader support for more apps could be brought in yet.
(Image credit: Shutterstock/sergey causelove)
How does Hand Off fit in with Continue from Phone?
Exactly how Continue from Phone and Hand Off are related remains to be seen - the latter could even be the new name for the former. Clearly they’re closely tied together, but we’ll just have to wait for more details on exactly how Hand Off will work (assuming it ever goes live in testing, that is). It’s possible Hand Off could work in both directions, meaning you can pick up activity from your PC on your phone, too.
Some of you may have noticed the similarity between this hidden Windows 11 feature and Apple’s ‘Handoff’ functionality, which allows users to seamlessly continue activities between their iPhone, iPad, and Mac. With Apple’s Handoff, you can start an email on your iPhone, for instance, and pick up where you left off on your Mac.
Windows 11’s Continue from Phone and Hand Off features aim for a similarly seamless experience, but given that Apple’s Handoff is tied to its iCloud ecosystem, it’ll be interesting to see whether Microsoft’s take will only be usable with OneDrive – or, as mentioned above, whether support will be made more expansive.
Smoother and more robust integration between Windows 11 and smartphones will give users a reason to stick with the OS, and I think if Microsoft focuses on improving this aspect, it will prove fruitful.
Independent auditors have confirmed that Opera's free VPN never logs what you do online. This is especially great considering that the security tool comes built into one of the best web browsers on the market.
Security experts at leading cybersecurity auditing firm Deloitte inspected Opera VPN's technical infrastructure to ensure users' data linked to their activities is never stored or logged, as stated in Opera's no-log policy. Auditors did not identify any instances of non-compliance.
"The audit reaffirms our commitment to protecting user privacy by certifying that your data is never logged or stored when you use our free VPN," Jan Standal, VP at Opera, told me. "We have had a no-log policy since we launched the service in 2016. Since then it has become the norm for major VPN providers to do third-party audits as part of their transparency commitment."
We’re excited to announce that we completed an independent audit of our no-log policy for our free browser VPN, conducted by Deloitte! The audit reaffirms our commitment to protecting user privacy by certifying that your data is never logged or stored when you use our free VPN. pic.twitter.com/BDrsWtC5In – September 25, 2024
Deloitte spent time thoroughly reviewing the controls in place and the technical configuration of the system and VPN infrastructure. It did the same with Opera's internal documentation, policies, and procedures.
After careful consideration, experts could confirm that Opera VPN respects the company's strict no-log policy.
While Opera also offers a premium version, the free virtual private network tool comes directly integrated into the Opera browser apps for desktop, Android, and iOS. This means you don't need to download any extra apps or extensions to start browsing more privately. Even better, Opera VPN doesn't require any registration, so you can use it completely anonymously.
The importance of no-log VPNs
As a privacy expert, I particularly value a strict no-log policy when I assess any VPN software – even better when this is regularly independently audited. Such a feature is especially crucial if the main reason you're using a virtual private network in the first place is boosting your privacy.
As mentioned earlier, a no-log VPN ensures that none of your personal data is ever recorded. So, while some logs, like the number of users connecting to the same server and the email address associated with a user's account, are inevitable as they are necessary for the software to work properly, a no-log policy is your guarantee that no data linked to you or your activities is ever collected.
This is very advantageous for a few reasons. For example, if a hacker manages to compromise your VPN account or provider, there will be almost no sensitive data that could be compromised. The same goes if authorities ordered the company to disclose users' data. These details simply won't exist.
The importance of this feature has already been proven in real life on a few occasions. In 2019, Proton was unable to help authorities identify a Proton VPN user as it didn't store any identifiable information. More recently, Swedish authorities were left empty-handed after an inconclusive police raid on Mullvad's servers last year.
Now you can be sure that when using Opera VPN, too, everything you do online will remain as private as it can be.
Microsoft has unveiled a new tool which looks to stop AI models from generating content that is not factually correct - more commonly known as hallucinations.
The new Correction feature builds on Microsoft's existing ‘groundedness detection’, which essentially cross-references AI text to a supporting document input by the user. The tool will be available as part of Microsoft’s Azure AI Safety API and can be used with any text generating AI model, like OpenAI’s GPT-4o and Meta’s Llama.
The service will flag anything that might be an error, then fact-check it by comparing the text against a source of truth via a grounding document (e.g. an uploaded transcript). This means users can tell the AI what to treat as fact by supplying grounding documents.
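As a rough illustration of that flag-then-check flow, here’s a toy Python sketch that marks sentences in a model’s answer as ungrounded when too few of their content words appear in the grounding document. This is purely hypothetical - Azure’s actual groundedness detection uses language-model reasoning rather than word overlap, and the function names here are invented for the example.

```python
def is_grounded(claim: str, source: str, threshold: float = 0.6) -> bool:
    """Crude check: what fraction of the claim's content words appear in the source?"""
    stop = {"the", "a", "an", "is", "are", "was", "were",
            "of", "to", "in", "and", "at", "it", "with"}
    claim_words = {w.strip(".,").lower() for w in claim.split()} - stop
    source_words = {w.strip(".,").lower() for w in source.split()}
    if not claim_words:
        return True
    overlap = len(claim_words & source_words) / len(claim_words)
    return overlap >= threshold

def flag_ungrounded(answer: str, grounding_doc: str) -> list[str]:
    """Return sentences in the model's answer that lack support in the document."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if not is_grounded(s, grounding_doc)]

doc = "The Quest 3S launches in October at 299 dollars with Touch Plus controllers."
answer = ("The Quest 3S launches in October at 299 dollars. "
          "It ships with a bundled elite strap.")

# Only the second sentence has no support in the grounding document.
print(flag_ungrounded(answer, doc))
```

The real service goes a step further than flagging: its Correction stage rewrites the unsupported span so the output agrees with the grounding document before the user ever sees it.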
A stopgap measure
Experts warn that while the existing state of play may be useful, it doesn’t address the root cause of hallucinations: AI doesn't actually ‘know’ anything, it only predicts what comes next based on the examples it was trained on.
"Empowering our customers to both understand and take action on ungrounded content and hallucinations is crucial, especially as the demand for reliability and accuracy in AI-generated content continues to rise," Microsoft noted in its blog post.
"Building on our existing Groundedness Detection feature, this groundbreaking capability allows Azure AI Content Safety to both identify and correct hallucinations in real-time before users of generative AI applications encounter them."
The launch, available in preview now, comes as part of wider Microsoft efforts to make AI more trustworthy. Generative AI has struggled so far to gain the trust of the public, with deepfakes and misinformation damaging its image, so updated efforts to make the service more secure will be welcomed.
Also part of the updates is ‘Evaluations’, a proactive risk assessment tool, as well as confidential inference. This will ensure that sensitive information remains secure and private during the inference process - which is when the model makes decisions and predictions based on new data.
Microsoft and other tech giants have invested heavily in AI technologies and infrastructure and are set to continue to do so, with a new $30 billion investment recently announced.