Sora, OpenAI's new AI video generation platform, which finally launched on Monday, is a surprisingly rich platform that offers simple tools for almost instantly generating shockingly realistic-looking videos. Even in my all-too-brief hands-on, I could see that Sora is about to change everything about video creation.
OpenAI CEO Sam Altman and company were wrapping up their third presentation of their planned "12 Days of OpenAI," but I could scarcely wait to exit that live and, I assume, not AI-generated video feed to dive into today's content-creation-changing announcement.
The announcement crew. (Image credit: Future)

Ever since we started seeing short Sora clips created by chosen video artists and shared by OpenAI, I and anyone with even a passing interest in AI and video have been waiting for this moment: our chance to touch and try Sora.
Spoiler alert: Sora is stunning but also so massively overloaded I couldn't create more than a handful of AI video samples before the system's servers barked that they were "at capacity." Even so, this glimpse was so, so worth it.
Start with your birthday. (Image credit: Future)

Sora is important enough that its generative models do not live inside the ChatGPT large language model space or even inside OpenAI's homepage. The AI video-generation platform warrants its own destination at Sora.com.
From there, I logged into my ChatGPT Plus account (you need at least that level to start creating up to 50 generations a month; Pro gets you unlimited). I also had to provide my age (the year is blurred because I am vain).
You can build new videos off these ideas. Note the prompt field at the bottom. (Image credit: Future)

The landing page is, as promised, a library grid of everyone else's AI-generated video content. It's a great place to seek inspiration and to see the stunning realism and surrealism achievable through OpenAI's Sora models. I could even use any of those videos as a starting point for my own creation by "Remixing" one of them.
I chose, though, to generate something new.
There is a prompt field at the bottom of the page that lets you describe your video and set some parameters. That field includes options like the aspect ratio, resolution, duration, and the number of video options Sora would return for you to choose from. There's also a style button that includes options like "Balloon World," "Stop Motion," and "Film Noir."
I'm a fan of film noir and am intrigued by the idea of "Balloon World," but I didn't want to hamper the speed in any way, so I instead started typing my prompt. I asked for something simple: a middle-aged guy building a rocket ship near the ocean under a moonlit sky. There'd be a campfire nearby and a friendly dog. It was not a detailed description.
These presets look like fun. (Image credit: Future)

I hit the up arrow on the right-hand side of the prompt box, and Sora got to work.
Within about a minute, I had two five-second video options. They looked realistic. Well, at least one of them did. One clip featured a golden retriever with an extra tail where its head should've been. Over the course of the video's five-second runtime, the extra tail did become a head. The other video was less distressing. In fact, it was nearly perfect. The problem was the rocket ship: it was a model, not something my character could fly in.
I got two options, only one of which was useful. (Image credit: Future)

At this point, I could edit my prompt and try again, view the video's storyboard, blend it with a different video, loop it, or remix it. I chose the video with the normal dog and then selected remix.
You can do a light remix, a subtle one, a strong one, or even a custom remix. My system defaulted to a strong remix, and I asked for a larger rocket, one large enough to take the man to the moon. I also wanted it repositioned behind him and finally asked for the campfire to be partially visible.
This was my first AI video. (Image credit: Future)

The remix took almost five minutes, resulting in another beautiful video. Sure, Sora knows nothing about spaceflight or rocket science, but it got the composition right, and I can imagine how I could nudge this video in the right direction.
In fact, that was my plan, but when I tried another remix, Sora complained it was at capacity.
A gif of my remix. Note the fire. (Image credit: Future)

I also tried using Storyboard to create another video. In this case, I entered a prompt that became the first board in my storyboard; Sora automatically interpreted this and then let me add additional beats to the video via additional storyboards. I had a video in mind of a "Balloon World" scene with two characters sharing a romantic pasta dinner, but again, Sora was out of capacity.
The storyboard tool looks pretty powerful. (Image credit: Future)

I wanted to try more and see, for instance, how far you could take Sora; OpenAI said it's starting off with "conservative" content controls, which may mean things like nudity and violence would be rejected outright. But you know, AI prompt writers always know how to get the best and worst out of generative AI. I think we'll just have to wait and see what happens on this front.
My AI video library. (Image credit: Future)

Server issues aside, it's clear Sora is set to turn the video creation industry on its head. It's not just its uncanny ability to take simple prompts and create realistic videos in a matter of minutes; it's the wealth of video editing and creation tools available on Day 1.
I guarantee you the model will get more powerful, the tools even smarter, and servers more plentiful. I don't know exactly what Sora means for video professionals worldwide, but the sooner they try this, the faster they'll get ready for what's to come.
Watch closely for the extra tail. (Image credit: Future)

Cybersecurity researchers from Mandiant claim to have discovered a new way to get malware to communicate with its C2 servers through the browser, even when the browser is isolated in a sandbox.
There is a relatively new method of protecting against web-borne cyberattacks, called "browser isolation". It makes the victim's browser communicate with another browser located in a cloud environment or a virtual machine. Whatever the victim inputs is relayed to the remote browser, and all they get in return is the visual rendering of the page. Code, scripts, and commands all get executed on the remote device.
One can think of it as browsing through the lens of a phone’s camera.
Limits and drawbacks

But now, Mandiant believes that command and control (C2) servers can still talk to the malware on the infected device, despite the inability to run code through the browser: via QR codes. If a computer is infected, the malware can read the pixels rendered on the screen, and if they form a QR code, that is enough for the program to receive instructions and run different actions.
Mandiant prepared a proof-of-concept (PoC) showing how the method works on the latest version of Google Chrome, relaying commands to the malware through Cobalt Strike's External C2 feature.
The method works, but it's far from ideal, the researchers added. Since the data stream is limited to a maximum of 2,189 bytes per QR code, and since there is roughly 5 seconds of latency per transfer, the method cannot be used to send large payloads or facilitate SOCKS proxying. Furthermore, additional security measures, such as URL scanning or data loss prevention, may render this method completely useless.
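Those two figures alone show why the channel is so constrained. Here is a back-of-envelope sketch of the implied bandwidth, using only the limits Mandiant reported; the 500KB payload is a hypothetical example, not a figure from the research:

```python
# Figures from Mandiant's report: at most 2,189 bytes per QR code,
# with roughly 5 seconds of latency per transferred frame.
MAX_BYTES_PER_QR = 2189
SECONDS_PER_FRAME = 5

throughput = MAX_BYTES_PER_QR / SECONDS_PER_FRAME  # ~438 bytes/sec

payload_kb = 500  # hypothetical mid-sized payload, for illustration
frames = -(-payload_kb * 1024 // MAX_BYTES_PER_QR)  # ceiling division
minutes = frames * SECONDS_PER_FRAME / 60

print(f"{throughput:.0f} B/s, {frames} frames, ~{minutes:.0f} min")
```

At well under half a kilobyte per second, even a modest payload would take the better part of twenty minutes of on-screen QR codes, which is why the researchers rule out large transfers and SOCKS proxying.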
Still, there are ways the method could be abused to run destructive malware attacks. Therefore, IT teams are advised to still keep an eye on the flow of traffic, especially from headless browsers running in automation mode.
Via BleepingComputer
Micron recently unveiled the 6550 ION SSD, marking the launch of the industry’s first 60TB storage device featuring a PCIe 5.0 x4 interface.
In an announcement, the firm revealed the new SSD is tailored specifically to handle data-heavy applications as well as AI training and inference workloads.
Speaking at the time, Alvaro Toledo, vice president and general manager of Micron’s data center storage group, said the drive boasts extreme performance and capacity alongside industry-leading energy efficiency, calling it, "a game-changer for high-capacity storage solutions to address the insatiable capacity and power demands of AI workloads."
Micron has been highly vocal about the capabilities of the new ION SSD, so here’s everything you need to know about the 6550 ION SSD.
Under the hood of the 6550 ION SSD

As mentioned by Toledo, the new drive comes in an E3.S form factor, meaning it offers “best-in-class” storage density, and boasts 232 active layers. According to Micron, this reduces rack storage requirements by up to 67%.
The 6550 also provides users with read and write speeds of 12GB/s despite operating on just 20 watts of power. This, the company noted, makes it 20% more energy efficient than comparable drives currently available in the market.
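Taking Micron's headline numbers at face value, the throughput-per-watt works out as follows (a quick sketch using only the figures above; the competitor baseline behind the 20% claim is not specified):

```python
# Efficiency implied by Micron's stated figures:
# 12 GB/s of read/write bandwidth on a 20 W power budget.
read_gbps = 12
watts = 20
efficiency = read_gbps / watts
print(f"{efficiency} GB/s per watt")  # 0.6 GB/s per watt
```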
Micron said users do have the option to operate the drive at 25 watts, though it expects this mode to be used by only a small portion of customers.
Compared to competing 60TB drives, the company also highlighted a number of key advantages, including:
With power efficiency a key talking point for the new SSD, the company revealed it also relies on Active State Power Management (ASPM), meaning the 6550 ION consumes only 4 watts in low-power modes. Micron added that the drive also offers 20% better idle efficiency than others on the market.
Performance built for AI workloads

A key aspect of the 6550 ION is its ability to run AI workloads, according to Micron.
The drive boasts 147% higher performance for NVIDIA® Magnum IO GPUDirect Storage (GDS) compared to competing models, while also offering 104% better energy efficiency in this regard.
Similarly, with 30% higher efficiency in deep learning Unet3D testing and a 151% improvement in completion times for AI model checkpointing, the SSD provides enterprises with a powerful piece of hardware tailor-made for the AI era.
Biomemory, a French startup established in 2021, has long been working to develop DNA-based data storage technology.
It was the first company to make a DNA storage device available to the general public, marking an early step in commercializing this technology. Biomemory's approach involves encoding digital data within synthesized DNA strands by translating binary code into the four DNA bases: A, C, G, and T. Data can then be retrieved by sequencing the DNA and converting the bases back into binary.
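To make the idea concrete, here is a minimal sketch of the textbook two-bits-per-base mapping; Biomemory's actual encoding scheme is not public, so the bit-to-base table below is an illustrative assumption:

```python
# Illustrative 2-bits-per-base mapping (an assumption, not Biomemory's
# actual scheme): every byte of data becomes exactly four DNA bases.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Translate binary data into a DNA base string for synthesis."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Convert a sequenced strand back into the original bytes."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(encode(b"Hi"))  # CAGACGGC
```

At two bits per base the round trip is lossless, which is why sequencing the strand recovers the original data exactly; real systems add error-correcting redundancy on top of a mapping like this.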
DNA storage is viewed as a potential solution to the growing global demand for storage, driven by increasing data generation. It is estimated that by 2025, humanity will produce 175 zettabytes of data, a figure that challenges the capacity and sustainability of existing storage methods. DNA’s compact and durable nature offers an alternative that could reduce spatial and environmental footprints while providing long-term stability.
Funding secured

A number of startups have entered the DNA storage space in recent years, including Catalog, Ansa Biotechnologies, and Iridia in the United States, as well as Helixworks, DNA Script, and BioSistemika in Europe. Biomemory is focusing on creating end-to-end solutions for data centers, using bio-sourced DNA fragments that are designed to last for thousands of years without requiring energy for maintenance.
To further its efforts, Biomemory recently secured $18 million in Series A funding.
“This investment marks a pivotal moment for Biomemory and the future of data storage,” said Erfane Arwani, CEO and Co-founder of the startup. “With our DNA storage technology, we’re not just addressing today’s data challenges - we’re building solutions that will sustain the ecosystem for the next century and beyond. By sharing this value with our partners and collaborators, we aim to collectively advance the sector and foster a thriving data storage ecosystem.”
Biomemory intends to use the funds to develop its first-generation data storage appliance, optimize its biotech processes, and accelerate commercialization. Additional goals include forming partnerships with industry players and cloud providers, and recruiting experts in molecular biology and engineering.
The technology offers the potential to store all of humanity’s data in a single data center rack, and Biomemory plans to scale its molecular storage solutions to exabyte capacity by 2030, listing sustainability and durability as its key priorities.
Love hi-res audio but haven't heard of Activo? That's okay, you'll almost certainly know its parent, Astell & Kern. Last year, the sub-brand's inaugural product, the Activo P1 music player, truly wowed us for sound-per-pound value, proving that you don't have to pay an A&K premium to get excellent portable sound. It was so good, in fact, that it quickly made its way onto our best MP3 player buying guide.
And now the South Korean company has announced the hotly anticipated in-ear monitors to go with it. They certainly look the part, which checks out when you know that these wired earbuds are a collaboration with Singaporean brand, DITA Audio.
A&K says the partnership brings together "over 45 years of expertise in the audio industry between Activo and DITA, culminating in the perfect earbud match for the newly released Activo P1 Digital Audio Player."
Activo Q1: specs, pricing and availability

The Activo Q1 is, the company tells us, a hybrid IEM featuring DITA’s new PM1+ 9.8mm dynamic driver with brass housing, plus a single Knowles balanced armature driver.
DITA Audio has a lovely habit of custom-designing cables to suit every earbud model it's had a hand in, which means the Q1 is a unique proposition – and the white coating certainly makes the earbuds look like they're going to be perfect sonic partners to the wallet-friendly Activo P1.
To drill down into that cable a little more, it's the MOCCA2 cable from Cardas, made in the USA. This is custom-built and constructed of 32 strands of Cardas conductors per cable, then twisted to DITA’s own specifications. Your purchase also includes both 4.4mm balanced and 3.5mm single-ended connecting plugs – the P1 has both of these ports, along its top edge.
And in case the earbuds look a little dainty, also included with the Q1 is an Activo x DITA branded systainer mini hard case made by Tanos, plus five different sizes of eartips to help you achieve the optimum fit.
Have the player, now want the earbuds? You're in luck! The Activo Q1 is available to purchase from Amazon today and is priced at £299 / $349 / €399 (around AU$590). Will they join our collection of the best wired earbuds we've ever tested? We're working on it – when we know, so shall you.