OM System just doubled down on its retro appeal with a new OM-3 mirrorless camera. It's a stunning take on the original Olympus OM-1 film SLR from the 1970s, and the closest fans are getting to a reboot of 2016's popular digital Olympus Pen-F.
Costing £1,699 body-only, or £1,999 with the 12-45mm F4 Pro lens (US and Australia pricing to follow), the OM-3 sits between the flagship OM-1 II and the enthusiast-level OM-5. It delivers OM System features we already know: the same stacked 20MP Micro Four Thirds sensor and TruePic X processor as the OM-1 II, together with class-leading in-body image stabilization rated up to 7.5EV and quad-pixel autofocus with AI subject detection, all wrapped in a robust, IP53-rated weatherproof body.
The OM-3 also features a creative dial à la Pen-F. Olympus was ahead of the curve with the Pen-F, since we now see the likes of the Fujifilm X-T50 and Panasonic Lumix S9 capitalizing on the interest in custom color profiles with dedicated controls providing quick access to a catalog of custom looks.
In the OM-3, we get a creative dial with four modes: Color Profile, Monochrome Profile, Art Filters, and Color Creator. The Color Profile mode has four customizable settings that emulate film looks, as does the Monochrome Profile with its four black-and-white looks.
There's much to like about the OM-3. It packs the same power as the pricier OM-1 II into a more affordable, retro body with direct access to key features. This is a camera that OM System fans in particular have been waiting for, and a compelling Fujifilm alternative.
(Image credit: OM System)

More than a pretty face

The OM-3 is a stunner that weighs under 500g and is smaller than the OM-1 II, but its appeal isn't just surface-deep. It's a highly robust metal camera that's dustproof, splashproof and freezeproof, plus it has a well thought-out design with direct access to many of its powerful, modern features.
There's a new dial for switching between photo, video, and slow & quick motion options, with slow-motion recording at up to 60fps in 4K and up to 120fps in Full HD. OM System has also introduced new Cinema modes with Log color profiles for video, which maximize the camera's dynamic range for grading clips later.
There's also a new button that accesses the computational photography features directly, putting OM System's full suite of modes to hand: Live ND filters with six levels of strength from ND2 to ND64, the graduated ND filter effect introduced in the OM-1 II, live composite for light trails, a High Res Shot mode that boosts resolution to up to 80MP, and a focus stacking mode. These are clever features that minimize the accessories you need and the time spent editing at a computer.
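For a sense of how Live ND works under the hood: rather than screwing a physical filter onto the lens, the camera composites a burst of short exposures so that motion smears as it would in one long exposure. OM System hasn't published its pipeline, so the following is only a minimal Python sketch of the frame-averaging idea, with the 64-frame burst and file names as illustrative assumptions:

```python
import numpy as np
import imageio.v3 as iio

# Conceptual sketch of Live ND as frame averaging (not OM System's code).
# Each ND stop doubles the frame count: ND2 is one stop (2 frames),
# ND64 is six stops (64 frames), since 2**6 == 64.
def live_nd(frames: list[np.ndarray]) -> np.ndarray:
    """Average a burst of identically exposed frames to mimic a long exposure."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0).astype(frames[0].dtype)

# Illustrative usage: 64 frames shot at 1/60s blur motion roughly like
# a single one-second exposure, without overexposing the image.
frames = [iio.imread(f"burst_{i:02d}.png") for i in range(64)]
iio.imwrite("live_nd64.png", live_nd(frames))
```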
Price-wise, the OM-3 is pitted against the likes of the Fujifilm X-T5, despite the OM-3's lower pixel count and smaller sensor – which will always be a strike against it for some. However, I can't think of a faster camera for the money – this 120fps-shooting stunner could be the ultimate travel and outdoors camera, especially for those with an eye for design. OM System also launched mark II versions of three of its lenses alongside the camera, all of which add weather-sealing to the originals: the M.Zuiko Digital 17mm F1.8 II, the 25mm F1.8 II and the ED 100-400mm F5.0-6.3 IS II. I can see the 17mm lens in particular being a lovely pairing with the OM-3 for street photography. We're currently conducting a full OM-3 review, coming soon.
(Image credit: OM System)

TikTok parent company ByteDance is showing off a new AI video creator that can produce vivid videos of people talking, singing, and moving around from a single photograph. The new OmniHuman model can bring an image to life with eerily accurate body movements, facial expressions, and gestures.
OmniHuman’s breakthrough involved training on more than 18,700 hours of video. The AI can now mimic how humans move, speak, and interact in videos. Notably, this AI can create fully moving characters rather than just animating a face or upper body. That means a single picture can be turned into a video of someone giving a speech, dancing, or even playing an instrument.
The result is a very realistic video, whether the character is a human from a photograph or one from a more stylized painting.
OmniHuman everywhere

If and when ByteDance does make OmniHuman available, it's easy to imagine it blowing up on TikTok. The company already offers an AI video-maker named Jimeng on the platform, and something like OmniHuman could entice many more people to play with TikTok and its other features.
Of course, ByteDance won't enter the space without competition. OpenAI's Sora has drawn accolades and is a big name in the AI video space, but there are plenty of others, such as Pika, Runway, Pollo, and Luma Labs' Dream Machine.
There are plenty of potential uses for ByteDance's model, whether recreating actors of the past for new movies or teaching students history from the simulated mouths of historical figures. Even digital avatars for social media and gaming could become more lifelike, adapting in real time based on user input.
OmniHuman is still a research project for now, but the fact that ByteDance is already showcasing its capabilities suggests that practical applications aren’t far behind.
If you’re looking for the most immersive visual and sound experience when watching Super Bowl LIX, you might be very thankful you subscribe to Comcast – or wish you did – this Sunday, February 9, 2025. Comcast is scoring a touchdown as, for the first time, the provider will broadcast the Super Bowl with Dolby Atmos sound.
Best of all, it comes alongside Dolby Vision – not a first, but the perfect pairing – made possible by a partnership between Comcast and Dolby. You’ll need to be subscribed to Xfinity X1 to enjoy this ‘Enhanced 4K’ presentation, which Comcast describes as “an unmatched viewing experience with the best picture and audio quality delivered to the home in the fastest way possible so the action customers see in their living room is only seconds behind the game unfolding in New Orleans.”
That feed will end up on the big screen through a Comcast X1 equipment box or the Xfinity Stream app running on one of the best streaming boxes or sticks. The app is also compatible with iOS and Android, so you can watch it on your phone or tablet.
A look at the Comcast Xfinity X1 interface on a TV from one of the X1 cable boxes. (Image credit: Comcast)

Even so, the best way to experience the Super Bowl in Dolby Vision and Dolby Atmos would be on a big screen – like one of TechRadar’s picks for the best TV – and with one of the best soundbars or home theater setups. This way, you can feel the immersion as the audio from the game gets presented in a full Dolby Atmos mix.
Paired with a big, crisp 4K TV, you’ll likely feel as if you’re inside the Superdome in New Orleans. Of course, Fox Sports will still present the game in 4K through Comcast and other providers, including for free on Tubi. You’ll also want your TV – and streaming box – to support the Dolby Vision visual format and the Dolby Atmos audio format. So yes, some specific hardware beyond an X1 subscription is required: you'll need either the Xi6, XiOne, or XG1v4 Comcast box, though the XG1v4 doesn't yet support Dolby Vision.
The whole idea with Comcast’s Enhanced 4K product, though, is the best possible resolution – aka 4K – delivered with super low latency in both Dolby Vision and Dolby Atmos. Previous Super Bowls streamed on X1 were broadcast in Dolby Vision, and Comcast’s coverage of the 2024 Paris Olympics added Dolby Atmos; this is the first Super Bowl to check off all the Dolby boxes. Nor is it the first sports broadcast presented in Dolby Atmos; Apple regularly streams all MLS Season Pass matches on Apple TV Plus with a Dolby Atmos mix.
Still, this is an exciting test for Comcast’s Enhanced 4K product, as the Super Bowl will be one of the most-watched events of the year. And if you have the proper setup at home, you’ll be in for a treat as the Kansas City Chiefs face the Philadelphia Eagles in Super Bowl LIX.
And if you're looking to upgrade your home entertainment setup before the big game, check out the best Super Bowl TV deals.
Gigabyte, through its subsidiary Giga Computing, is offering qualified individuals and organizations the opportunity to test, for free, one of the world’s most advanced AI servers: the Gigabyte G383-R80, powered by AMD Instinct MI300A APUs.
There's a catch: after the seven-day trial period, the configured price of this high-performance system is $304,207.
The price isn't the only thing that might put you off: besides the strictly time-limited trial, distributors aren't eligible to apply, and users must have a clear project in mind to qualify.
Claiming Gigabyte's G383-R80 offer

To participate, applicants must fill out a form on the Gigabyte Launchpad website. Giga Computing will then review the application based on the commercial value or innovative potential of the proposed project.
If approved, the company will contact the applicant within three business days to confirm details. The trial period lasts for seven days, though extensions of up to two weeks can be requested through a sales representative or via email. Access to the server will be granted within two weeks, and users must initiate their project within three days of receiving the access link.
The Gigabyte G383-R80 server is designed for demanding workloads such as AI training, AI inference, and high-performance computing (HPC). It features a 3U rack-mount chassis and supports up to four AMD Instinct MI300A APUs, which combine CPU and GPU cores for accelerated computing.
For storage, it has eight 2.5-inch NVMe Gen5/SATA/SAS hot-swap drive bays, with options including M.2 NVMe SSDs and U.2/U.3 NVMe SSDs in capacities ranging from 400GB to 61.44TB; expansion comes via 12 PCIe Gen5 x16 slots.
Networking capabilities include onboard 10Gb/s Ethernet ports, and the PCIe slots support network expansion cards with RJ45, SFP+, QSFP28, and QSFP56 connectivity.
A team of researchers at the University of Hong Kong has designed and tested an advanced aerial robot capable of navigating complex environments at high speeds of up to 20 meters per second while maintaining precise control.
Named SUPER, the quadcopter drone uses cutting-edge LiDAR technology to detect and avoid obstacles, even thin wires that have posed challenges for traditional drones.
In research published in Science Robotics (via Techxplore), the team noted that while SUPER has potential applications in search and rescue, its ability to operate autonomously in unknown environments suggests it could also be used for law enforcement and military reconnaissance.
The power of LiDAR for precision flight

Unlike conventional aerial robots that rely on cameras and sensors, SUPER uses 3D light detection and ranging (LiDAR) to map its surroundings and process spatial data in real time, allowing it to function in low-light conditions.
With a detection range of up to 70 meters, the LiDAR system feeds data to an onboard computer that continuously analyzes the environment – at the drone's top speed of 20 meters per second, that's roughly 3.5 seconds of warning before it reaches anything it has detected.
This system enables SUPER to chart two distinct flight paths, one prioritizing safety and another allowing for exploratory movement.
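The paper's planner is far more sophisticated, but the core safety logic of keeping a certified-safe backup alongside a faster exploratory path can be caricatured in a few lines of Python. Everything below – the voxel occupancy grid, the function names, the waypoint checks – is a hypothetical illustration of the two-path idea, not code from the SUPER project:

```python
import numpy as np

# Occupancy states for a LiDAR-built voxel grid (illustrative values).
FREE, UNKNOWN, OCCUPIED = 0, 1, 2

def cell(grid: np.ndarray, point: np.ndarray) -> int:
    """Occupancy state of the voxel containing a 3D point (1m voxels here)."""
    return int(grid[tuple(np.floor(point).astype(int))])

def choose_trajectory(exploratory: np.ndarray, backup: np.ndarray,
                      grid: np.ndarray) -> np.ndarray:
    """Pick between two planned paths, each an (N, 3) array of waypoints.

    The backup path must lie entirely inside LiDAR-verified free space;
    the exploratory path may cross unmapped space, but never a voxel the
    point cloud has already confirmed as an obstacle.
    """
    if not all(cell(grid, p) == FREE for p in backup):
        raise ValueError("backup path must stay in verified free space")
    exploratory_ok = all(cell(grid, p) != OCCUPIED for p in exploratory)
    return exploratory if exploratory_ok else backup
```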
To demonstrate its capabilities, the research team tested SUPER against a commercial drone, the DJI Mavic 3.
While the DJI model avoided larger obstacles, it struggled to detect thin wires of smaller diameters. In contrast, SUPER successfully avoided all obstacles, including 2.5-mm thin wires, thanks to its high-resolution point cloud processing.
The test results also revealed that the drone glided through dense forests, tracking moving targets without colliding with trees or branches.
Google has dropped a major upgrade to the Gemini app with the release of the Gemini 2.0 Flash Thinking Experimental model, among others. This combines the speed of the original 2.0 model with improved reasoning abilities. So, it can think fast but will think things through before it speaks. For anyone who has ever wished their AI assistant could process more complex ideas without slowing its response time, this update is a promising step forward.
Gemini 2.0 Flash was originally designed as a high-efficiency workhorse for those who wanted rapid AI responses without sacrificing too much in terms of accuracy. Earlier this year, Google updated it in AI Studio to enhance its ability to reason through tougher problems, calling it the Thinking Experimental. Now, it’s being made widely available in the Gemini app for everyday users. Whether you’re brainstorming a project, tackling a math problem, or just trying to figure out what to cook with the three random ingredients left in your fridge, Flash Thinking Experimental is ready to help.
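For tinkerers who'd rather reach the model programmatically than through the app, something like the following should work with Google's google-genai Python SDK; note that the experimental model ID here is an assumption that may change between releases:

```python
# pip install google-genai
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# Ask the reasoning-focused Flash model the article's fridge question.
# "gemini-2.0-flash-thinking-exp" is the ID used in AI Studio at the
# time of writing; treat it as an assumption, not a stable name.
response = client.models.generate_content(
    model="gemini-2.0-flash-thinking-exp",
    contents="I have eggs, rice, and spinach left. What can I cook?",
)
print(response.text)
```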
Beyond the Thinking Experimental, the Gemini app is getting additional models. Gemini 2.0 Pro Experimental is an even more powerful, if somewhat more cumbersome, version of Gemini, aimed at coding and handling complex prompts. It’s already been available in Google AI Studio and Vertex AI.
Now, you can get it in the Gemini app, too, but only if you subscribe to Gemini Advanced. With a context window of two million tokens (roughly 1.5 million words), this model can digest and process massive amounts of information at once, making it ideal for research, programming, or tackling ridiculously complicated questions. The model can also call on other Google tools like Search if necessary.
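That tool use looks something like this through the same SDK; again, the model ID and the availability of Search grounding on this exact model are assumptions rather than confirmed details:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Pro Experimental pairs the 2M-token context window with optional
# Google Search grounding; the "gemini-2.0-pro-exp" ID and tool support
# here are assumptions based on the public SDK, not confirmed specifics.
response = client.models.generate_content(
    model="gemini-2.0-pro-exp",
    contents="What changed in the most recent Gemini app update?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```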
Lite speed

Google is also augmenting the app with a slimmer model called Gemini 2.0 Flash-Lite. Built to improve on its predecessor, 1.5 Flash, it retains the speed that made the original Flash models popular while performing better on quality benchmarks. In a real-world example, Google says it can generate relevant captions for around 40,000 unique photos for less than a dollar, making it a potentially fantastic resource for content creators on a budget.
Beyond just making AI faster or more affordable, Google is pushing for broader accessibility by ensuring all these models support multimodal input. Currently, the AI only produces text-based output, but additional capabilities are expected in the coming months. That means users will eventually be able to interact with Gemini in more ways, whether through voice, images, or other formats.
What makes all of this particularly significant is how AI models like Gemini 2.0 are shaping the way people interact with technology. AI is no longer just a tool that spits out basic answers; it’s evolving into something that can reason, assist in creative processes, and handle deeply complex requests.
How people use the Gemini 2.0 Flash Thinking Experimental model and the other updates could offer a glimpse into the future of AI-assisted thinking. The release continues Google's dream of incorporating Gemini into every aspect of your life by offering streamlined access to a relatively powerful yet lightweight AI model.
Whether that means solving complex problems, generating code, or just having an AI that doesn’t freeze up when asked something a little tricky, it’s a step toward AI that feels less like a gimmick and more like a true assistant. With additional models catering to both high-performance and cost-conscious users, Google is likely hoping to have an answer for anyone's AI requests.