The Open-Source Bet on AI Video Nobody's Talking About
Lightricks just made a Linux-style play for the future of video creation. Here's why I think it's going to work.
The most interesting thing about LTX-2 ↗ isn’t what it can do. It’s what Lightricks decided to give away.
In a space where OpenAI locks Sora behind a subscription, Google keeps Veo in a walled garden, and Runway has no public API at all, Lightricks shipped the full model weights. Apache 2.0. Free to use, fine-tune, run locally, build on top of. They published the training code, the inference code, the LoRA trainer, the ComfyUI nodes. Everything.
This is the Linux play for AI video. And I don’t think enough people are paying attention to it.
I’ve spent years watching the AI video space evolve from embarrassing 4-second loops into something that makes cinematographers nervous. I’ve played with most of the major models: Sora, Veo, Runway, Kling. They’re all impressive. They’re all locked. LTX-2 is the first model that makes me feel like the power is shifting somewhere more interesting.
What LTX-2 actually is
Think of LTX-2 as two models fused into one body. There’s a video brain (14 billion parameters) and an audio brain (5 billion parameters), and they talk to each other at every single step of generation. Not sequentially, not one slapped on after the other. The audio and video co-evolve during the diffusion process, which is why the lip-sync actually holds and why the foley sounds like it belongs to the image rather than being randomly assigned to it.
That’s not how most AI video works. Most pipelines generate video first, then add audio as an afterthought. LTX-2 bakes them together at the architectural level.
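To make the distinction concrete, here’s a toy sketch of the two control flows. None of this is Lightricks’ code; the stand-in `denoise_step` function and its conditioning math are purely illustrative. The loop structure is the point: in the bolt-on version, audio only ever sees a finished video, while in the joint version each modality conditions the other at every step.

```python
# Toy illustration, not the LTX-2 codebase. The "models" here are stand-in
# functions; only the control flow matters.
import torch

def denoise_step(latent, t, context=None):
    """Stand-in for one diffusion denoising step."""
    update = -0.05 * latent
    if context is not None:
        # Cross-modal conditioning: nudge this latent toward the other
        # modality's current state (real models use cross-attention).
        update = update + 0.01 * context.mean() * torch.ones_like(latent)
    return latent + update

def bolt_on(steps=30):
    """Typical pipeline: finish the video, then fit audio to it."""
    video = torch.randn(16, 64, 64)
    for t in reversed(range(steps)):
        video = denoise_step(video, t)  # audio is never in the loop
    # Audio only ever sees the finished video, so sync is best-effort.
    audio = denoise_step(torch.randn(1024), 0, context=video)
    return video, audio

def joint(steps=30):
    """LTX-2-style: audio and video co-evolve at every diffusion step."""
    video, audio = torch.randn(16, 64, 64), torch.randn(1024)
    for t in reversed(range(steps)):
        video_next = denoise_step(video, t, context=audio)
        audio = denoise_step(audio, t, context=video)
        video = video_next
    return video, audio
```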
The other specs: 4K native output at up to 50fps, clips up to 20 seconds, trained on licensed data from Getty Images and Shutterstock (more on why that matters below), and it runs on a consumer GPU. An RTX 3060 with 8GB of VRAM. Not an H100, not a data center: the card a student might have in their gaming PC.
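And if you’d rather poke at the weights directly than go through an app, the open release means a few lines of Python get you a local generation. A hedged sketch: this is the `LTXPipeline` API that Hugging Face diffusers shipped for the original open LTX-Video weights; whether LTX-2 loads through the same class, and the exact repo id, are assumptions on my part, so check the model card before running.

```python
# Hedged sketch: diffusers API as shipped for the original LTX-Video open
# weights. The LTX-2 repo id is a placeholder assumption -- verify it on
# the model card.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video",        # swap in the LTX-2 repo id
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()    # helps keep VRAM use in reach of an 8GB card

video = pipe(
    prompt="a close-up of rain hitting a window at dusk, cinematic lighting",
    num_frames=121,
    num_inference_steps=40,
).frames[0]

export_to_video(video, "rain.mp4", fps=24)
```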
LTX-2.3 and what just shipped
Three weeks ago, LTX-2.3 ↗ dropped alongside something that stopped me mid-scroll: LTX Desktop, a free, local, fully open-source video editor built on the same weights as their cloud platform.
No per-generation cost. No internet required after setup. Same model.
I’ve been messing around with the desktop app on my machine, and it’s genuinely usable in a way I didn’t expect from a beta. The LTX-2.3 upgrade itself also fixed the most annoying thing about the previous version: that Ken Burns effect where image-to-video would just slowly pan across your input frame and call it “animation.” Now it actually moves. The prompt adherence is tighter, the hair detail (historically a disaster in AI video) is noticeably better, and they added native portrait video for the first time, trained on actual vertical content, not landscape crops rotated sideways.
For TikTok, Reels, and Shorts creators, this matters.
The platform play
Here’s where it gets strategically interesting, and where I think Lightricks is playing a longer game than most people realize.
LTX Studio ↗ isn’t just their own model wrapped in a nice UI. It’s a production platform that integrates Kling 3.0 Pro, Google Veo 3.1, Runway Gen-4.5, Flux, and a handful of image models alongside LTX-2. You pick your model per shot depending on what you need.
They’re not betting you’ll always want LTX-2. They’re betting you’ll want to work in their environment, from script to storyboard to generation to timeline editor to export. The model is the hook. The platform is the product.
The timeline editor exports .xml directly to Premiere Pro or DaVinci Resolve. The audio-to-video feature (launched with ElevenLabs in January) lets you upload a track and have the video driven by the sound. Timing, pacing, camera motion, all responding to the audio waveform. There’s an enterprise brand kit that keeps visual consistency across a whole team’s output.
As a designer and builder, the product craft here impresses me. These aren’t bolted-on features. They’re telling a coherent story about where the workflow lives.
The honest part
LTX-2 is ranked #36 overall on the Artificial Analysis leaderboard ↗. That’s not a typo. Kling, Runway, Veo, and even a handful of models I’d never heard of before beat it on pure video quality in blind evaluation.
The visual realism gap is real. Long-form temporal stability, keeping a scene coherent beyond 8 seconds, is still a work in progress. If you need the most photorealistic output available right now, Kling 3.0 is probably your answer.
But here’s the thing: LTX-2 Pro costs $3.60 per minute of generated video. Sora 2 Pro costs $30.00.
That’s not a small gap; it’s nearly an order of magnitude. And Sora doesn’t run locally, doesn’t give you the model weights, doesn’t let you fine-tune it on your brand’s visual identity, and can’t be deployed in an environment where your footage never leaves your building.
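Run the numbers on a concrete (hypothetical) project: say ten minutes of raw generations sitting behind a short final cut. That’s about $36 on LTX-2 Pro versus $300 on Sora 2 Pro, before a single re-roll.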
The licensed training data, from Getty Images and Shutterstock ↗, matters enormously for enterprise buyers. It’s the reason a Fortune 500 legal team might actually sign off on using this, while they’d never touch a model trained on scraped data of unknown provenance.
Five million downloads since the January open-weight release isn’t hype. That’s a community building on top of something. The LoRA ecosystem, the ComfyUI integrations, the speed optimizations: the community has already done meaningful work with this model in under three months.
What I actually think this means
The AI video market is on track to go from $32 billion to $133 billion by 2030. Every major player is sprinting. The walled gardens will keep getting better. Sora will keep improving. Veo will keep improving. Runway will keep raising money and building.
But there’s a version of this future where a fully open model, with a thriving developer ecosystem, a clean enterprise licensing story, and a platform that abstracts away which model you’re actually using, ends up being where most professional video work happens. Not because it’s the best model today, but because the developer community keeps making it faster, better, and more specialized. Because fine-tuned variants emerge that outperform the base model on specific tasks. Because it’s running on your hardware, not theirs.
That’s the Linux trajectory. Linux wasn’t the prettiest operating system in 2001 either.
I’m not saying LTX-2 wins. I’m saying the open-source bet is coherent and strategic, and Lightricks, a profitable company with $250M+ in annual mobile revenue and no existential funding pressure, is well-positioned to play a long game here. That’s a different situation than most open-source AI projects, which run on VC patience and hope.
If you haven’t tried it yet: the LTX Desktop app ↗ is free, and it’s the fastest way to form your own opinion. And if you want the cloud version with access to every model, LTX Studio ↗ has a free tier worth poking around in.
Let me know what you make of it. I’m genuinely curious what others in this community are seeing in their own testing.
Linus Ekenstam
31 March 2026


