
27 Feb 2026
Wonder Team

Seedance 2 is close to release, so Justin Hackney and Yigit Kirca spent 90 minutes on stream generating sequences, stress-testing prompts from the Wonder community, and building a short film in real time.
Our first impression: Seedance 2 has taken us beyond the era of stitching clips together and into a new chapter of AI filmmaking. Cuts, transitions, match cuts, and sound design baked into the output have changed the game.
From single shots to edit sequences
Previous video models gave you moments. Good moments, occasionally exceptional moments, but isolated moments nonetheless. You'd generate a shot, then another, then spend hours in Premiere trying to make them feel like they belonged together. A lot of the craft lay in the stitching, and the stitching often took longer than the generation.
Seedance 2 works differently. Justin described the output as:
"edit sequences, sound design woven into these different sequences that when you generate a few of them, you can actually use these different pieces to create a commercial, to create a film."
The short film he built to test the model covered zombies, fantasy battles, claymation, pirates, and sci-fi - all featuring himself as the main character, all generated from a single headshot. A claymation sequence morphed into live-action so subtly that Yigit didn't clock the switch until it had already happened.
Match cuts the model figures out itself
The bit that got the most reaction during the stream was how Seedance 2 handles transitions. Justin fed it a start frame and an end frame from two different environments, prompted for seamless transitions, and the model worked out how to match cut between them.
One sequence had his character falling from the sky, which match-cut into landing on a pirate ship. Continuous movement across two completely different scenes. He didn't choreograph it frame by frame. Two reference points, a direction, and the model found the connective motion on its own.
For AI filmmaking workflows, this is far more useful than any resolution bump. You can upscale anything. But getting AI-generated sequences to cut together with the rhythm of actual filmmaking has been hard. A model that handles continuity of motion between scenes opens up a different kind of production.
Consistency across complex prompts
Community prompts during the stream showed another side of it. One creator sent a detailed brief describing emotional tones and visual styles without any timestamps or complex shot breakdowns. The model chose its own cuts, framed its own sequences, and produced something with consistent pacing.
Yigit put it simply: "It's crazy how consistent it is - they didn't even give timestamps or anything, but even the cuts and the framing of the cuts were super consistent."
You can describe the feeling and mood of a scene, and the model turns that into edit decisions. For anyone used to AI tools that need extremely literal instruction, that's a big leap forward.
Our community submitted prompts throughout the stream, and the pattern was hard to miss: people who wrote like directors got better results than people who wrote like prompt engineers.
Mood and movement beat technical specifications. We've been saying this for a while, and it was great to see it play out live.
Speed changes how you work
Seedance 2 is fast.
When you can test an idea and try a different angle in a minute or two, the workflow starts to feel more like directing than engineering. Justin worked like a filmmaker on set: watching takes, picking the best moments, deciding what to try next.
The timeframe between idea and proof-of-concept keeps shrinking.
Where AI filmmaking goes from here
Generating coherent edit sequences rather than isolated shots means filmmakers can spend time on the decisions that actually matter: which stories to tell, how scenes should feel, where the weight sits.
We're tool-agnostic here at Wonder, working with whichever models produce the best results at any given moment. Right now, Seedance 2 is simply the best we've seen yet.

