
Storm Bride - AI Workflow Exploration

Release Year: 2026
Role: All Aspects
Category: Motion Media / Commercial, AI Workflow
Toolkit: Nano Banana 2, Kling 3.0 Omni, Lovart, After Effects, Premiere Pro, Topaz Video, ACE Studio
Overview
Storm Bride is my third project exploring AI-assisted creative workflows. After completing the Ford Mustang Commercial and the YEEZY 350 Commercial, I wanted to push this direction further by creating a realistic live-action-style video centered on a human character. My goal was straightforward: to handle character design, costume design, environment design, shot design, and sound generation almost entirely through AI, relying as little as possible on traditional live-action and motion-media tools. At the same time, I wanted the final result to feel realistic enough that most viewers would not immediately recognize it as AI-generated. Since I had already explored Seedance in earlier projects, I chose a different video-generation platform for this piece: Kling 3.0 Omni.
To begin, I used Nano Banana 2 to generate a virtual female character based on my reference images, then developed a wedding dress design through written descriptions and repeated iteration until I arrived at a version I felt was both elegant and visually distinctive. I continued refining the results until I had a set of three-view character references and close-up detail images for both the character and the dress. Using the same process, I also developed the stormy grassland environment with thunderclouds and dramatic weather conditions. Through discussions with ChatGPT and Gemini, and while continuously generating additional style frames in Nano Banana 2, I gradually finalized the visual direction of the project.
While reviewing these style frames, I developed the storyboard and shot list in parallel, thinking through camera angles, how the character should move, what kinds of animation should happen, and how the lighting and atmosphere should evolve from shot to shot. As I generated more images in Nano Banana 2, I kept refining and adjusting the shot list. Once I was satisfied with the overall shot design, I combined the final shot list with all reference images and used them as inputs for Kling 3.0 Omni to generate the video footage.
From the generated clips, I selected the strongest moments from different outputs and edited them together in Premiere Pro. At this stage it became clear that neither Kling nor Seedance can yet follow a shot list faithfully in every case, which I had anticipated; the workflow therefore still depended heavily on selecting, combining, and refining the best segments from multiple generations. After assembling the edit, I upscaled the footage to 4K with Topaz Video. During post-production I also used After Effects for additional adjustments, including camera animation and masking.
I then used ACE Studio to generate and compose sound effects, and completed the final composite in Premiere Pro.
I am quite satisfied with the final result, but the project also made some of the current limitations of generative AI especially clear, whether in Kling 3.0 Omni or in Seedance 2.0, which I had used previously. With sufficient reference images, current AI tools can maintain fairly strong consistency in faces, body shapes, and objects across many shots, which is extremely promising. Under more challenging camera angles or unconventional camera movement, however, the face can sometimes break down visually, and skin tone may not remain fully consistent from shot to shot. For that reason, even though this video was generated entirely with AI, I do not believe AI-generated images or footage can fully replace traditional video and motion-media workflows at this stage. What they can do is save significant time, especially during style-frame testing, brainstorming, and early visual development, where they are highly effective for exploring ideas quickly.
Style Frames

Exploration Process
