팔차원 | Eighth Dimension (2025) - AI Animation Tool
Preparing for the mandatory Korean military service, Joseph, a Korean-American, stores his belongings in a church attic with his father.
STARRING: Haneol Lee, Chris S. Ahn, Do Yuen Ko.
Writer/Director - Haneol (John) Lee.
Producer - Jared LaCroix
Director of Photography - Tanner Grandstaff
Editor - Haneol (John) Lee
Gaffer - Ben Voorhees
Sound Recordist - Tony Pecorini
1st AD - Shi Zhang
2nd AD - Ford Cowan
Grip - Jalen Paulk
Sound Designer/Mixer - JD Tiner
Colorist - Wes Langdon
Painter - Carolina Rocah Lima
Painter - Maxine Giller
Music Producer - Se Young Ko
Saxophone Player - AJ Huang
BTS Photographer - Rahul Koul
Production Assistant - Brian Kim
In association with Contrast Cine and Citation Support
Special Thank You to:
Jim Cummings
Danny Madden
Ben Weissner
Jordan Bellamy
Grayson Proctor
Thomas Percy Kim
Benedict Ballman
Scott Dente
FILM FESTIVALS | AWARDS
ACADEMY QUALIFYING: Short Shorts Film Festival & Asia, SHORTLISTED
Short to Feature Lab: Developed with Jim Cummings (THUNDER ROAD)
“Currently on the festival cycle. Please email for a screening link!”
Making of the AI Animation Sequence - Using AI As Extension of Artistry
Step 1
AI BELONGS IN ARTISTIC CINEMA! It is not reserved only for tech people. IT IS A TOOL FOR ARTISTRY.
Our big philosophy: to create a 120-second AI animation sequence that does not take anything away from the creator. Rather, it should help him execute his vision with clarity.
Main goal: The character is experiencing an out-of-body experience and free expression. So the animation has to be textured and smooth, like an oil painting, and NOT something robotic, clean, or overtly artificial. (Directing this would become my biggest challenge.)
In order to control framing, movement, and timing of every second of the sequence, we shot everything in live action first.
High-key lighting with high contrast.
Lastly, everything was shot at a low frame rate to create a step-printing effect, which ultimately gave us control over the shutter cadence of the animation sequence as well. We also thought it would help create a more organic look in the animation.
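The step-printing idea above can be sketched numerically. A minimal sketch with illustrative numbers (not the production's actual rates): shooting at a low capture rate and conforming to 24 fps means each captured frame is held for several output frames, which is what produces the stuttery, step-printed cadence.

```python
def step_print_map(n_out, capture_fps=12, playback_fps=24):
    """For each output frame at playback_fps, return the index of the
    captured frame (shot at capture_fps) held on screen at that moment."""
    hold = playback_fps // capture_fps  # how many times each captured frame repeats
    return [i // hold for i in range(n_out)]

# 12 fps footage conformed to 24 fps: every captured frame shows twice.
print(step_print_map(8))  # → [0, 0, 1, 1, 2, 2, 3, 3]
```

The lower the capture rate, the longer each frame holds, and the choppier (and more hand-painted) the final animation feels.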
Step 2
In order to control the tone, texture, color, and overall aesthetic of the animation, we hired two painters to paint references for exactly what the AI would draw from.
We did this not only because we couldn't find the exact references we loved on the web, but also because we wanted to keep it true to our first philosophy. We wanted these paintings to mean something.
As you can see above, the paintings are actually part of the story. The character sees these paintings and is inspired to play his own medium of art: his saxophone.
Step 3
We ended up settling on the AI program Runway Gen-1 to create our animation.
It let us plug in our live-action shots and prompt it with our painting references.
Step 4
Without changing any of the detailed adjustments, the AI spat out something quite disheartening.
Despite trying to mimic oil painting, the result was extremely clean, very artificial-looking, and frankly fake.
It failed to preserve the video's integrity. For some reason it started animating and generating video from the painting I gave it… not the video…
It also seemed to pick and choose colors that were not actually present in the paintings.
Finally, it failed to register the scene I was going for. It changed the guy into a girl… and it added another person… There is clearly only one character… LIKE WHAT!!!! UGH
Step 5
Looking at the advanced settings, I discovered that you can change a couple of different things to dial in what you want.
Although there aren't many, each of these toggles drastically changed the output we got.
Structural Consistency - Adjusts how much the output video differs structurally from the input. Higher values result in more significant changes.
I realized I needed to keep this at 0 at all times!
Style: Weight - Determines the emphasis on matching the style reference over the input video. Higher values prioritize the style more.
Seed - A randomly generated number that, when locked, allows for consistent style application across multiple clips, facilitating experimentation while maintaining core output similarities.
This became crucial in keeping similar style in all my animations.
Frame Consistency - Controls the temporal coherence between frames. Values below 1 decrease consistency, while values above 1 increase it.
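One way to picture the workflow with these four controls is to treat them as a settings object, which makes the seed-locking trick explicit. A minimal sketch in Python — the parameter names come from the list above, but the values and the `Gen1Settings` container are illustrative, not Runway's actual API:

```python
from dataclasses import dataclass, replace

@dataclass
class Gen1Settings:
    """The four creative controls described above. Values are illustrative."""
    structural_consistency: float = 0.0  # 0 = preserve the input video's structure
    style_weight: float = 0.5            # how strongly the painting reference dominates
    seed: int = 12345                    # lock to reuse one look across clips
    frame_consistency: float = 1.0       # <1 looser, >1 tighter temporal coherence

base = Gen1Settings()
# A new clip keeps the core look by reusing the locked seed while
# nudging one toggle at a time in small increments:
next_clip = replace(base, style_weight=base.style_weight - 0.1)
```

Thinking of each experiment as "same seed, one toggle changed" is what makes hours of re-prompting tractable instead of random.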
Those were the main creative controls of this AI. Other than Structural Consistency, I had to find the exact setting needed for the style I wanted.
I spent 6 hours experimenting and re-prompting until I got to something I was slightly happy with.
Step 6
I ran out of my paid credits, so I made a separate email and used the free version to get more credits… I was blowing this AI up.
ANYWAY, after hours of prompting, as you can see in the side-by-side comparison above, it looked more like what I was going for in the animation.
It was able to animate on top of the actual video I gave, much like rotoscoping.
Additionally, although the animation was WAY TOO clean for the style I was after, it came very close to the organic, textured look of oil painting.
However, it still had difficulty matching the face to my character's exact expression and movement. The character has his eyes closed the whole time, but for some reason the AI, for the life of me, had to keep the guy's eyes open!
Cinema is a medium where we fixate on facial expression. So getting the facial-expression animation right was non-negotiable.
Step 7
After many, many hours and days of prompting and experimenting with the limited toggles, I ended up with what you see on the left.
I ended up decreasing everything to the minimum. You would think Style: Weight would need to be somewhat high to keep the structure of the painting, but for some reason, when it was increased, the animation ended up looking more artificial.
After a couple more passes of tweaking each of these settings by 0.1–0.2, we ended up with what you see below.
Step 8
What you see on the left is the animation sequences. What you see on the right is the references we were using.
The animation now looked fluid, organic, and nothing like it was made artificially.
After hours of toggling, we got it to keep the eyes closed once, and we kept that SEED number!
The desaturated look also came from lowering our frame consistency further. I have no idea why, because technically that toggle should control frames, not color…? But it helped me!
Copying what we found using the SEED numbers, plus small changes, I was able to create 120 seconds of animation in total with exactly the texture, movement, camera angles, and color I wanted.
I’m DONE RIGHT???? Yeah… no…
Step 9
When I imported it to my editing software Premiere Pro, I had a painful realization.
The original film was shot in 6K RAW on a RED Komodo camera. The AI animation, however, took that incredibly high-definition footage and output it at a much lower quality.
This was a puzzle, because I couldn't just scale the whole film down to match the animation sequence.
On the other hand, if I didn't do that, the difference in actual frame size was quite insane.
The left is the 6K live-action shot. The right is the AI animation after upscaling to the highest resolution in Runway. Yeah, the animation is pretty rough…
We couldn't just zoom into the animation sequence, because that would cause pixelation. So we turned to another piece of software: Topaz Video AI.
Step 10
Topaz Video AI allows you to input a video file and upscale it.
Even if the input video file is small, it let me upscale the file to 6K footage.
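The upscaling math itself is simple: to see how far a low-resolution AI export has to stretch to sit next to 6K live action, divide the target width by the source width. A sketch with assumed dimensions (the Komodo's 6K format is 6144 px wide; the Runway export size here is hypothetical):

```python
def upscale_factor(src_w, src_h, target_w=6144):
    """Scale needed to bring a small AI export up to a 6K-wide frame
    (the RED Komodo's 6K format is 6144 px wide)."""
    s = target_w / src_w
    return s, (round(src_w * s), round(src_h * s))

# A hypothetical 1536x864 Runway export needs 4x to match the 6K width:
print(upscale_factor(1536, 864))  # → (4.0, (6144, 3456))
```

A 4x stretch is exactly the kind of jump that causes visible pixelation with a plain zoom, which is why an ML upscaler was needed instead.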
After doing this we ended up with what you see below.
Step 11
At this point I’m like “AI is really freaking cool…”
I was able to upscale everything to the point I wanted, and there were no issues with the animation until… we got to the color grading stage of the filmmaking.
Because I put this through a generative system, I don't know the color space the animated sequence is actually in, and the footage provides no metadata.
So, when we changed our color space to Rec. 709, the standard color space in the film industry, the animated sequence went completely berserk and turned oversaturated. You can see it below.
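One plausible mechanism for this — an assumption on my part, since the actual color space is unknown — is a transfer-function mismatch: if footage that already carries a display encoding is treated as scene-linear and pushed through the Rec. 709 curve again, mid-tones get lifted sharply. A sketch of the Rec. 709 transfer function showing that double-encode jump:

```python
def rec709_oetf(v):
    """Rec. 709 opto-electronic transfer function (scene linear -> encoded)."""
    return 4.5 * v if v < 0.018 else 1.099 * v ** 0.45 - 0.099

mid_gray = 0.18
once = rec709_oetf(mid_gray)   # correct single encode, ~0.41
twice = rec709_oetf(once)      # accidental double encode, ~0.64
# Each extra pass through the curve lifts mid-tones further, which would
# read on screen as the footage suddenly going bright and "berserk".
```

Whatever the real cause was, this is why missing color-space metadata is such a landmine for generative footage.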
Step 12
I actually lost more data in the color. For some reason the footage went bright, and with my limited knowledge of color grading it was something I couldn't work with.
Therefore, I hired a professional colorist to help fix the issue.
It ended up becoming a big issue…
The actual color grading of big chunks of the footage was no problem. My colorist ended up just working in the mystery color space the AI gave us. I was able to export him a high-quality file, like ProRes 4444, so he could play around with the color he wanted.
What became a big challenge was when we morph into the animation sequence from the live action sequence.
Because we slowly morph into the sequence, those frames have to color-match the RED live-action footage to grade properly.
To color match, you need the color space of each footage. Except… we don’t know the color space of the AI.
So my colorist spent hours color grading frame by frame in the transition section of the sequence. You can see it below.
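What shot-matching roughly does under the hood can be sketched as a statistics transfer: pull each channel of the unknown-color-space frame toward the mean and spread of the reference RED frame. This is a crude illustrative stand-in, not the colorist's actual process:

```python
def match_channel(src, ref):
    """Shift and scale one channel of a source frame so its mean and spread
    match a reference frame's — a crude stand-in for shot-matching footage
    whose color space is unknown."""
    mean = lambda xs: sum(xs) / len(xs)
    std = lambda xs, m: (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    ms, mr = mean(src), mean(ref)
    ss, sr = std(src, ms), std(ref, mr)
    scale = sr / ss if ss else 1.0  # avoid dividing by zero on flat frames
    return [(x - ms) * scale + mr for x in src]

# An oversaturated AI channel pulled toward the RED footage's statistics:
matched = match_channel([0.2, 0.8, 0.5], [0.4, 0.6, 0.5])
```

Doing this by eye, frame by frame, across a morphing transition is exactly the grueling part the colorist took on.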
Step 13
After grueling hours in the color session, we overlaid the film's title over the animation sequence.
It was two months of straight work and trial by fire.
I remember my team and I watching the short afterwards and simply saying “Wow this is the future.”
Reflection
First of all, if you want to see the film email me at haneoljohn@gmail.com. I’m happy to send a screener link ;)
In this project, we made sure AI was not the sole creative generator. Rather, I made sure that I, the director, was in the driver's seat at all times.
With this principle in mind, working with AI was life-changing. It gave me ideas I could never have thought of myself. It also made me realize the shortcomings of AI in many ways.
Finally, it made me realize that making movies truly is for everyone.
I cannot wait for the future where people have the ability to create incredible films from their own homes.
I want to make this future happen in independent Cinema.