Dylan Hoffman and Kelly Loosli, Center for Animation
I knew from the beginning that this project – centered on improving the collaborative process of making a 3D animated film across several “stages,” by artists of different specialties, temperaments, and backgrounds – would be two-fold. The first phase was learning more about the process itself and how each stage related to the others. The second was zeroing in on an area that has been a weakness for BYU animated films in the past: character effects and cloth simulation. The project evolved considerably over the course of the year, with far more time and value placed on the first phase than I foresaw. This was due largely to my opportunity to work at two different studios, Walt Disney Animation and Lucasfilm Animation, as well as to learn from various BYU mentors at PIXAR. Given that I was exposed to three very different companies, all hugely successful and effective in their own 3D animation processes, I took advantage of the time to learn all I could about how BYU could learn from and emulate them.
Making a 3D animated film is a remarkably complicated process. It can take professional studios millions of dollars and upwards of ten years to bring a film from conception to the big screen. Though the process has been around for over 20 years, beginning with PIXAR’s revolutionary hit Toy Story in 1995, it is still in constant evolution, as made clear by the varying ways in which major studios operate. The BYU Center for Animation has always sought to emulate “real world” studios in its own process, believing that doing so benefits not only our films but also the students involved in them. It was for this reason, as well as the difficulty some of our past films have had in finishing on time (if at all), that I felt we could benefit greatly from reevaluating our process.
This process, in its barest, simplified form, appears sequential in nature (Figure 1). Concept artists sketch, paint, and sculpt initial ideas for characters and sets, while story teams are hard at work fleshing out the evolution of the plot and characters. The next phase involves creating the “assets” that will be used in the film. The advantage of animation is also its greatest challenge: literally everything must come from the artist’s imagination. Asset creation begins with 3D modeling, where artists use a computer to “sculpt” and “model” digital characters, as well as the sets they will “live” in. Here the process breaks into two tracks that (in theory) run simultaneously: rigging and texturing. During texturing, “look” artists set about bringing color and texture to a world that, upon leaving modeling, is grey and flat, much like an initial clay sculpture. Afterwards, lighting artists drop in digital “lights,” much like the physical lights used on a live-action set, where everything from color and intensity to direction and focus must be considered in order to correctly capture a film’s mood.
While look artists move forward, character “riggers” begin the largely technical process of giving the characters an underlying skeleton and movement system, with corresponding controls, so that what would otherwise be a lifeless sculpture becomes an interactive puppet. Once rigging is complete, animation can begin. This is where the acting and subtleties of emotion take place, with animators painstakingly moving each and every character. Once animation is complete, simulations can begin. Typically there are two groups: the “special effects” team and “character effects,” also called technical animation. Special effects are typically environmental (water, snow, rain, fire, etc.), while character effects are tied directly to the actions of a character; common examples are clothing and hair, whose movement must follow the movement of the character itself. Once all of this is completed, it is brought together by a team of “render wranglers,” who use networks of computers called “render farms” to render out each and every frame. This is incredibly time consuming, with a single frame taking anywhere from 12 to 72 hours to render, depending on how many simulations, lights, and characters are present.
My original project proposed researching more closely how the phases of modeling, rigging, and animation were related, and how their relationship could be improved. However, upon further investigation, particularly by watching how studios like Disney Feature and Lucasfilm Animation operate, I realized how inseparably connected the ENTIRE PROCESS is. A 3D model must meet very specific parameters in order to be correctly rigged. But if the 3D modeler begins with a design from the concept team that hasn’t been adequately thought through with regard to how it will move and deform in 3D space, the modeler is set up to fail. Likewise, a rig must balance a large feature set with efficiency to work well for an animator, BUT if attention isn’t paid to what shapes are created during movement, or if functionality isn’t built in to account for things like changes in silhouette, both the lighting and simulation teams will run into major problems down the line. In other words, the 3D filmmaking process, or “pipeline” as it is called in the industry, is actually much more cyclic than we originally believed.
There is no way to account for every need and outcome in any one phase. If concept artists were bogged down worrying not only about appealing characters but also about how their silhouettes would affect lighting and how their clothing would move and flow, the entire process would slow to the point of being impractical. Iteration, or repeating a process in order to get an improved result, is not a foreign concept among artists. However, it often happens in isolation, with a concept artist iterating on a design several times before considering it “done” enough to pass on. The flaw in this way of working is that valuable time is lost, often only to find (by pushing the design through the pipeline) that a “completed” design will not work at all. I found that larger studios, while they operate using the same linear pipeline we use at BYU, go through it MUCH more quickly, doing fully rendered “tests” that include everything from texture to animation to effects, in order to see how it is all working together. The process then begins anew, using what was learned from the tests to improve every phase, beginning with “exploration” in story and visual development. For example, story might originally call for a character or sequence that, on the page, appeared very exciting and appealing, but that, as it was pushed further through the pipeline, proved far more complex and time consuming for the rigging and special effects teams. The story team could then reevaluate whether the proposed designs were worth the time and resources they would require.
We will apply these findings to the pipeline at BYU by replicating, as closely as possible, the process used by major studios. Iteration will remain a key part of what we do, but it will take place throughout the entire pipeline. All designs, models, and assets will be evaluated in context, requiring a rough texture and lighting pass before being considered truly complete. This allows the entire team to remain involved throughout the filmmaking process, rather than waiting for “their turn” to participate. Teams like character effects, which historically were given anywhere from a few weeks to a few months to produce desirable results, will now be able to start much earlier, with iterations taking place as the whole team works simultaneously.
Figure 1: The “typical” pipeline, wherein every step leads to the next, appears very linear. This project revealed that, while the diagram is accurate, “real world” studios treat this process as one step in a larger, iterative cycle, repeating it until the desired result is achieved.