Description
Weta Digital has a proud history of bringing groundbreaking films to the big screen. We’re known for performance-driven animated characters such as Gollum, Kong, Neytiri, and Caesar as well as well-known fantasy worlds like Middle-earth in Lord of the Rings and Avatar’s Pandora. Along the way, our work has earned six visual effects Academy Awards®, twelve Academy Sci-Tech Awards, and six visual effects BAFTA Awards, in addition to the 34 Visual Effects Society Awards awarded to us by our peers. In this talk, we’ll explore the innovation behind the magic and share an inside look at the proprietary pipeline we’ve built on top of Autodesk Maya that enables our artists to create their best work.
Key Learnings
- Have a clear understanding of the benefits of Maya for visual effects pipelines.
- Understand how Weta Digital identified areas to extend our Maya pipeline.
- Be familiar with Weta Digital’s suite of custom tools.
- Understand how Weta Digital uses these tools today and how we plan to implement them going forward.
Speaker
- Joe Marks: Joe Marks is Weta Digital’s Chief Technology Officer, overseeing the company’s technology initiatives across visual effects and animation. Joe brings 35 years’ experience as a technology executive. He was most recently Executive Director of the Center for Machine Learning and Health at Carnegie Mellon University, where he worked at the forefront of innovation in digital healthcare. Previously, he was Vice President and Research Fellow at Disney Research, leading research and development across labs in Pittsburgh, Zurich, Boston, San Francisco, and Los Angeles, and Research Director at Mitsubishi Electric Research Labs.
JOE MARKS: Hi, everyone. My name is Joe Marks, and I'm the Chief Technology Officer at Weta Digital, and I'm here to talk to you about our cloud pipeline initiative, which we're doing in partnership with Autodesk.
So most of you on this call will have heard of Weta Digital. We've got a storied history, starting with one person and one machine in Wellington, New Zealand. And have achieved a worldwide reputation in VFX with awards, software, publications, patents, and collaborations.
So we have our own pipeline that we use internally, like pretty much every other major film studio, game studio, and VFX studio. There's tremendous duplication in those pipelines, as you probably know. One of the Sci-Tech Awards on the previous slide, which we won this year, was for our hair simulation. But Disney, ILM, and Pixar also won awards for hair simulation in this cycle. And three of those four groups include past or present colleagues of mine, so I know firsthand how much duplication there is.
Basically, studios keep their pipelines proprietary and internal. We've got new management, and we've made a decision to take our pipeline and put a version of it in the cloud that we will make available via subscription to anyone. And that's a relatively new thing in this industry.
So when we decided to do this, we started out with some goals that we wanted this pipeline to have. The very first one is that we want it to be capable. We want it to be something where people can potentially do Weta-caliber work, and basically all types of commercial work. So it has to be fully capable.
We also wanted it to be familiar. So rather than build all the software ourselves by extending our own tools into a complete pipeline, we want to do what everybody else does when building pipelines: take existing, familiar software, beginning first and foremost with Maya, and put that in the cloud. We'll form partnerships so that this cloud offering will be familiar to people coming right out of film school, or moving from one job to the next. They'll be minimally competent in it from the start, and then they can add capabilities as they need them.
We wanted it to be accessible. We want people all over the world to be able to use this without special hardware, and without the funds to buy that hardware or build a server farm. And that's where the cloud comes in.
We want it to be affordable, with a subscription model priced in a way that is commercially sensible for people. And we want it to be community-oriented. If some of you see echoed in these terms the kind of characteristics you'd see in Adobe's Creative Platform, that's exactly right. And in fact, what the Adobe Creative Platform is to graphic designers, we want this cloud pipeline to be to VFX and animation folks.
So taking those goals and putting them into an engineering approach, we're going to have software from multiple companies in there. As I said, beginning with Maya and our own software, and others, as well. We're open to that.
We're going to go all in on the cloud. We're not looking at a-- sorry about that, my system decided now's a good time to do software updates. Everything is in the cloud, and that's a little different from what some other studios are doing. We're not going to do a hybrid model, for engineering purposes. So we're going to have everything in the cloud and accessed over a remote terminal, which means you don't need specialized hardware. You just need a good internet connection and a smart terminal. And insofar as one can make the complicated business of VFX and animation simple, we're going to go with ruthless simplicity, to make it easy to engineer and easy to maintain.
So what does that actually look like, in terms of what capabilities will be in there? So you can see on the screen here, this is our model from the very beginning, when we were thinking, well, at least we'll start with Autodesk and Maya. In fact, by the time you see this, there may be additional partners that will offer their software in this. But this is a complete pipeline built on Autodesk software with Weta plug-ins in there. So this is our roadmap. It will take us some time to do all of this.
And it's a basic USD pipeline, completely in the cloud. You've got the Weta magic from the plug-ins that are listed here, A through Q, and I'm going to show examples of many of those. It's going to be familiar to all CG artists because, at its heart, it is a Maya pipeline. But it has the additional capabilities from the Weta software, which can be learned over time as needed.
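To make the USD part of that concrete, here is a minimal sketch, not WetaM code, of the kind of layer composition a USD-based pipeline relies on; it assumes the standard pxr Python bindings, and the department layer file names are purely hypothetical.

```python
# A minimal sketch (assumed pxr USD bindings; hypothetical layer names):
# a shot stage composed from department sublayers, strongest opinion first.
from pxr import Usd

stage = Usd.Stage.CreateNew("shot_010.usda")
root = stage.GetRootLayer()
for layer in ["lighting.usda", "anim.usda", "layout.usda", "assets.usda"]:
    root.subLayerPaths.append(layer)   # later entries are weaker opinions
root.Save()
```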
Artists will be able to run a thin client, connecting over TCP/IP or something similar, initially to AWS Local Zones, but potentially to other service partners. The Local Zones are a very important part of this vision. They're not available all over the world, but they will be soon, and that will reduce the latency and make the all-in-the-cloud offering supportable.
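As a small illustration of why Local Zones matter, the sketch below, which assumes boto3 and configured AWS credentials and is not part of any WetaM client, simply lists the Local Zones a region currently exposes, so you can see whether one is near your artists.

```python
# List AWS Local Zones visible in a region (assumes boto3 and AWS credentials).
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
response = ec2.describe_availability_zones(AllAvailabilityZones=True)
for zone in response["AvailabilityZones"]:
    if zone["ZoneType"] == "local-zone":
        print(zone["ZoneName"], zone["OptInStatus"])
```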
It'll be completely self-contained, no need to be interoperable with anything external. And that's to keep the engineering simple and maintainable. And we're going to use it ourselves to begin with.
One of the goals that I have is to expand our workforce globally, beyond New Zealand. We need a pipeline that our global colleagues are going to be able to use for some projects, so we're going to use this ourselves first. Then we're hoping to make it available to everybody as a subscription, after we've kicked the tires on it, by 2022.
So let me show you some examples of the kinds of tools that we hope to add to Autodesk offerings that you're already familiar with on this call. So let's start with some modeling software, and we're going to start with some vegetation.
Lumberjack is Weta Digital's vegetation tool that handles digital vegetation in all areas of the pipeline. From modeling to simulation and animation, scene layout, and final rendering. And has been used extensively on a number of movies, including all three films in The Hobbit trilogy, Wolverine, and Dawn of the Planet of the Apes, and was built exclusively for feature film use.
NARRATOR 1: The Lumberjack modeling package is a Maya plugin which provides a rich set of tools for creating and managing digital vegetation assets. An asset can either be created from scratch, or by extending an existing [INAUDIBLE] model. The core Lumberjack skeleton is a set of curve hierarchies which are used to animate, skin, and drive many aspects of the asset.
Many tools are provided for explicit modeling and editing, such as branch drawing, brushing, and cutting. Geometry is mostly procedurally generated, but can be provided as [INAUDIBLE] geometry when very specific art direction is required. [INAUDIBLE] tools are available to produce a high quality result, which will remain robust during displacement and deformation. A growth simulation system complements the modeling, and allows artists to create realistic, art-directable trees generated using biologically accurate growth rules.
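To give a rough feel for what a curve-hierarchy skeleton driven by growth rules means in practice, here is a toy sketch, emphatically not the Lumberjack API, that grows a branching set of curve segments from a few simple parameters.

```python
# A toy branching-skeleton generator (illustration only, not Lumberjack).
import math, random

def grow(base, direction, length, depth, branches=3, decay=0.7):
    """Return (start, end) segments of a recursively branching skeleton."""
    if depth == 0 or length < 0.05:
        return []
    end = tuple(b + d * length for b, d in zip(base, direction))
    segments = [(base, end)]
    for _ in range(branches):
        # Perturb the parent direction to get a child branch direction.
        jittered = [d + random.uniform(-0.5, 0.5) for d in direction]
        norm = math.sqrt(sum(c * c for c in jittered)) or 1.0
        child_dir = tuple(c / norm for c in jittered)
        segments += grow(end, child_dir, length * decay, depth - 1, branches, decay)
    return segments

skeleton = grow((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), length=1.0, depth=4)
print(len(skeleton), "curve segments")
```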
JOE MARKS: Let me stop there, because I want to show other things as well. But a couple of things to point out here. One is that a lot of my examples are from older movies. This doesn't mean the tools are old; we've been updating them constantly. But it's very difficult to get permission from studios to show work from productions, and it's easier to get it from older productions. So the movies are old, but the tools are constantly updated and are modern in every respect.
The other thing to point out is that that was a window in Maya. A lot of our tools are tightly integrated with Maya, and that will make it easy to offer them as plug-ins in the cloud offering. So Lumberjack is about growing individual trees, but for many, many things that we do, we really need whole forests.
NARRATOR 2: To create the forest covering the mountain, we used a simulator called Totara that grew the forest procedurally as a single ecosystem. It simulated 100 years of growth and decay, as individual trees and plants competed for light and resources. All growth is simulated in a physically dynamic environment, where the branches bend under their own weight, and also under the weight of the snow we added later.
The fully dynamic forest was especially helpful when it came time to destroy everything. The trees reacted to the force of the avalanche, shaking off snow as they swayed, bent, and fell over.
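The ecosystem idea can be sketched in a few lines; this toy loop, which is not Totara, just shows the shape of the computation: individuals competing for light each simulated year, with shaded plants growing slower and sometimes dying off.

```python
# A toy ecosystem-growth loop (illustration only, not Totara).
import random

trees = [{"x": random.uniform(0, 100), "height": 0.5} for _ in range(200)]
for year in range(100):
    for tree in trees:
        # Taller neighbours within 5 units shade this tree.
        shade = sum(1 for other in trees if other is not tree
                    and abs(other["x"] - tree["x"]) < 5
                    and other["height"] > tree["height"])
        tree["height"] += 0.3 / (1 + shade)           # shaded trees grow slower
        if shade > 4 and random.random() < 0.05:      # heavy shade can kill
            tree["height"] = 0.0
    trees = [t for t in trees if t["height"] > 0.0]   # decay: remove dead trees
print(len(trees), "trees survive after 100 simulated years")
```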
JOE MARKS: So the natural environment features in a lot of our movies, and the tools you've just seen, Lumberjack and Totara, are very important to us. But the built environment is also important for many things. Here is our CityBuilder tool, which was used most recently on Mulan.
[MUSIC PLAYING - "REFLECTION"]
JOE MARKS: There's another element of vegetative growth: what you get on your face and your head. Hair and fur are very important for a lot of our characters as well, and here I'll show you the Barbershop tool.
NARRATOR 2: Barbershop development began six years ago, and was designed from the ground up to maximize artistic control and minimize the time spent endlessly tweaking parameters. The system's focus is on the direct manipulation of hair strands, while simultaneously allowing our artists to work at the full resolution of the final groom. There is no interpolation of new strands at render time, instead giving our artists complete control over 100% of the groom. And they can either work in broad strokes, or right down to the individual strand level.
What we're going to show you here is a very quick overview of how we created the grooms for one of our characters from The Hobbit, Bombur. What you will see are some of our most commonly used tools, all of which work by directly manipulating strands of the groom. As you'll notice, we also have ways to work more broadly across the groom, so we don't have to manipulate every single strand if we don't wish to.
But as our artists have found, they enjoy having direct access to every single strand. We've found this to be invaluable for matching very specific reference and hitting the notes from our clients and directors. For example, let's break down Bombur's mustache.
We start by growing the hair from the surface, quickly adjusting the length, giving it a quick brush, and then we're straight into painting a density map. Our maps, by the way, are all stored internally on the fur system, so there's no need for texture maps. After that, our artist proceeds by adjusting the length, first scaling, and then smoothing the lengths. This is one of several ways we have for controlling length, by the way.
Our artist then begins to flesh out the rough volume by sculpting the fur directly. Perhaps going back and adjusting the length slightly, before continuing to refine the silhouette and making it look a bit more like a mustache. And now to add some details. Some clumping, adding a little break up, giving the hair a bit of a wave and curl. Even adding a bit of a twist to the mustache, like any good mustache would have. Now that we've got one side looking good, let's move to the other side, which is quick and easy to do with our Clone tool.
And now for the finishing details: adding some asymmetry, doing a bit more detailing, and then, what easier way to groom a mustache than by giving it a trim directly with our aptly named Trim tool? Another way of adjusting lengths, as you can see.
Of course, that's a very brief and simplified overview of our process for creating grooms. As you can see here, our artists are now moving onto Bombur's beard, and using many of the same tools.
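For readers who want a mental model of what direct strand manipulation means computationally, here is a toy sketch, not the Barbershop API, that treats strands as point arrays and applies trim- and clump-like operations to them.

```python
# Toy strand-level groom operations (illustration only, not Barbershop).
import numpy as np

rng = np.random.default_rng(0)
# 1000 strands of 8 points each; each strand meanders away from its root.
strands = np.cumsum(rng.normal(0.0, 0.1, size=(1000, 8, 3)), axis=1)

def trim(strands, max_length):
    """Scale back strands whose tip-to-root distance exceeds max_length."""
    lengths = np.linalg.norm(strands[:, -1] - strands[:, 0], axis=1)
    scale = np.minimum(1.0, max_length / np.maximum(lengths, 1e-6))
    roots = strands[:, 0:1]
    return roots + (strands - roots) * scale[:, None, None]

def clump(strands, strength=0.3):
    """Pull strands toward the mean tip, with the effect strongest at the tips."""
    mean_tip = strands[:, -1].mean(axis=0)
    falloff = np.linspace(0.0, strength, strands.shape[1])[None, :, None]
    return strands + (mean_tip - strands) * falloff

groom = clump(trim(strands, max_length=1.5))
```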
JOE MARKS: I'll stop there. So that gives you an idea of the Barbershop tool. That was modeling; now I'll mention rigging, basically the puppeteering of models. Maya has excellent tools for this. Sometimes we want higher-performance feedback for our artists, so we use a tool called Koru, which is really an evaluation engine that speeds up the computation of the rigging.
And there's no audio on this, but this is what aerobics classes look like in Wellington, New Zealand. You go to a gym down there and you see this kind of thing all the time. But the point is that this is a real-time performance, so the artists can interact and see what they're doing when they operate the rigs in real time, or at least near real time.
Actually, every time I look at this, it looks really creepy, doesn't it? But it is fun. Yeah, OK, we've got to stop there.
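Koru itself is proprietary, but stock Maya exposes an analogous lever; this minimal sketch, run from Maya's Script Editor, just switches the built-in evaluation manager to parallel mode, which is the same kind of rig-evaluation speed-up that an engine like Koru pushes much further.

```python
# Switch Maya's evaluation manager to parallel mode (stock Maya, not Koru).
import maya.cmds as cmds

print("current mode:", cmds.evaluationManager(query=True, mode=True))
cmds.evaluationManager(mode="parallel")   # other options: "serial", "off"
```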
All right, now let's move on to animation and simulation, another important part. As you can see, you can read along here; you already know the Maya tools listed, so I'm focusing on the Weta additions to those Maya tools. We've got some animation tools around faces, which are very important to us.
I'm actually going to skip this video, because this will be one of the later additions to the cloud pipeline, because it requires on-set scene capture. But it is something that we're known for, right from the Lord of the Rings movies, up to the Apes movies, and so on. And eventually that will be something that we would want to add, but we've got to solve the hardware problem there to make that widely available.
But let me instead go on to simulation. There are pretty good simulation tools out there already, but we really push the envelope on movies like Avatar and others. So we have our own simulation tools for bubbles and for water; and since we're always blowing things up, of course, for the smoke that comes from that and for other kinds of atmospheric effects.
Collectively, we call those tools Loki. And then we have a physical simulation system called Odin that combines all of these, because sometimes you want multiple different systems combined together.
NARRATOR 4: The motivation behind Odin was to achieve a unified model that can scale to very large numbers and turn around existing simulations a lot quicker. This is particularly important when dealing with very tight deadlines or last-minute changes. We had existing solvers that could run on distributed systems or were multi-threaded, but none of them worked well together or solved the specific effects-related problems.
As soon as large-scale or complex interactions were required, artists had to resort to time-consuming workarounds, or break up the existing simulations into more digestible chunks. Scaling was a big issue, as artists were limited by the fastest computer in the company and had to rely on these specialized resources. Depending on the shot, these machines would eventually become the bottleneck, and turnaround times were hard to control or predict.
Odin's simulation domain is based on an unbounded and sparse multi-level hierarchy which controls all embedded simulation objects, which currently include volumetrics, particles, rigid bodies, and for the sake of this demonstration, also rubber ducks.
All algorithms in Odin are parallel at both the thread and machine level, making resource management flexible for productions and adjustable for the task at hand. It improves on our previous load-balancing schemes by considering the requirements of multiple different physics problems at runtime. A cache-efficient memory layout and a compute kernel infrastructure with SIMD optimizations contribute to a significant performance increase.
Artists interact with Odin through Weta's node-based Synapse effects framework, allowing for flexible high- and low-level control.
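To make the thread-and-machine parallelism a little more tangible, here is a toy sketch, not Odin, that splits one naive particle step across worker processes; the real system adds load balancing, SIMD kernels, and coupling between solvers on top of this basic pattern.

```python
# A toy domain-decomposed particle step across processes (illustration, not Odin).
import numpy as np
from multiprocessing import Pool

def step_chunk(args):
    positions, velocities, dt = args
    velocities = velocities + np.array([0.0, -9.8, 0.0]) * dt   # gravity only
    return positions + velocities * dt, velocities

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = rng.random((1_000_000, 3))
    vel = np.zeros_like(pos)
    chunks = list(zip(np.array_split(pos, 8), np.array_split(vel, 8), [1 / 24] * 8))
    with Pool(8) as pool:
        results = pool.map(step_chunk, chunks)
    pos = np.vstack([p for p, _ in results])
    vel = np.vstack([v for _, v in results])
```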
JOE MARKS: So let me stop there, but point out that for many small studios, having the compute resources to do simulations like these would be impossible. They wouldn't be able to afford that hardware, because they wouldn't be using it all the time, and it's expensive to buy.
By being in the cloud and having scalable compute resources, it enables people, small studios, to think about doing complicated simulations like this with the software that we would provide.
All right. Moving now to a little later in the pipeline, the shading, lighting, and rendering. Another area where we've got some unique tools that we will make available as part of the cloud offering. So I'm going to talk here about Physlight and Manuka.
So Manuka is our spectral renderer. And one of the things that we often do is combine live action footage with CGI. And to do that in a realistic way really requires that the CGI be physically accurate. And that's what you're going to see here.
[HEROIC MUSIC PLAYING]
NARRATOR 5: Physlight allows us to capture and recreate real-world lighting more accurately than ever before. Combined with Manuka, our full spectrum light simulation engine, we're able to achieve new levels of photographic accuracy with our digital characters and environments, while being faithful to the intentions of the director and cinematographer on set.
We begin with traditional, high-dynamic range image capture techniques, taking panoramas of the scene at multiple exposures to reconstruct the relative brightness of light sources and reflective illumination. To this process, we add spectral sensitivity by analyzing the response of the camera sensor to light at each wavelength. We can then accurately reconstruct the absolute number of photons hitting each pixel on the sensor and their wavelengths.
This gives us a complete, physically accurate description of the light energy arriving at the capture location, independent of the capture device used. Manuka takes this description of the light energy at each wavelength, and propagates it through the scene until it reaches the camera. In this case, the ALEXA 65 digital cinema camera.
Once established, we can use the captured light to create lighting for new scenes. Physlight's camera model allows us to properly expose the shot using camera settings such as F-stop and ISO that match the ALEXA. Virtual light sources are controlled using real-world units, such as lumen output and Kelvin color temperature. Any practical light source that we have physical data for, such as a torch, can be used to light the scene just as a cinematographer would.
Manuka models the light transport in the scene exactly as it occurs in the real world, and Physlight allows us to capture that scene exactly as a real camera does, giving our artists the best integration into live action footage, and the tools to create completely digital scenes that feel like real photography.
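Two of the quantities the narration mentions are easy to write down; this sketch, which assumes nothing about Physlight's internals, shows Planck's law turning a Kelvin color temperature into a spectral power distribution, plus a simple relative exposure factor built from F-stop, ISO, and shutter time.

```python
# Blackbody spectrum from color temperature, plus a relative camera exposure
# factor (textbook formulas, not Physlight code).
import math

def planck(wavelength_nm, temperature_k):
    """Blackbody spectral radiance at the given wavelength and temperature."""
    h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
    lam = wavelength_nm * 1e-9
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * kb * temperature_k)) - 1)

def exposure_factor(f_stop, iso, shutter_s):
    """Relative exposure: proportional to shutter time and ISO, inverse to N squared."""
    return shutter_s * iso / (f_stop ** 2)

spd = {nm: planck(nm, 3200) for nm in range(380, 781, 20)}   # tungsten-like source
print(exposure_factor(f_stop=2.8, iso=800, shutter_s=1 / 48))
```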
JOE MARKS: Now this may sound very exotic, and obviously we use it for very high-end work on movies. But we think these tools will be interesting to other people as well. In fact, yesterday I got an inquiry from a furniture company that is very interested in Physlight and Manuka, because they want to render CGI of their furniture offerings with their lights and in real scenarios, houses, apartments, offices, and so on, to give as realistic an image or video as possible of their actual physical product.
So even though we use it for very exotic movie making, this kind of work, these kinds of tools may be more broadly applicable. And that's what we hope to find out when we put them in the cloud and make them available to everybody.
Near the end now, compositing and editing. Very important to us when putting a movie together. And let me show you our deep compositing tool. This one has no audio, but it's kind of self-explanatory. And this is important.
It's basically image compositing, but with a depth element. We can get that depth from CGI, or from cameras: stereo rigs, multi-camera rigs, or LIDAR-type depth cameras. And this allows us to do a better job of compositing complicated shots together.
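The core trick is easy to show in isolation: keep several depth-sorted samples per pixel and merge them with the standard over operation. The sketch below is a generic illustration of that idea, not Weta's compositing code.

```python
# Merging one pixel's deep samples front to back (generic illustration).
def deep_over(samples):
    """samples: (depth, color, alpha) tuples for a single pixel."""
    color_out, alpha_out = 0.0, 0.0
    for depth, color, alpha in sorted(samples, key=lambda s: s[0]):
        color_out += (1.0 - alpha_out) * color * alpha
        alpha_out += (1.0 - alpha_out) * alpha
    return color_out, alpha_out

# CG samples and camera-derived samples interleave correctly because the merge
# is ordered by depth, not by which layer the sample came from.
pixel = [(2.0, 0.8, 0.5), (1.0, 0.2, 0.7), (3.5, 1.0, 1.0)]
print(deep_over(pixel))
```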
And then lastly, this is actually our most boring video, but it's our most used tool in the studio. And I'll just let it speak for itself.
NARRATOR 5: Hidef is an essential part of our shot review process. It is a mature media review system which provides a window into the work we do here as it happens: a tool for browsing, playing, reporting, and instantly accessing production data about every shot, take, and edit for a feature film.
We've used Hidef in dailies at Weta Digital since the first Lord of the Rings film in 2001 to review our visual effects work. It is a video player for 2D, 3D, HD, and high frame rate, up to 60 frames per second. Hidef is a platform for navigating and browsing all moving images in the production database. A hub for all information attached to a particular clip. A medium for sharing presentations with clients. A way of playing selected clips remotely, online and offline, synchronized across multiple machines.
Hidef is integral to, and integrated with, the production VFX ecosystem.
JOE MARKS: So let me stop there. An important part of any pipeline, both ours and the one we will put in the cloud, is the asset management, with assets broadly defined: everything from the actual models all the way to finished shots.
And when a movie scales up, it's important to get that right. That will also be in the cloud: we will develop it, bringing it over from our existing system and putting it into the offering.
So now just to give you an idea, just to flex our muscles a little bit, when you put all of those tools together you can do some pretty amazing stuff. I'll just show you some work from Gemini Man.
HENRY BROGAN: You're allergic to bees. You hate cilantro. You always sneeze four times.
JUNIOR: Everybody hates cilantro.
NARRATOR 6: Junior is a fully digital character modeled to look like a 23-year-old Will Smith. We structured our capture approach to make sure Will could stay with one character for extended periods, and would always be able to act with a partner. This allowed him to preserve the emotion he was conjuring for each performance.
The true challenge with Junior was not likeness, it was recreating the specific facial movements that Will Smith uses to communicate and express emotion.
JUNIOR: We were talking to him the whole time.
NARRATOR 6: Audiences know Will's face, so we knew any differences in the acting would stand out, particularly with the heightened detail of 120 frames per second in 4K.
JUNIOR: 'Cause I'm the best.
NARRATOR 6: This pushed us to expand our facial rig to allow animators to go deeper into the intricacies of the muscle movements, with 100 main muscle controllers and over 300 secondary controls. Controls for details like how the lips stick together and deform surrounding tissue, and how subtle changes in eye shape communicate emotion, were all accessible to the artists to help refine the performance.
Animators began by building an actor puppet for current-day Will Smith to validate their understanding of Will's underlying muscle movements, and to ensure that they were getting everything they could from what was captured. They then applied this learning to the Junior puppet, where they would craft the more youthful performance.
JUNIOR: You never seem to notice, so we just keep going.
NARRATOR 6: New deep shapes technology was developed to add temporal details beyond traditional blend shapes by weighting movement based on muscle depth, altering how the fluidity of those movements is perceived on the skin's surface. The result allowed for a more youthful response to the same series of muscle movements.
A procedural pore distribution system, combined with actor-accurate wrinkle flow line maps, produced pore-level 3D flex and compression at a micro-geometry level. We added separate pheomelanin and eumelanin absorption parameters to increase the accuracy of our subsurface skin shading model. This also meant blood flow in the skin responded more accurately to the delay line controls for gradual transitions in color based on compression.
To finish the look, we added an independent sweat layer to better reflect exertion and stress.
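The deep shapes idea, that deeper tissue lags the surface, can be caricatured with per-vertex temporal smoothing; the sketch below is only a toy analogy for the concept described above, not Weta's implementation.

```python
# A toy depth-weighted response to a blendshape curve (analogy only).
import numpy as np

def deep_shape_response(weight_curve, depth, lag=0.5):
    """weight_curve: per-frame blendshape weights; depth: per-vertex values in [0, 1]."""
    out = np.zeros((len(weight_curve), len(depth)))
    state = np.zeros(len(depth))
    for frame, target in enumerate(weight_curve):
        response = 1.0 - lag * depth          # deeper tissue responds more slowly
        state = state + response * (target - state)
        out[frame] = state
    return out

curve = np.concatenate([np.linspace(0, 1, 12), np.linspace(1, 0, 12)])  # a quick smile
per_vertex = deep_shape_response(curve, depth=np.array([0.0, 0.5, 1.0]))
```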
JOE MARKS: OK, so you're not going to be able to do something like that right out of the box with the cloud pipeline. That's when it's flexing all its muscles, to do some of our best work. But we do hope that people will do some pretty amazing stuff with this cloud pipeline.
And to that end, it's not going to be a static pipeline, and it's not going to be just existing tools. We want to look to the future. With a cloud pipeline like that in place, and the business model in place, we'll be able to invest in improving the tools. And one of the areas I think we're on the cusp of is really integrating AI and machine learning into lots of different parts of the pipeline over the coming years.
And it's all of AI, not just machine learning. But the full range of things from computer vision, speech, natural language, intelligent interfaces, knowledge representation, you can read the list there. Here we've got our target list of initial places we think that we will be able to integrate some of these ideas. Some of them we're already working on, and you actually saw them in the previous video. And some are in the planning stages, and will be future work that we'll do ourselves, and with our academic and industrial collaborators.
So let me finish there with a summary. Weta and Autodesk are collaborating to produce a WetaM cloud offering, which will be a complete pipeline for VFX, animation, and games, in the cloud and available for use via subscription. We may bring in additional software partners to flesh out that pipeline, although with just the two companies alone we can create a complete pipeline that is usable, soup to nuts.
And we are going all in on the cloud. This will not be something with a local aspect to it. You will access it in the cloud via remote terminal, and do all the work in the cloud. All the assets will stay in the cloud. And that has a number of advantages in terms of engineering and cost.
We want this to be accessible to everybody. Internally, I picked a target: imagine a few kids getting out of film school in Lagos, Nigeria, who want to set up their own company. Currently that would be very difficult for them to do. So what we want to do with the cloud pipeline is make that available, accessible, and affordable to them, and democratize content creation that way.
So at this point, I will stop for questions. Thank you for listening.