Monday, June 22, 2009

Project Challenge: LG Transformers 2

I'm back with another project challenge post about our latest LG commercial. This time around we had a tie-in with the upcoming film Transformers 2. Here is a link to the commercial on YouTube.

Our main workflow these days is to use Maya/Vray on our commercials. This project, however, came at a time when it was more beneficial for us (for internal scheduling reasons) to dust off our old Max/Vray pipeline and do another commercial with it.

This time around we had a very small team; there were only three of us doing any 3d work. The client provided a model of the phone for us to start from. I reworked some areas of it and cleaned it up so that we would get proper meshsmoothing, and did the same for the laptop model. Another artist modeled the camcorder from scratch while our supervisor modeled the GPS unit using the phone as a base model to start from. After all of the modeling was done, the models were given to me to create materials for. Since the phone was supposed to appear to transform into all of these other objects, everything basically needed to have the same materials applied. Once the materials were set up, the three of us began animating. Since we were such a small team we had to handle all the animation ourselves, which was a little daunting at first. I handled two of the transformation shots and ended up taking over and finishing a third. While this was going on I also set up the light rig that we ended up using for all the shots. The rig was built entirely around an HDR panorama that was shot on location during the shoot. I plugged that HDR into a Vray dome light and used Light Cache and Brute Force for GI.
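For anyone curious what the dome light is actually doing with that panorama: image-based lighting maps each sampled ray direction to a pixel in the equirectangular (lat-long) HDR. Here's a minimal sketch of that lookup in Python; the axis and wrap conventions here are my assumption for illustration, and Vray's internal convention may differ:

```python
import math

def direction_to_latlong_uv(dx, dy, dz):
    """Map a unit direction vector to (u, v) texture coordinates in an
    equirectangular HDR panorama, the way a dome light samples its map.
    Assumes Y is up and -Z is the forward (u = 0.5) direction."""
    # Horizontal angle around the up axis -> u in [0, 1)
    u = 0.5 + math.atan2(dx, -dz) / (2.0 * math.pi)
    # Angle from straight up -> v in [0, 1], 0 at the zenith
    v = math.acos(max(-1.0, min(1.0, dy))) / math.pi
    return u, v
```

The renderer does thousands of these lookups per shading point when integrating the GI, which is also why a noisy or poorly exposed HDR feeds grain straight into your lighting.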

Animating was probably the biggest challenge on this project, although once you got your head wrapped around it, it really wasn't that difficult to do. We first focused on the big moves. We didn't care if it was physically possible for something to move a certain way or if it was floating out in space; we just focused on making some interesting movements with the bigger, more noticeable pieces. After that we went back in and started the real work, which was to connect up all the big pieces so they weren't just floating around, giving them connectors and gizmos that would support their motion. Finally, we did a pass on the animation where we added parts inside the objects to make them appear to have the circuit boards and components that would actually be inside these things, and we used those to help hide bits behind as the transformation happened. So a piece of the phone might slide behind a circuit board while another piece of the camcorder swung out from behind something else. Do this enough times with enough pieces and eventually you have a transforming phone.

The next challenge we had to overcome was that the actor was holding the real phone in his hand in all of our plates, so when we did our animations on top of it you could see the real phone underneath our CG. We already had an artist do some camera and geometry tracking for us, so we had a rough hand model that we were using for shadow catching. I took that model, along with a still image of the actor's empty hand that was shot on set, and mapped the hand photo onto our 3d geometry. I had to do a lot of pulling and pushing of the UVs to get them to line up, since the angles of our camera and the hand photo were fairly different. I rendered out an empty-hand pass that the compositor used to help paint the original phone out of the plate; he would just use little bits and pieces from my render as needed to fill in gaps. We ended up doing this for several shots.

Rendering was simple. These weren't really complex scenes, so they rendered pretty fast. I think we were around an hour a frame for the worst-case close-up shots; everything else was between 10 and 30 minutes. It was a lot of fun to work on this project. Not only did it turn out looking pretty cool, but I got to push myself a bit with the animation, and nobody had to work any long hours.

Monday, June 15, 2009

Project Challenge: LG Advanced Learning

It's been a long time since I've done a Project Challenge post; I haven't done a single one since moving to Digital Domain. So let's go over a spot that I really enjoyed working on...LG Advanced Learning. Here is a link to the commercial. This is the 30 second cut. We did a 60, but I can't find it anywhere online that doesn't require a subscription.

This project was a lot of fun to work on. It was a pretty good challenge, and in the end it went very smoothly; I'm quite pleased with how it all turned out. This was the third project I worked on at DD.

The great thing for me was that a co-worker of mine and I convinced our supervisors to use Max/Vray for this job. Vray is exceptional when it comes to photoreal metallic surfaces. The only problem with this idea was that DD didn't currently have a Max TD around to write tools, and DD's older tools for Max hadn't been updated in a while. There were a few tools that we HAD to have in order for this to work, so right off the start I set to work creating the ones we didn't have, and my co-worker, Chris (who had used the older DD Max pipeline), started testing the existing tools to make sure they were working and stable. Fortunately, they were all in good condition, so it was up to me to script a few needed tools. I ended up writing a tool that would automatically import animation data from Maya, in the form of MDD files, onto objects in Max. We had hundreds of objects, and there was no way this was going to be done by hand. I was very happy to see it get used the first time in production without breaking; in fact, I think we only broke it once during the project. I also rewrote a tool that I originally came up with at Blur, but which needed to be rewritten to plug into DD's pipeline with Vray. It was a quick test-render tool which allowed you to override render settings at the push of a button to do very quick test renders while not actually changing any of your final render settings.

When the project first got rolling we learned that the director had already had another company model the characters, so we were given meshes to start working with. For the most part the models were well done. However, we really wanted these characters to be hyper detailed, so we went back into the models and reworked them A LOT. We added tons of tiny details that you will only see if you're lucky enough to catch the commercial in HD, and maybe not even then unless it's a close-up shot. We went so far as to model little weld points for all the circuitry; every inch of these characters had some fine detail on them. We spent a lot of time in this part of the production perfecting textures and models. This was actually the hardest part of the project. As soon as the characters started getting signed off on, it was pretty smooth sailing to the finish.

When it came time to start rendering shots it was a breeze. The time we spent up front getting the tools ready really paid off; shots came together quickly and we had very few issues on the render farm. The one thing that did present a problem for us was grainy noise. We were throwing everything at Vray on this spot: GI calc'd per frame, glossy reflections, glossy refractions, translucency, depth of field rendered in camera, and motion blur. We also had nuclear-hot lights, which didn't help the grain at all. Then the challenges started. The grain really wasn't all that bad to solve, but it was the most challenging thing about this project. I love working with Chris, though. He and I come from two different schools of thought about rendering with Vray: he is very much in favor of using Light Cache and Irradiance Mapping for GI, where I'm in favor of Light Cache and Brute Force, or in some cases completely Brute Force GI. What we learned was that about 50% of the time his method worked best; we got fast render times and clean GI. The other 50%, when things got more complicated, we found that my approach worked out better. It ultimately came down to what was happening in each shot. With so many variables it was a little tricky at first to figure out what needed to be tweaked in order to get rid of the grain. The best way to figure it out was to look at all the buffers we were saving out. We could do test renders looking only at the GI pass to see if our settings were causing grain there, and tweak the GI settings apart from everything else. We did the same for the reflection and refraction passes to make sure our samples were high enough there. Then finally we'd turn on motion blur and DOF and check again to make sure everything was clean before submitting to the farm again. Naturally the render times went up, but all in all none were very long for what we were doing. Most frames averaged around 30-40 minutes at 1024x576. A few of the close-up shots were rendered at 1920x1080, and those were about 3 1/2 hours per frame for finals.
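Inspecting the buffers one at a time works because grain shows up as high-frequency pixel-to-pixel variation, which you can even put a number on. Here's a toy metric in Python, my own illustration rather than anything we actually scripted on the show:

```python
def grain_score(pass_pixels):
    """Rough grain metric for a single render pass (a 2D grid of
    luminance values): the average absolute difference between
    horizontally adjacent pixels. High-frequency noise from
    undersampled GI or glossies drives the score up, while smooth
    gradients barely register."""
    total, count = 0.0, 0
    for row in pass_pixels:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0
```

Run something like this on the GI, reflection, and refraction buffers for the same frame, and the noisiest pass points straight at which sampling settings need raising, which is essentially what we were doing by eye.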

This was a rare project. Things went so smoothly on the back end that the lighters pretty much finished a week early and were just there to help the comp artists with additional matte passes and misc fixes. We pulled a few late nights in the beginning during the modeling phase, but once we were into lighting it was regular work hours until delivery.

The FX were very cool on this spot. DD hired an artist to come in and code real-time particle FX and render them in OpenGL. I sat next to this guy, and it was very interesting to listen to him click away for hours writing code; then there would be a flash from his monitor, and I'd look over and he'd be testing his particle system. It would just swirl endlessly without looping until he stopped it and went back to coding. In order to get those FX into our scenes with the correct camera motion, he rendered out cards of his particles, and another 3d artist took them into Lightwave, positioned them, and rendered out passes for the compositors. In some cases the compositors took care of the FX placement themselves. The lighters didn't have to do any of this, but we did have to render interactive lighting passes and a few reflection passes.

That's about it.