Friday, July 31, 2009

Project Challenge: Audi "Rubix"

In this project challenge I'm going to cover my experience working on a commercial for Audi. At the time of writing, the commercial had not been released and I don't have a link to it. Hopefully I'll get a link soon after its release. You have to see it to appreciate it, I think.

In this commercial we were asked to do every raytracer's worst nightmare: create a photo-real 9x9x9 glass Rubik's-style cube full of car parts in a photo-real CG environment, then animate the cube rotating around and assembling the car as it spins until the car is completely built. For anyone keeping score... a 9x9x9 grid of boxes means that we had 729 cubes, each with at least one object inside it and most of them holding several. Each individual cube had realistically modeled sides made of glass, which means we were seeing through a minimum of 36 panes, and even more when viewing from an angle. So: 729 cubes made of 6 objects each, plus at least three times as many part objects, put us at 26,244 nodes just for the geometry of the cube and parts. Once you throw in the nodes for the environment and cube rig I can't even come up with that number... but these were some HUGE files to work in. Probably the most complex I've ever worked with.
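For anyone who wants to check the bookkeeping, the verifiable part works out like this. I'm reading the "36 panes" as glass surfaces crossed by a straight-through ray (each side was modeled with real thickness, so a pane has a front and back surface); the final 26,244 node total is the production figure, which I can't fully re-derive from the multipliers given.

```python
# Sanity-checking the cube bookkeeping (final node total is the production figure).
cubes = 9 ** 3                # a 9x9x9 grid of cubes
glass_objects = cubes * 6     # six realistically modeled glass sides per cube
print(cubes, glass_objects)   # 729 4374

# Looking straight through the cube a ray crosses 9 cubes, each with a front
# and back pane, and each pane has two surfaces: 9 * 2 * 2 = 36 minimum.
surfaces = 9 * 2 * 2
print(surfaces)               # 36
```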

When the project began I was one of the first modelers/lighters to start on it. There was a lot of uncertainty about what the environment should be like. It needed to not distract from the cubes, but also look good in its own right. Two other lighters and I modeled up rough environments with temp glass cubes in them every day for a week or two, all similar but also quite different. As things progressed, the environments that kept getting the most response were the very simple plain white rooms. In the beginning these all had white artificial lighting and no natural light sources. We went in this direction for a long time, but when you put a glass cube in an evenly lit white room it disappears. We needed contrast in the scene in order for the glass to look like glass. I was pulled off of the environment for a week or so while the client, director and supervisors all worked out what they thought the environment should be.

Meanwhile I was put on the task of creating the cubes and their glass shaders. At first I went with a totally realistic approach. I modeled all the sides of the glass cubes and stuck them up against each other, put a realistic glass shader on them and hit render. Now, I've never seen an actual 9x9x9 glass Rubik's cube before, so I was a little surprised at first to find that the center of the cube was very dark. It was like the light just couldn't penetrate into the glass. I made the glass totally transparent with Fresnel reflection at 100%, but it still had the same problem. The only way to keep the center of the glass from going dark was to reduce the IOR for reflections and refractions. So I started breaking reality, and it was starting to look better. After I had something kind of working I started adding a slight bump map near the edges of the cubes, because I noticed in the glass reference on my desk that the glass was warped a bit towards the corners where the pieces were fused together.
As soon as I did this it became apparent that, with as many refractions as we needed, you could no longer see through the cube. Parts just became invisible inside all the crazy refractions. Not only that, but the render time suffered an almost 200% increase.
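That dark core stops being surprising once you compound even a tiny per-surface loss over that many panes. A toy calculation (my per-surface transmission number is illustrative, not measured from the actual shaders):

```python
# Why the cube's core goes dark: a small per-surface loss compounds fast.
# The 4% figure is illustrative, not measured from the production shaders.
per_surface_transmission = 0.96   # light surviving each glass surface
surfaces = 36                     # minimum surface count straight through the cube
remaining = per_surface_transmission ** surfaces
print(f"{remaining:.2f}")         # 0.23 -- under a quarter of the light survives
```

Lowering the IOR reduces the Fresnel reflectance at each surface, which pushes that per-surface number toward 1.0, which is exactly why breaking reality brightened the center.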

Eventually the environment was sorted out again, and because I have architecture experience I was taken off the cube glass setup and put back on environments. Another artist took over the glass setup and came up with some nice looking results that totally broke reality, but that didn't matter because it looked good. I worked on variation after variation of the environment nearly right up until it delivered. At one point we finally decided that there wasn't going to be a good way to make the artificial lighting look good. We ripped open some holes in the ceiling and I was told to light the thing naturally. I think that was the best decision we made on the job; it really changed the look of the project and finally it was working.

In the meantime the other lighter who took over the cubes was getting creative with ways to reduce the render times and still get great results. He came up with a great solution, but unfortunately it was very complex. It was still faster than just waiting for frames, though. His solution was to render the parts all by themselves without any glass in the scene. We did this three times: once with an HDR generated from the actual 3d environment I built, a second time with an HDR of a studio lighting setup, and finally a third with only a big area light above the parts. Our compositors would blend these three passes together over the environment background render until the parts looked cool. Then they would render an un-premultiplied pass of the parts back out for the lighters. The lighters took that pass into our cube scenes and projection mapped it from the shot camera back onto the parts. Then we deleted all lighting from the scene, leaving only the glass cubes and projection mapped parts. This was rendered with the HDR of the CG environment for reflections and refractions. We still needed the ability to render mattes for the compositors, so we had to make scenes that had everything black except the parts, which were 100% illuminated white. We also did an extra pass of just the connecting edges of the glass cubes. This pass was used to tint only the fused faces of the glass cubes with a greenish hue.
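The blend step is easy to picture as a per-pixel weighted mix of the three renders. A hypothetical sketch (the `blend_passes` helper and its weights are mine for illustration, not DD's actual comp script):

```python
# Hypothetical sketch of the compositors' blend: the same parts rendered three
# ways (CG-environment HDR, studio HDR, single area light), mixed per shot.
# The helper and its weights are mine, not DD's actual setup.
def blend_passes(env_hdr, studio_hdr, area, weights=(0.5, 0.3, 0.2)):
    """Per-pixel weighted mix of three lighting passes (pixels as flat floats)."""
    return [weights[0] * e + weights[1] * s + weights[2] * a
            for e, s, a in zip(env_hdr, studio_hdr, area)]

# Tiny stand-in "renders": one pixel value per list entry.
out = blend_passes([1.0, 1.0], [0.5, 0.5], [0.2, 0.2])
print(round(out[0], 2))   # 0.69
```

In practice the compositors would eyeball the weights per shot until the parts looked cool, then bake the result back out for the lighters to project.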

Thursday, July 2, 2009

Project Challenge: Fight Club

I recently had the pleasure...wait...was it a pleasure? After I fully recover and get some sleep I'll look back on this and consider it to have been a pleasure to work on ;)

At any rate, DD recently finished a very cool project for the Blu-ray release of the movie "Fight Club." We were hired to create some graphics for the disc's menu screen. This project was FULL of challenges. Our goal was to re-create the scene from the film where Ed Norton's character is walking through his empty apartment while, one by one, pieces of furniture fade into the scene and text pops up over some of the items, making them look like items from a catalog. However, in our version we planned to extend that shot into a full 360 degree panorama, with half of the room being his apartment from the film and the other half the gross dirty kitchen from the house on Paper Street. This also had to be a totally seamless 360 that would loop repeatedly, so as we came back around to the end of the shot we needed to make the rooms blend back together and start empty again. Did I mention that this all needed to be done from start to finish in less than two weeks?

To make this project harder, I wasn't supposed to be on it. However, the CG supervisor on the show was going on vacation and was not going to be here for the last half of the project. DD decided that they needed a CG supervisor on the show, and they asked me to drop what I was doing on a different commercial and take the reins as CG supervisor on this Fight Club project to help finish it. So I got to put on my supervisor cap again for the first time since leaving Blur. I was asked if I would do this about 15 minutes before the original CG sup left for his trip. I got a 15 minute crash course on what was currently done, what still needed to be done and what the expectations were.

We had a small team. Two lighters, two modelers, one compositor (although we did get some help from a few other artists here and there) and myself. The first plan was to get the modeling started. Half of the modeling was done before I came onto the project. The other half continued up until the last minute.

The plan was to render a 900 frame nodal pan around the entire empty living room and kitchen. Then we would create HDRs of the room, map them back onto the walls, and render separate passes for all the different items in the room. After spending a little time thinking about this it became clear that we would run into a problem with this approach. First of all, we would need multiple reflection and shadow passes for every object in the room. But more than that, we were rendering with GI, and as lights turn on and bounce off of objects in the room, the lighting on the rest of the objects should change. Unfortunately, there was no way for us to render a pass that was ONLY the bounced light from the addition of a single prop into the room. So we needed to find a new approach.

The next idea was to just keep adding one more prop to each pass and then dissolve between them in the comp. This was a good approach because we could eliminate all the shadow and reflection passes completely, and we only needed to render about 30 frames of each pass because we would be frequently dissolving to other passes. So this was a good plan: we greatly reduced the number of passes and the complexity in the comp and made life a bit easier for everyone. Then we discovered the next problem... GI flickering and glossy reflection noise.
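The cumulative-pass idea is simple to sketch: pass N contains the room plus props 1 through N, and the comp dissolves from pass N to pass N+1 when prop N+1 should fade on (the prop names here are hypothetical):

```python
# Sketch of the cumulative-pass scheme: each pass adds one more prop, and the
# comp dissolves between consecutive passes. Prop names are hypothetical.
props = ["couch", "lamp", "coffee_table", "yin_yang_table"]

passes = [props[:i + 1] for i in range(len(props))]
for i, contents in enumerate(passes, 1):
    print(f"pass {i}: {contents}")
# Only ~30 frames of each pass are needed, since the comp dissolves away
# to the next pass long before the pan completes.
```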

We tried raising settings, which was helping, but in order to get renders nice and flicker free we would need to bake out the GI. Since this would require over 100 separate GI calcs that would need to be updated every time a prop was moved or changed, it became a major problem to manage. We didn't have the time or the tools in place to help us do that. So it was time again to come up with a new plan.

The key to the solution ended up being that the camera was just a nodal pan AND that nothing in the room moves; items just fade on. Late at night on my first day on the project I started thinking about this problem. The solution to our GI and reflection problem would be to render only single frames and stitch together a panorama. That would greatly reduce the number of frames we had to render. In fact, it would mean we only needed one single frame for each prop that fades on. Then it occurred to me: Vray has a cylindrical camera. We could render one single high res frame of the entire 360 degree room and map it onto a 3d cylinder in Nuke. We could also bake out our camera's motion and feed that into Nuke as well. Then the compositor would just dissolve between our single frames and have complete control over when things blended on. The only problem with this plan was that Vray's cylindrical camera didn't work the way I thought it did, so that idea wasn't going to work. However, Vray's spherical camera could do something close enough: instead of a cylinder we could use a sphere. I rendered out a few lower res spherical panoramas to give to the compositor to try the idea out. It worked. We scrapped all our other methods and moved forward with the spherical pano idea.
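For anyone curious how a spherical render maps back onto geometry: a lat-long panorama addresses pixels by view direction. This is the standard mapping, not necessarily what Vray or Nuke do internally, and the axis conventions here are my assumption:

```python
import math

# Standard lat-long (spherical) panorama addressing: a unit view direction maps
# to (u, v) in [0,1]^2. Axis conventions (-Z forward, +Y up) are my assumption.
def direction_to_uv(x, y, z):
    """Unit view direction -> lat-long UV (u wraps 360 deg, v spans 180 deg)."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v

# Straight ahead lands in the center of the pano:
print(direction_to_uv(0.0, 0.0, -1.0))   # (0.5, 0.5)
```

At a 6000x3000 pano that works out to roughly 16.7 pixels per degree of pan, which is part of why single stills could hold up under a moving camera.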

In order for the final resolution panos to hold up we needed to render at 6000x3000 pixels. This took a while, but it was a safer way to go; renders took a maximum of about 10 hours. Things were kind of interesting in the comp. The tree itself was pretty simple... just a huge lineup of read nodes and dissolves. But because we were rendering panoramic images and only still frames, it made some tasks easier and others harder. For example, it was very easy to roto. You didn't have to animate any beziers because the images didn't move; the camera just pans across them. What wasn't easy was rendering mattes. We rendered a few, but we knew going in that things would be dissolving on and screwing up the matte, so we would either have to render a new matte every time an object blends on, or the compositor would have to work their magic a bit with mattes. Ultimately, roto would be faster.

While lighting was going on we had two and sometimes three prop modelers creating props to fill up the rooms. We were referencing all of these models into the lighting scene so the lighter could keep working while the modelers were updating things. It sounds great in theory, but in all my years I've never seen this work well. The biggest reason it was a problem this time around was that we discovered some bugs in our model publishing tool that were screwing up shaders and geometry in the lighting scene. We also had a lot of trouble with Maya's render layers, which have proven to be a total mess. The biggest issue with them was that they would lose texture assignments. This has happened on several shows now and I'm not thrilled about it. On top of that, we had just started using a tool called "Atomic" that greatly simplifies setting up render layers. It's a lot like Blur's "Render Elements" tool, but in some ways much stronger and in others much weaker. Atomic is still going through stages of being integrated with Vray, so we ran into a few issues there as well. The modelers did a great job. Two of them had never used Vray before and the third had used it only once. They are a talented bunch and quick learners, so they picked it up pretty fast.

Monday, June 22, 2009

Project Challenge: LG Transformers 2

I'm back with another project challenge post about our latest LG commercial. This time around we had a tie in with the upcoming film Transformers 2. Here is a link to the commercial on Youtube.

Our main workflow these days is to use Maya/Vray on our commercials. This project, however, came at a period of time where it would be more beneficial to us (for internal scheduling reasons) to dust off our old Max/Vray pipeline again and do another commercial with it.

This time around we had a very small team; there were only three of us doing any 3d work. We were able to get a model of the phone from the client to start from. I took the model, reworked some areas of it and cleaned it up so that we would get proper meshsmoothing on it. I also did the same for the laptop model. Another artist modeled the camcorder from scratch, while our supervisor modeled the GPS unit using the phone as a base model to start from. After all of the modeling was done, the models were given to me to create materials for. Since the phone was supposed to appear to be transforming into all of these other objects, everything basically needed to have the same materials applied. After we got the materials all set up, the three of us began animating. Since we were a very small team we had to handle all the animation ourselves, which was a little daunting at first. I handled two of the transformation shots and ended up taking over and finishing a third. While this was going on I also set up the light rig that we ended up using for all the shots. The rig was entirely set up using an HDR panorama that was shot on location during the shoot. I plugged that HDR into a Vray dome light and used Light Cache and Brute Force for GI.

Animating was probably the biggest challenge on this project, although once you got your head wrapped around it, it really wasn't that difficult to do. We first focused on the big moves. We didn't care if it was physically possible for something to move a certain way or if it was floating out in space; we just focused on making some interesting movements with the bigger and more noticeable pieces. After that we went back in and started the real work, which was to connect up all the big pieces so they weren't just floating around, and to give them connectors and gizmos that would support their motion. Finally, we did a pass on the animation where we added parts inside the objects to make them appear to have the circuit boards and components that would actually be inside these objects, and we used those to help hide bits as the transformation happened. So a piece of the phone might slide behind a circuit board while another piece of the camcorder swung out from behind something else. You do this enough times with enough pieces and eventually you have a transforming phone.

The next challenge we had to overcome was that the actor was holding the real phone in his hand in all of our plates, so when we did our animations on top of it you could see the real phone in his hand underneath our CG. We already had an artist do some camera and geometry tracking for us, so we had a rough hand model that we were using for shadow catching. I took that model, as well as a still image of the actor's empty hand that was shot on set, and mapped the hand photo onto our 3d geometry. I had to do a lot of pushing and pulling of the UVs to get them to line up, since the angle of our camera and the hand photo were fairly different. I rendered out an empty hand pass that the compositor used to help paint out the original phone from the plate. He would just use little bits and pieces from my render as needed to fill in some gaps. We ended up doing this for several shots.

Rendering was simple. These weren't complex scenes really so they rendered pretty fast. I think we were around 1 hour a frame for the worst case close up shots. Everything else was between 10 and 30 minutes. It was a lot of fun to work on this project. Not only did it turn out looking pretty cool, but I got to push myself a bit with the animation, and nobody had to work any long hours.

Monday, June 15, 2009

Project Challenge: LG Advanced Learning

It's been a long time since I've done a Project Challenge post; I haven't done a single one since moving to Digital Domain. So let's go over a spot that I really enjoyed working on... LG Advanced Learning. Here is a link to the commercial. This is the 30 second cut. We did a 60, but I can't find it anywhere online that you don't have to subscribe to.

This project was a lot of fun to work on. It was a pretty good challenge, and in the end it went very smoothly and I'm quite pleased with how it all turned out. This was the third project I worked on at DD.

The great thing for me was that another co-worker and I convinced our supervisors to use Max/Vray for this job. Vray is exceptional when it comes to photo-real metallic surfaces. The only problem with this idea was that DD didn't currently have a Max TD around to write tools, and DD's older tools for Max hadn't been updated in a while. There were a few tools that we HAD to have in order for this to work, so right off the start I set to work creating the ones we didn't have, and my co-worker, Chris (who had used the older DD Max pipeline), started testing the existing tools to make sure they were working and stable. Fortunately, they were all in good condition, so it was up to me to script a few needed tools. I ended up writing a tool that would automatically import animation data in the form of MDD files from Maya onto objects in Max. We had hundreds of objects and there was no way this was going to be done by hand. I was very happy to see it get used the first time in production without breaking. In fact, I think we only broke it once during the project. I also rewrote a tool that I originally came up with at Blur, but that needed to be rewritten to plug into DD's pipeline with Vray. It was a quick test render tool which allowed you to override render settings at the push of a button and do very quick test renders without actually changing any of your final render settings.
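For the curious, the MDD format that tool consumed is a simple big-endian point cache: frame and point counts, a time per frame, then XYZ floats for every point in every frame. A minimal reader sketch based on my reconstruction of the commonly documented format, not DD's actual tool:

```python
import os
import struct
import tempfile

# Minimal reader for the MDD point-cache format as commonly documented:
# big-endian int32 frame/point counts, one float32 time per frame, then
# points*3 float32 values per frame. A sketch, not DD's production tool.
def read_mdd(path):
    with open(path, "rb") as f:
        frames, points = struct.unpack(">ii", f.read(8))
        times = struct.unpack(f">{frames}f", f.read(4 * frames))
        data = []
        for _ in range(frames):
            flat = struct.unpack(f">{points * 3}f", f.read(12 * points))
            # Regroup the flat floats into one (x, y, z) tuple per point.
            data.append([flat[i:i + 3] for i in range(0, len(flat), 3)])
    return times, data

# Round-trip demo: a hypothetical one-frame, two-point cache.
path = os.path.join(tempfile.gettempdir(), "demo.mdd")
with open(path, "wb") as f:
    f.write(struct.pack(">ii", 1, 2))              # 1 frame, 2 points
    f.write(struct.pack(">1f", 0.0))               # frame time
    f.write(struct.pack(">6f", 1, 2, 3, 4, 5, 6))  # two XYZ points
times, data = read_mdd(path)
print(times, data[0][0])   # (0.0,) (1.0, 2.0, 3.0)
```

With hundreds of objects, a batch importer walking files like this and applying the per-frame positions is the only sane option; doing it by hand was never going to happen.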

When the project first got rolling we learned that the director had already had another company model the characters, so we were given meshes to start working with. For the most part the models were well done. However, we really wanted these characters to be hyper detailed, so we went back into the models and reworked them A LOT. We added tons of tiny details that you will only see if you're lucky enough to catch the commercial in HD, and maybe not even then unless it's a close up shot. We went so far as to model little weld points for all the circuitry. Every inch of these characters had some fine detail on them. We spent a lot of time in this part of the production, perfecting textures and models. This was actually the hardest part of the project. As soon as the characters started getting signed off on, it was pretty smooth sailing to the finish.

When it came time to start rendering shots it was a breeze. The time we spent up front getting the tools ready really paid off. Shots came together quickly and we had very few issues on the render farm. The one thing that did present a problem for us, though, was grainy noise. We were throwing everything at Vray on this spot: GI calc'd per frame, glossy reflections, glossy refractions, translucency, depth of field rendered in camera, and motion blur. We also had nuclear hot lights, which didn't help the grain at all. Then the challenges started. It really wasn't all that bad to solve the grain issues, but it was the most challenging thing about this project. I love working with Chris, though. He and I come from two different schools of thought about rendering with Vray. He is very much in favor of using Light Cache and Irradiance Mapping for GI, where I'm in favor of Light Cache and Brute Force, or in some cases completely Brute Force GI. What we learned was that about 50% of the time his method worked best; we got fast render times and clean GI. The other 50%, when things got more complicated, we found that my approach worked out better. It ultimately came down to what was happening in each shot. With so many variables it was a little tricky at first to figure out what needed to be tweaked in order to get rid of the grain. The best way to figure it out was to look at all the buffers we were saving out. We could do test renders looking only at the GI pass to see if our settings were causing grain there, and tweak the GI settings apart from everything else. We did the same for the reflection and refraction passes to make sure our samples were high enough there. Then finally we'd turn on motion blur and DOF and check again to make sure everything was clean before submitting to the farm again. Naturally the render times went up, but all in all none were very long for what we were doing. Most frames averaged around 30-40 minutes at 1024x576. A few of the close up shots were rendered at 1920x1080, and those were about 3 1/2 hours per frame for finals.

This was a rare project. Things went so smoothly on the back end that the lighters pretty much finished a week early and were just there to help the comp artists with additional matte passes and misc fixes. We pulled a few late nights in the beginning during the modeling phase, but once we were into lighting it was regular work hours until delivery.

The FX were very cool on this spot. DD hired an artist to come in and code real time particle FX and render them in OpenGL. I sat next to this guy and it was very interesting to listen to him click away for hours writing code; then there would be a flash from his monitor and I'd look over and he'd be testing his particle system. It would just swirl endlessly without looping until he stopped it and went back to coding. In order to get those FX into our scenes with the correct camera motion, he rendered out cards of his particles and another 3d artist took them into Lightwave, positioned them, and rendered out passes for the compositors. In some cases the compositors took care of the FX placement themselves. The lighters didn't have to do any of this, but we did have to render interactive lighting passes and a few reflection passes.

That's about it.

Friday, April 3, 2009

Lost your dialog box off screen?

Ever tried to open a program or dialog box only to discover that it has opened somewhere off the edge of your monitor, and you can't grab it to bring it back? I'm not sure exactly how this happens half the time, but most commonly I've seen it happen when you have two monitors and then one of them is taken away, or your screen resolution changes enough that the program opens off screen.

No worries. There is a simple trick to getting it back, and it works with any program under Windows (as far as I'm aware).

Let's say you're working in 3ds Max and you've opened the render dialog box. You know it's open, but it's off screen. Here is what to do...

1. Click the render dialog button again. You want to make sure that the dialog IS for sure open and is the currently active window.
2. Don't touch your mouse... press Alt+Spacebar.
3. At this point you may see a drop down menu open up. If not, it's still open; it's just off screen too. I've seen this happen both ways.
4. If you can see the drop down, use your keyboard arrow keys to highlight "Move" and press Enter. If you can't see the drop down, hit the down arrow once ("Move" is usually the second item from the top of the list) and press Enter.
5. Now you need to lock the window to your mouse. You can do this by hitting the "left" arrow key on your keyboard (at this point you still have not touched your mouse).
6. Now you should be able to move your mouse and the window will be locked to it, so you can move the window back on screen.

A friend and co-worker of mine back at Blur showed me this trick years ago. Thanks Dave!

Tuesday, March 3, 2009

Large Scene Management

I received a request recently to have a post about how to manage large scenes. I thought that was a great idea for a topic so here goes.

Poly Count

Once you force yourself to follow good modeling habits, poly count almost stops being an issue except on rare occasions. The most important thing you can do in a high poly count scene is to instance objects whenever possible. This just makes sense: if you can use the exact same object in other areas of your scene, then you should instance it. Not only will it be easy to update one model and have all the instances update as well, but it saves a huge amount of memory when rendering and also makes your file size a lot smaller. There is a downside to instances, though, when you start to have a lot of them... like around 5000 objects and up. It's great for getting your shot to render and saving the file, but working in the file becomes really slow. You have to make a trade off. If you have some RAM to spare with all your instances, then it will often help to grab a bunch of the simpler objects (for example a bunch of grass blades), break their instances, and attach them into one single mesh. You now have one much larger poly count object which will take up more memory at render time, but you have considerably sped up how interactive Max is while you're working. Always do this with simpler geometry, though; you have to make sure that what you're attaching won't be so huge that you run out of memory again.
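The memory side of that trade-off is easy to ballpark. All the numbers here are illustrative, not measured from any particular scene:

```python
# Back-of-envelope for the instancing trade-off (illustrative numbers only).
blades = 5000            # grass blades in the scene
tris_per_blade = 200
bytes_per_tri = 100      # rough cost of a triangle with normals/UVs
bytes_per_transform = 64 # rough cost of one instance transform

instanced = tris_per_blade * bytes_per_tri + blades * bytes_per_transform
attached = blades * tris_per_blade * bytes_per_tri   # geometry fully duplicated

print(f"instanced: {instanced / 1e6:.1f} MB, attached: {attached / 1e6:.1f} MB")
# instanced: 0.3 MB, attached: 100.0 MB
```

The render-memory gap is enormous, which is why you only break instances on the simple stuff and only when viewport interactivity is actually hurting.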

Another great way to deal with huge scenes is to have separate scene files for the background, midground and foreground, then composite the renders together to create the final seamless shot. I worked on a project once (which unfortunately I still can't talk about and probably won't ever be able to) that involved a huge jungle campsite. This cinematic went for about 3 minutes straight without a single (visible) cut and all took place in first person. The scene for this was so expansive and huge because the main character (you) runs from one end of the camp to the other, so it became nearly impossible to cheat things unless they were really far away. We ended up needing 5 separate environments that all fit together like a puzzle, rendered separately and composited back together. If you gotta do it, then you gotta do it.


Textures

Textures have always been the trickiest to balance. I always want the highest possible res textures up close to camera, and as things go back I'll make them smaller and smaller. But even with a lot of planning this can be difficult, especially when you have a lot of different types of surfaces and maps close to camera. Once a texture is loaded into memory, Max is going to treat them all the same, so you want to feed it the highest quality image you can with the smallest file size. Most of the time for me this is just a simple JPG. As long as you don't have visible compression artifacts, you're fine in most cases. I have also been known to use PNG textures with indexed colors. This keeps the file sizes lower, and if you do it carefully you get almost no visual drop in quality. I stick with 8 bit textures for almost everything, except for objects that need to be displaced. For those maps I use 16 bit. Just so we're all clear about this: you can NOT take an 8 bit texture, convert it to 16 bit in Photoshop and use that. Displacement maps need to be generated at 16 bit or higher. In the end the best thing to do is just be very picky about which textures need to be high res and which don't.
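A quick way to keep yourself honest about texture budgets: once decompressed, a map's RAM cost depends only on resolution, channels and bit depth, never on the JPG's file size on disk. A rough calculator (the 3-channel RGB assumption is mine):

```python
# In-RAM footprint of a texture once the renderer decompresses it.
# Assumes uncompressed storage; channel count defaults to plain RGB.
def texture_mb(width, height, channels=3, bytes_per_channel=1):
    return width * height * channels * bytes_per_channel / (1024 ** 2)

print(f"{texture_mb(4096, 4096):.0f} MB")                       # 48 MB, 8-bit RGB
print(f"{texture_mb(4096, 4096, bytes_per_channel=2):.0f} MB")  # 96 MB, 16-bit RGB
```

Which is also why converting an 8 bit map to 16 bit in Photoshop is pure loss: you double the memory without gaining any of the precision a real 16 bit displacement map carries.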


Displacement

I have used displacement in Vray, Mental Ray and Scanline. Scanline was always predictable, but you aren't able to render millions of polygons since the renderer isn't designed to deal with that. That said, I still use Max's own displacement for some things. The displacement world space modifier is great; you can create very detailed and still fairly optimized meshes with it. It's not usable for deforming meshes, though. I have a long history with Vray, and will admit that I was very let down by Mental Ray's displacement. It's better than using Max's modifiers, but at least back in Max 2008 there were a lot of memory flushing problems. I found that I couldn't get more than a couple of heavily displaced objects in my scene before I was out of RAM. This was a big problem for me since I was creating a big rocky landscape at the time. In the end we had to customize every single shot's displacement settings just to get it to render, and I was never happy with the level of displacement we had to settle for. I haven't used MR in a while, so I'm not sure if it's any better in Max 2009 or 2010. Vray, on the other hand, has never let me down when it comes to displacement. You can load it up all over your scene and it will render. Maybe not fast, but it will finish. Vray handles dynamic memory flushing very well. Vray also has a couple of options for how to calculate displacement, so if one method isn't working for you, you can easily switch to another. I like to use 2d style displacement in Vray because it's very fast and predictable, but it does reach a point where it requires too much memory. In those cases I switch over to 3d displacement, which will still take a texture map to drive it, but the calculations are handled differently and you can use a lot more displacement in your scene at the expense of render time.


Lighting

It's been a while since I've had problems with optimizing lighting. So much of what I do now is HDR lit and doesn't have more than a few extra lights... if any. I'm currently rendering an interior 3d scene for a commercial lit with only an HDR outside the room; no lights at all. But there are still many projects that will require lots of lights. Memory wise it's better to raytrace your shadows, but that is also slower. Shadow maps are great, but you have to be careful balancing them; you can run out of RAM really quick. Never use an omni light unless you have to. Omnis project six shadow maps where a spot light only projects one, so you can save yourself lots of RAM by choosing the right light for the job. With shadow maps there are three parameters that are important to optimize. First is the bias. This one is kind of worthless most of the time; it doesn't really affect memory or render time, but it will affect the look of the shadow. Lower values are best. I usually go with .01 for everything unless an issue comes up. More important are the Size and Sample Range. Size determines the detail of the shadow: the higher the value, the sharper the shadow. The sample range will blur the shadow and soften out any jagged edges. I've seen people balance these two numbers wrong a lot. The size is what is going to use up the most RAM, but the samples are going to cause the bigger render hit. If you want a soft shadow, don't turn your map size way up and then crank the sample range up to blur it out and make it look soft. Instead, keep your samples at the default 4 and adjust your map size until it's as close as you can get to the final result you want. Then slowly turn up the sample range. I have never needed to go above 16 for the sample range, and I've never gone above 4096 for a shadow map size either. Just like with modeling, instance your lights as often as you can.
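The omni-versus-spot point is easy to quantify. Assuming roughly 4 bytes per shadow map sample (an assumption; the real figure depends on the renderer and shadow map format):

```python
# Why light choice matters for shadow-map RAM: an omni projects six maps
# (one per cube face) where a spot projects one. 4 bytes/sample is assumed.
def shadow_map_mb(size, maps=1, bytes_per_sample=4):
    return size * size * maps * bytes_per_sample / (1024 ** 2)

print(f"spot @ 2048: {shadow_map_mb(2048):.0f} MB")          # 16 MB
print(f"omni @ 2048: {shadow_map_mb(2048, maps=6):.0f} MB")  # 96 MB
```

At the 4096 ceiling mentioned above, a single spot map already costs 64 MB under this assumption, so a handful of omnis at that size would eat RAM alarmingly fast.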


Know your renderer inside and out. Set up a simple scene that has a little of everything in it, then start spinning values and really get a feel for what they do. Knowing the one right value to adjust can take a render from a couple hours down to a couple minutes. So get to know what everything is really doing and how it works. With large scenes you're probably going to have longer render times just because there's more in them. Be really smart about your test renders. Find VERY low quality settings that will render extremely fast, even if the render quality is very poor. This is where I start, and remain until almost the end of a project. Every once in a while I'll crank settings up just to make sure something isn't messed up, but keep everything low for as long as possible. Most of the time you really just need to be able to see what is happening in the shot, and all the noise and mess can be dealt with later once you know everything is working. Use region renders as much as possible. Once things are coming together and working, I'll usually select one shot that best represents the sequence and push that one through to make it final. This shot will become the master shot. This way only one shot on the render farm is going to take any time, and everyone else can keep working with fast renders until the look, and which passes will be needed, are all figured out. This way you also learn what the settings are likely going to be for final shots and how long they will take to render. This will help you predict how much time you're going to need to render all the final shots.
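That last prediction step is just multiplication, but it's worth writing out. A small sketch using entirely made-up example numbers (the shot counts, frame counts and node count are hypothetical, not from any real project):

```python
# Predict total wall-clock render time from a finished "master shot":
# once one representative shot renders at final quality, multiply its
# per-frame time across the remaining shots and divide by the farm size.
# All the example figures below are made up for illustration.

def farm_hours(minutes_per_frame, frames_per_shot, num_shots, num_nodes):
    """Wall-clock hours to render every shot across the render farm."""
    total_minutes = minutes_per_frame * frames_per_shot * num_shots
    return total_minutes / num_nodes / 60

# e.g. the master shot rendered at 45 min/frame, shots average 120 frames,
# there are 20 shots in the sequence, and 50 render nodes are available:
print(f"~{farm_hours(45, 120, 20, 50):.0f} hours of farm time")
```

Even a rough estimate like this tells you early whether the final settings you settled on for the master shot are actually affordable for the whole sequence, or whether you need to trade some quality back for time.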

I think that just about does it. :)

Thursday, January 8, 2009

Catching up

Time is flying by. We finished up that previous photo-real project I was mentioning in my last post. It was a commercial for LG brand televisions. The commercial featured a bunch of robots made up of parts from inside the television. They come to life late at night in the living room of a modern looking home and have themselves a little rave party. Then the owners of the house come home and they all scurry back into the television and hide. The designs for the robots were pretty cool and were VERY highly detailed. The commercial featured live action background plates with CG robots. This project was animated in Maya, with all the lighting and rendering done in Max/Vray. It was refreshing to be back in software that I knew very well. The project went very smoothly once all the characters were approved. All the characters were highly reflective and had a large range of material types on them. Everything from chrome and etched metals to glass, plastics, translucent bits and glowing LED lights. I learned a fair amount about eliminating grain in Vray on this project. Render times were fantastic considering the complexity. We were using full 360 HDR light rigs with GI, glossy reflections, glossy refractions, translucency, 3d motion blur and DOF all rendered in camera. The longest frame times were just under 4 hours for shots where the character completely filled the frame. Most shots were 45 minutes or less for final renders. The commercial is being shown overseas, unfortunately. I'm not sure when or if it will be shown here in the US.

I'm currently working on a commercial for Sobe which will be shown during the Super Bowl. This project is pretty intense. We're nearing the end though. I can't give any details about it yet, but it is being animated in Maya and rendered out of Lightwave. I was a little worried jumping back into Lightwave after being out of it on the LG spot for a couple months. Surprisingly, it all came back to me pretty quickly. We're about a week away from delivery and unless there are some last minute "gotchas" we should deliver right on time. Now that I said that I'm sure to get a big list of notes ;) .

There is a pretty good chance that on my next project I'll be modeling the main character. Should be interesting considering I'm more of an environment/lighting guy. I'm learning that Blur's artists are a lot more specialized than here at DD (at least in commercials). A generalist here really does do some of everything.