I Have Seen The Future, And It Is… Slow

Posted on April 10, 2013 by

Deep data happy dance

With today’s formal announcement by ILM & Weta Digital that OpenEXR 2.0 is finally pushed out for mass consumption, we can finally (there’s that word again!) do the VFX version of the Icky Shuffle.  Or whatever Deep data touchdown happy dance you’ve been working on since you saw the Deep data demo with The Foundry back at the Hinge Digital VFX/PDX blowout last fall.

If you were in hibernation at the time and missed it, or still haven't had much exposure to them thar Deep renders – in a nutshell, Deep finally gives us a usable Z channel. Old depth channels have always been a bit of a hack, plagued by nasty per-pixel sampling. Even once you'd dealt with the aliasing and cleaned them up, you still commonly had to split renders into pieces and/or render holdouts to get everything jiving and edges behaving correctly when composited. But things that should work, like Z defocus, would instead wreak havoc and have you walking through a minefield of broken edges, pops, sizzles, bleeps & blunders. Deep data to the rescue! Deep lets you render layered CG uninhibited, in its full, juicy glory, and then lets the Deep Z information take care of your holdouts and which layers sit in front of which – and it does this both correctly and (usually) flawlessly.

Simply put – on a complex film like Avatar, where traditionally you may have had characters running through the forest, you had to render those characters through the trees with holdouts here and holdouts there… and then (zing!) they change a few frames of animation in the character pass – previously you'd have to rerender EVERYTHING because the holdouts changed as well. As of today, those days are in the rear view. Deep compositing solves those issues, and everything now works like it should. You rerender the changed character pass, DeepMerge it with the existing forest renders and you're off to the races. ILM and Weta were all over this because it's the only way they could have finished a film on the scale and scope of Avatar. If they hadn't brought back Colin Doncaster & co. to finally nail down what they'd started back on 'Rings, they'd probably still be working on Avatar a full 3 years after release. No jokin'. The fact that this is finally getting pushed out into the mainstream is pretty darn exciting for everyone outside of Weta and ILM.
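For the Nuke-inclined, here's roughly what that boils down to as a quick Python sketch – the file paths and node layout are hypothetical, but DeepRead, DeepMerge and DeepToImage are the actual nodes you'd reach for:

```python
import nuke

# The forest renders stay exactly as they are...
forest = nuke.nodes.DeepRead(file="/renders/forest/forest_deep.%04d.exr")

# ...only the changed character pass gets rerendered and swapped in.
character = nuke.nodes.DeepRead(file="/renders/char/char_deep_v02.%04d.exr")

# DeepMerge sorts out the layering from the per-sample depth data,
# so no holdout renders are needed.
merge = nuke.nodes.DeepMerge()
merge.setInput(0, forest)
merge.setInput(1, character)

# Flatten back down to a normal 2D image for the rest of the comp.
flat = nuke.nodes.DeepToImage()
flat.setInput(0, merge)
```

The forest renders never get touched again; swap in the new character pass and the per-sample depth sorts out who's in front of whom.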

What does this mean to us groundlings?  First of all, by no means will the words "instant gratification" come anywhere near this post.  This release means things can finally be standardized and the different workflows across software will start to come in line, given another round of point releases or two.  Deep data has been available for a while, and RenderMan + Nuke paved the way, but there were still some inconsistencies as other software caught up to what Weta and ILM were pioneering.

Renderers will now formalize support, some faster than others. (Houdini's) Mantra, Arnold, and V-Ray have all had support to some extent already, but look across the way at Mental Ray and it seems to be lagging far behind – according to the guys at Hinge Digital, Deep data doesn't appear to be a blip on the MR radar yet. At some point in the near future, all will come around to rendering EXR 2.0 rather than dtex or whatever format was being rendered before.

Nuke is the first and only compositing app out of the gate to have Deep technology, and rightly so, having developed the tools directly with Weta and ILM.  Eyeon Fusion will probably get this in there and I bet After Effects will also come around eventually, most likely with this being added to the ProEXR toolset for immediate use with plugins hot for the technology, and eventually by the stock Adobe Z tools themselves.

In Nuke, beyond the initial batch of Deep nodes that were released in v6, you'll see many nodes and tools start to become Deep compatible – for example, you'll soon see a "DeepKeymix" and nodes like that start to appear as these things pop up in production. Even the current set of Deep nodes will change, as Dr. Peter Hillman & co. out at Weta continually push things forward. Perfecting the Deep workflow seems to be not only a necessity for the coming films, but something elevated to almost "personal mission" status. With the Hobbit and Avatar sequels looming, this is more than justified! At some point it will make sense to have ALL nodes be Deep aware in Nuke and for Deep data to be tossed around as easily as a Z channel is now, but that is a ways off and you'll see this duality exist for a while (Keymix vs DeepKeymix, etc.).

Just like the baby in the “deep” end of the Nirvana Nevermind album cover, Z Channels are all grown up now.

As far as the Deep workflow goes – I love it, but I hate it. On your first shot with it, you're immediately hit with the "wow, that's amazing" new-car scent as you plug in that first DeepMerge and everything clicks. But the luster soon wears off when you realize the huge amount of additional processing overhead and network traffic associated with Deep renders. The images may be sweet, but you take the slow boat getting there! It'll bring your system to its knees quickly, and your compositing momentum will start to resemble that banana slug you almost stepped on out on your front porch this morning. You might as well install a coffee machine at your desk, you'll be taking so many breaks.

Case in point: on many shots for Man of Steel, I had volumetric cloudbox renders that were up in the 500MB–800MB-per-frame territory. This is not a tax bracket you want to be in. Ultimately, whether you gravitate towards a DeepMerge style of comping or flip it and go with DeepHoldouts, you're going to want to use the Deep renders to generate your layering, then precomp them out and get them outta the stream as fast as possible, so you can return to "normal" RGBA interactivity and creative flow. Comps are supposed to be quick – you lighters can keep your excruciatingly slow little render tile windows, thank ya very much.
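In script terms, the "precomp it out" approach is just: do the deep merge once, flatten it, and bake an RGBA precomp to disk so those monster frames stay off the network until the layering actually changes. A rough sketch, with hypothetical paths:

```python
import nuke

# Heavyweight deep renders – only touched when the layering changes.
clouds = nuke.nodes.DeepRead(file="/renders/clouds/clouds_deep.%04d.exr")
hero = nuke.nodes.DeepRead(file="/renders/hero/hero_deep.%04d.exr")

merged = nuke.nodes.DeepMerge()
merged.setInput(0, clouds)
merged.setInput(1, hero)

# Flatten the deep samples down to ordinary RGBA.
flat = nuke.nodes.DeepToImage()
flat.setInput(0, merged)

# Bake the result to a precomp; the rest of the comp reads this instead
# of dragging the 500-800MB deep frames around.
precomp = nuke.nodes.Write(file="/precomps/clouds_hero_precomp.%04d.exr",
                           file_type="exr")
precomp.setInput(0, flat)
```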

The hitch becomes nodes like DeepDefocus (currently unreleased, but you can use the Bokeh plugin from Peregrine) and others that are applied further down the tree – for those, you'll get used to dialing values in and then getting them (again) out of your script, and disabling them with the $gui expression. All in all, the workflow takes some getting used to, but it's a small price to pay for the flexibility and power of a Z channel that actually works. And things can only get faster & better from here as they experiment with new ways of downsampling the sample accuracy and compressing the renders.
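The $gui trick, for anyone who hasn't leaned on it yet: put an expression on the heavy node's disable knob so it's off while you're working interactively, but kicks back in on the farm, where $gui evaluates to 0. Something like this (the node name is made up):

```python
import nuke

# Disable the heavy defocus node in the GUI only; $gui is 1 in an
# interactive session and 0 in a command-line/farm render.
heavy = nuke.toNode("Bokeh1")  # hypothetical node name
heavy["disable"].setExpression("$gui")
```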

The Foundry Creative Specialist Deke Kincaid put out a great collection of links a while back to help get everyone up to speed on all things Deep. Digg 'em:

Original deep shadow paper: http://graphics.pixar.com/library/DeepShadows/paper.pdf

Other must-reads:
Houdini docs on it:
PRMan docs on it:

Videos on deep image compositing:
A basic intro:
Johannes Saam's Vimeo channel on the deep image tools he wrote for Nuke, long before we had a toolset for doing this inside Nuke:
Rise of the Planet of the Apes Nuke video:
From Prometheus:
