{"id":815,"date":"2013-04-10T21:03:42","date_gmt":"2013-04-10T21:03:42","guid":{"rendered":"http:\/\/www.vfxpdx.com\/?p=815"},"modified":"2013-07-06T21:43:44","modified_gmt":"2013-07-06T21:43:44","slug":"i-have-seen-the-future-and-it-is-slow","status":"publish","type":"post","link":"http:\/\/www.vfxpdx.com\/?p=815","title":{"rendered":"I Have Seen The Future, And It Is&#8230;   Slow"},"content":{"rendered":"<div id=\"attachment_1140\" style=\"width: 300px\" class=\"wp-caption alignright\"><a href=\"https:\/\/i0.wp.com\/www.thesprocketship.com\/vfxpdx\/wp-content\/uploads\/2013\/04\/caddyshack-gopher-dancing.jpg\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1140\" data-attachment-id=\"1140\" data-permalink=\"http:\/\/www.vfxpdx.com\/?attachment_id=1140\" data-orig-file=\"https:\/\/i0.wp.com\/www.vfxpdx.com\/wp-content\/uploads\/2013\/04\/caddyshack-gopher-dancing.jpg?fit=576%2C324\" data-orig-size=\"576,324\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}\" data-image-title=\"caddyshack-gopher-dancing\" data-image-description=\"\" data-image-caption=\"&lt;p&gt;Deep data happy dance&lt;\/p&gt;\n\" data-large-file=\"https:\/\/i0.wp.com\/www.vfxpdx.com\/wp-content\/uploads\/2013\/04\/caddyshack-gopher-dancing.jpg?fit=576%2C324\" class=\" wp-image-1140   \" alt=\"Deep data happy dance\" src=\"https:\/\/i0.wp.com\/www.thesprocketship.com\/vfxpdx\/wp-content\/uploads\/2013\/04\/caddyshack-gopher-dancing.jpg?resize=290%2C163\" width=\"290\" height=\"163\" srcset=\"https:\/\/i0.wp.com\/www.vfxpdx.com\/wp-content\/uploads\/2013\/04\/caddyshack-gopher-dancing.jpg?w=576 576w, 
https:\/\/i0.wp.com\/www.vfxpdx.com\/wp-content\/uploads\/2013\/04\/caddyshack-gopher-dancing.jpg?resize=300%2C168 300w\" sizes=\"auto, (max-width: 290px) 100vw, 290px\" \/><\/a><p id=\"caption-attachment-1140\" class=\"wp-caption-text\">Deep data happy dance<\/p><\/div>\n<p>With today&#8217;s <a href=\"http:\/\/www.openexr.com\/#announ\" target=\"_blank\">formal announcement<\/a> by ILM &amp; Weta Digital that OpenEXR 2.0 has finally been pushed out for mass consumption, we can finally (there&#8217;s that word again!) do the VFX version of the Icky Shuffle. \u00a0Or whatever Deep data touchdown happy dance you&#8217;ve been working on since you saw the <a href=\"http:\/\/www.vfxpdx.com\/?p=297\" target=\"_blank\">Deep data demo with The Foundry back at the Hinge Digital VFX\/PDX blowout<\/a> last fall.<\/p>\n<p>If you were in hibernation at the time and missed it, or still haven&#8217;t had much exposure to them thar Deep renders &#8211; in a nutshell, Deep finally gives us a usable Z channel. \u00a0Old depth channels have always been a bit of a hack, plagued by nasty per-pixel sampling. \u00a0Even once anti-aliased and cleaned up, you still commonly had to split renders into pieces and\/or render holdouts to get everything jiving and edges that behaved correctly when composited. \u00a0But things that should work, like Z defocus, would instead wreak havoc and have you walking through a minefield of broken edges, pops, sizzles, bleeps &amp; blunders. \u00a0Deep data to the rescue! \u00a0Deep allows you to render layered CG uninhibited, in its full, juicy glory, and then let the Deep Z information take care of your holdouts and which layers sit in front of which, and it <em>does this both correctly and (usually) flawlessly<\/em>. 
Simply put &#8211; on a complex film like Avatar, where traditionally you may have had characters running through the forest, you had to render those characters with holdouts here and holdouts there&#8230; \u00a0and then (zing!) they change a few frames of animation in the character pass &#8212; previously you&#8217;d have to rerender EVERYTHING, because the holdouts changed as well. \u00a0As of today, those days are in the rear view. \u00a0Deep compositing solves those issues, and everything now works like it should. \u00a0You rerender the changed character pass, DeepMerge it with the existing forest renders and you&#8217;re off to the races. ILM and Weta were all over this because <em>it&#8217;s the only way they could have finished a film on the scale and scope of Avatar.<\/em> \u00a0If they hadn&#8217;t brought back Colin Doncaster &amp; co. to finally nail down what they&#8217;d started back on &#8216;Rings, they&#8217;d probably still be working on Avatar here a full 3 years after release. \u00a0No jokin&#8217;. \u00a0The fact that this is finally getting pushed out into the mainstream is pretty darn exciting for everyone outside of Weta and ILM.<\/p>\n<p>What does this mean to us groundlings? \u00a0First of all, by no means will the words &#8220;instant gratification&#8221; come anywhere near this post. \u00a0This release means things can finally be standardized, and the different workflows across software will start to come in line, given another round of point releases or two. \u00a0Deep data has been available for a while, and RenderMan + Nuke paved the way, but there were still some inconsistencies as other software caught up to what Weta and ILM were pioneering.<\/p>\n<p>Renderers will now formalize support, some faster than others. 
\u00a0(Houdini&#8217;s) Mantra, Arnold, and VRay have all had support to some extent already, but take a look across the way at Mental Ray and they seem to be lagging far behind &#8211; according to the guys at Hinge Digital, Deep data doesn&#8217;t appear to be even a blip on the MR radar yet. \u00a0At some point in the near future, all will come around to rendering EXR 2.0 rather than dtex or whatever format was being rendered before.<\/p>\n<p>Nuke is the first and only compositing app out of the gate with Deep technology, and rightly so, having developed the tools directly with Weta and ILM. \u00a0Eyeon Fusion will probably get it in there, and I bet After Effects will also come around eventually &#8211; most likely with this being added to the ProEXR toolset for immediate use with plugins hot for the technology, and eventually by the stock Adobe Z tools themselves.<\/p>\n<p>In Nuke, beyond the initial batch of Deep nodes that were released in v6, you&#8217;ll see many nodes and tools start to become Deep compatible &#8211; for example, you&#8217;ll soon see a &#8220;DeepKeymix&#8221; and nodes like that start to appear as these needs pop up in production. \u00a0Even the current set of Deep nodes will change, as Dr. Peter Hillman &amp; co. out at Weta continually push things forward. \u00a0They seem to treat perfecting the Deep workflow not just as a necessity for the coming films &#8211; it&#8217;s been elevated to almost &#8220;personal mission&#8221; status. \u00a0With the Hobbit and Avatar sequels looming, this is more than justified! 
\u00a0At some point it will make sense to have ALL nodes be Deep aware in Nuke and for it to be tossed around as easily as a Z channel is now, but that is a ways off and you&#8217;ll see this duality exist for a while (Keymix vs DeepKeymix, etc).<\/p>\n<div id=\"attachment_1141\" style=\"width: 416px\" class=\"wp-caption alignright\"><a href=\"https:\/\/i0.wp.com\/www.thesprocketship.com\/vfxpdx\/wp-content\/uploads\/2013\/04\/nirvana_nevermind_adult.jpg\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1141\" data-attachment-id=\"1141\" data-permalink=\"http:\/\/www.vfxpdx.com\/?attachment_id=1141\" data-orig-file=\"https:\/\/i0.wp.com\/www.vfxpdx.com\/wp-content\/uploads\/2013\/04\/nirvana_nevermind_adult.jpg?fit=725%2C365\" data-orig-size=\"725,365\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}\" data-image-title=\"nirvana_nevermind_adult\" data-image-description=\"\" data-image-caption=\"&lt;p&gt;Just like the baby in the &amp;#8220;deep&amp;#8221; end of the Nirvana Nevermind album cover, Z Channels are all grown up now&lt;\/p&gt;\n\" data-large-file=\"https:\/\/i0.wp.com\/www.vfxpdx.com\/wp-content\/uploads\/2013\/04\/nirvana_nevermind_adult.jpg?fit=725%2C365\" class=\" wp-image-1141  \" alt=\"nirvana_nevermind_adult\" src=\"https:\/\/i0.wp.com\/www.thesprocketship.com\/vfxpdx\/wp-content\/uploads\/2013\/04\/nirvana_nevermind_adult.jpg?resize=406%2C204\" width=\"406\" height=\"204\" srcset=\"https:\/\/i0.wp.com\/www.vfxpdx.com\/wp-content\/uploads\/2013\/04\/nirvana_nevermind_adult.jpg?w=725 725w, 
https:\/\/i0.wp.com\/www.vfxpdx.com\/wp-content\/uploads\/2013\/04\/nirvana_nevermind_adult.jpg?resize=300%2C151 300w\" sizes=\"auto, (max-width: 406px) 100vw, 406px\" \/><\/a><p id=\"caption-attachment-1141\" class=\"wp-caption-text\">Just like the baby in the &#8220;deep&#8221; end of the Nirvana Nevermind album cover, Z Channels are all grown up now.<\/p><\/div>\n<p>As far as the Deep workflow goes &#8211; I love it, but I hate it. \u00a0On your first shot with it, you&#8217;re immediately hit with the &#8220;wow, that&#8217;s amazing&#8221; new car scent as you plug in that first DeepMerge and everything clicks. \u00a0But the luster soon wears off when you realize the huge amount of additional processing overhead and network traffic associated with Deep renders. \u00a0They may be sweet images, but you take the slow boat getting there! \u00a0It&#8217;ll bring your system to its knees quickly, and your compositing momentum will start to resemble that banana slug you almost stepped on out on your front porch this morning. \u00a0You might as well install a coffee machine at your desk, you&#8217;ll be taking so many breaks.<\/p>\n<p>Case in point: \u00a0on many shots for\u00a0<em>Man of Steel,<\/em> I had volumetric cloudbox renders that were up in the territory of 500MB&#8211;800MB\u00a0<em>per frame<\/em>. \u00a0This is not a tax bracket you want to be in. \u00a0Ultimately, whether you eventually gravitate towards a DeepMerge style of comping or flip it and go with DeepHoldouts, you&#8217;re going to want to use the Deep renders to generate your layering, then precomp them out and get them outta the stream as fast as possible, so you can return to &#8220;normal&#8221; RGBA interactivity and creative flow. 
\u00a0Comps are supposed to be quick &#8211; you lighters can keep your excruciatingly slow little render tile windows, thank ya very much.<\/p>\n<p>The hitch comes with nodes like DeepDefocus (currently unreleased, but you can use the <a href=\"http:\/\/peregrinelabs.com\/bokeh\/\" target=\"_blank\">Bokeh<\/a> plugin from <a href=\"http:\/\/peregrinelabs.com\/\" target=\"_blank\">Peregrine<\/a>) and others that are applied further down the tree &#8211; for those, you&#8217;ll get used to dialing values in and then getting them (again) out of your script, or disabling them with the $gui expression. \u00a0All in all, the workflow takes some getting used to, but it&#8217;s a small price to pay for the flexibility and power of a Z channel that actually works. \u00a0And things can only get faster &amp; better from here as they experiment with new ways of downsampling the accuracy of the renders and compressing them.<\/p>\n<p>The Foundry Creative Specialist Deke Kincaid put out a great collection of links a while back to help get everyone up to speed on all things Deep. 
\u00a0Digg &#8217;em:<\/p>\n<p><span style=\"line-height: 1.5;\">original deep shadow paper:<\/span><a style=\"line-height: 1.5;\" href=\"http:\/\/graphics.pixar.com\/library\/DeepShadows\/paper.pdf\" target=\"_blank\">http:\/\/graphics.pixar.com\/library\/DeepShadows\/paper.pdf<\/a><\/p>\n<div>\n<div><\/div>\n<div>other must-reads:<\/div>\n<div><a href=\"http:\/\/www.deepimg.com\/\" target=\"_blank\">http:\/\/www.deepimg.com\/<\/a><\/div>\n<div><a href=\"http:\/\/www.johannessaam.com\/deepImage.pdf\" target=\"_blank\">http:\/\/www.johannessaam.com\/deepImage.pdf<\/a><\/div>\n<div><a href=\"http:\/\/www.graphics.stanford.edu\/papers\/deepshadows\/\" target=\"_blank\">http:\/\/www.graphics.stanford.edu\/papers\/deepshadows\/<\/a><\/div>\n<div><a href=\"https:\/\/code.google.com\/p\/dif\/\" target=\"_blank\">https:\/\/code.google.com\/p\/dif\/<\/a><\/div>\n<div><\/div>\n<div>Houdini docs on it:<\/div>\n<div><a href=\"http:\/\/www.sidefx.com\/docs\/houdini12.1\/rendering\/deepshadowmaps\" target=\"_blank\">http:\/\/www.sidefx.com\/docs\/houdini12.1\/rendering\/deepshadowmaps<\/a><\/div>\n<div><\/div>\n<div>PRMan docs on it:<\/div>\n<div><a href=\"https:\/\/renderman.pixar.com\/forum\/docs\/RPS_17\/index.php?url=deepCompositing.php\" target=\"_blank\">https:\/\/renderman.pixar.com\/forum\/docs\/RPS_17\/index.php?url=deepCompositing.php<\/a><\/div>\n<div><\/div>\n<div><span style=\"text-decoration: underline;\">videos on deep image compositing<\/span><\/div>\n<div><\/div>\n<div>basic intro:<\/div>\n<div><a href=\"http:\/\/www.fxguide.com\/fxguidetv\/fxguidetv_095\/\" target=\"_blank\">http:\/\/www.fxguide.com\/fxguidetv\/fxguidetv_095\/<\/a><\/div>\n<div><\/div>\n<div>Johannes Saam\u2019s Vimeo channel on deep image tools he wrote for Nuke, long before we had a native toolset for this inside Nuke:<\/div>\n<div><a href=\"https:\/\/vimeo.com\/user3574023\" 
target=\"_blank\">https:\/\/vimeo.com\/user3574023<\/a><\/div>\n<div><\/div>\n<div>Rise of the Planet of the Apes Nuke video:<\/div>\n<div><a href=\"https:\/\/vimeo.com\/37310443\" target=\"_blank\">https:\/\/vimeo.com\/37310443<\/a><\/div>\n<div><\/div>\n<div>from Prometheus:<\/div>\n<div>\n<div><a href=\"http:\/\/www.fxguide.com\/fxguidetv\/fxguidetv-149-prometheus-visual-breakdown\/\" target=\"_blank\">http:\/\/www.fxguide.com\/fxguidetv\/fxguidetv-149-prometheus-visual-breakdown\/<\/a><\/div>\n<div><a href=\"http:\/\/www.fxguide.com\/featured\/prometheus-rebuilding-hallowed-vfx-space\/\" target=\"_blank\">http:\/\/www.fxguide.com\/featured\/prometheus-rebuilding-hallowed-vfx-space\/<\/a><\/div>\n<div><a href=\"http:\/\/www.fxguide.com\/quicktakes\/prometheus-in-depth-coverage-on-fxguide\/\" target=\"_blank\">http:\/\/www.fxguide.com\/quicktakes\/prometheus-in-depth-coverage-on-fxguide\/<\/a><\/div>\n<div><\/div>\n<div>also additional Deep stuff here from Abraham Lincoln: Vampire Hunter, which Weta did:<\/div>\n<div><a href=\"http:\/\/www.fxguide.com\/featured\/vampire-hunter-two-killer-sequences\/\" target=\"_blank\">http:\/\/www.fxguide.com\/featured\/vampire-hunter-two-killer-sequences\/<\/a><\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>With today&#8217;s formal announcement by ILM &amp; Weta Digital that OpenEXR 2.0 is finally pushed out for mass consumption, we can finally (there&#8217;s that word again!) do the VFX version of the Icky Shuffle. 
\u00a0Or whatever Deep data touchdown happy dance you&#8217;ve been working on since you saw the Deep data demo with The Foundry&hellip; <a class=\"more\" href=\"http:\/\/www.vfxpdx.com\/?p=815\">Continue reading &rarr;<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_links_to":"","_links_to_target":""},"categories":[5],"tags":[44,7,8],"class_list":["post-815","post","type-post","status-publish","format-standard","hentry","category-blog","tag-deep-compositing-workflow","tag-nuke","tag-pixel-nerd-stuff"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p2Cfrz-d9","_links":{"self":[{"href":"http:\/\/www.vfxpdx.com\/index.php?rest_route=\/wp\/v2\/posts\/815","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.vfxpdx.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.vfxpdx.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.vfxpdx.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.vfxpdx.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=815"}],"version-history":[{"count":16,"href":"http:\/\/www.vfxpdx.com\/index.php?rest_route=\/wp\/v2\/posts\/815\/revisions"}],"predecessor-version":[{"id":1144,"href":"http:\/\/www.vfxpdx
.com\/index.php?rest_route=\/wp\/v2\/posts\/815\/revisions\/1144"}],"wp:attachment":[{"href":"http:\/\/www.vfxpdx.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=815"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.vfxpdx.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=815"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.vfxpdx.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=815"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}