
NAB 2014 Report



Hello VFX/PDX…

I’m fresh off a trip to Vegas for NAB 2014, and thought I’d share some of the more interesting things I saw, especially as it relates to VFX, Color Grading & Finishing. (NAB is the National Association of Broadcasters, and their annual convention is like SIGGRAPH, but for the entire industry, not just VFX.)

For those who don’t know me, I’m a Smoke VFX artist/colorist and general technologist at a post facility in Portland. The bulk of our work is TV-commercial based, so that’s the “lens” I was looking through as I walked around the show.

(NOTE: Any opinions are my own, and are not endorsed by any of my clients, my employer, or any of the vendors mentioned. Also – feel free to comment on anything I may have gotten wrong – it’s possible I missed some details or misunderstood something…)


This was absolutely the year of 4K. Unless you’ve been under a rock (or sequestered in a darkened room with a workstation), you’ve probably been bombarded with press releases and industry news about 4K. 4K is the catch-all term for the next generation of high resolution imagery – with four times the pixel count of 2k/1080p HDTV. In broadcast, it’s also called UHDTV – Ultra High Definition TV.

(Note: technically, UHDTV and 4K are slightly different formats, like 2k and HDTV are slightly different – but everyone is referring to it as “4k” since it’s sexier and easier to say than UHDTV. You may also hear 2160p.)


I am shocked at how quickly and thoroughly the production and post industries have embraced 4k. There are still quite a few hurdles to overcome before it goes mainstream – but almost every single camera at the show was 4k (or greater). And almost every single edit platform is now 4k ready – Premiere, FCP-X, Sony Vegas – as are finishing systems like Smoke/Flame (Scratch & Resolve have been for a while), and Avid says (or doesn’t actually say out loud, but whispers in your ear) they’ll have something by the end of the year.

Personally, I still think it will be a niche product to actually finish and deliver in 4k for quite a while – it will be like the 2007 days, where some jobs had an HD finish and some were SD – but just a few short years later, everything we do now at our shop is in Full HD 1080p. My guess is that 4k will take quite a while to catch on with the public, but it’s only a matter of time until all TVs are 4k – just like all smart phones now have high-resolution “retina” screens.

[Photos: Sony’s 4K booth at NAB 2014; another NAB 2014 booth; Panasonic’s “4K World” slide]

(This last “4K World” slide was a cheat – it’s actually from CES – but it’s interesting that 4k is being pushed so hard at the consumer show as well.)

The big question is how does it get to the viewer? Netflix is now streaming House of Cards in 4k – but very few TVs are ready for it, and most home internet connections could not handle the bandwidth – which is about double what you need for HD (not 4x), due to better codecs. H.265, or HEVC, is the new standard for streaming 4k video.

This model is probably what will lead the adoption of UHDTV – the internet and streaming providers can move much more quickly than the broadcast industry, which would have to spend billions to upgrade their pipelines.

YouTube has allowed 4k uploads for a while, and they even have a 4k “Channel” – but again, sufficient bandwidth and H.265-capable TVs are still quite rare.

Blackmagic showed 2 new 4k cameras in the $6000 range, and AJA had the big surprise of the show by releasing a camera of their own, the CION – very similar to the Alexa, but in 4k, and at only $9000, compared to $60k for an Alexa (which is not even a 4k camera – hard to believe we once thought that looked good, eh? Wasn’t that what you were thinking when you saw all of those Oscar movies shot on an Alexa? “12 Years a Slave was ok, but I really think it needs another K or two.”)


While 4K definitely feels (to me) like a solution in search of a problem, I don’t think it’s a fad like 3DTV. I think it will be more like 5.1 audio – sure, we can do it if we want to, but for many projects it’s really not necessary – stereo is fine. It might turn out that 4k is the same way: high profile spots and cinema releases – sure, finish in 4k. But for most work? Probably not necessary – at least in the near future. It will probably continue to be broadcast/streamed in 1080, and the 4k TVs will “uprez” it to 4k – and if their screen is over 75″, maybe they’ll see a difference. Now, if only the cable companies would give the stream enough bandwidth to look decent, it might actually be worthwhile… But I digress.

The real crazy part about the new UHDTV standard is not even the number of pixels – that’s relatively simple to deal with – but the plans to transition to higher frame rates, higher contrast ratios, and higher bit depths for more colors (10 & 12 bit). That’s when multi-format delivery will be quite a challenge… The new color space (BT.2020 is the new Rec.709) is actually even broader than the Digital Cinema space. The good news is that there will never be another interlaced format to deal with ever again. The new “spec” calls for frame sizes up to 8k and frame rates up to 120p, but there are no “i” formats in the new standards at all. (Yeah! From a broadcast finisher’s perspective!)
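That “broader than Digital Cinema” claim is easy to sanity-check. Here’s a quick Python sketch – my own back-of-the-envelope math, not anything from the standards documents – comparing the areas of the Rec.709, DCI-P3 (Digital Cinema), and BT.2020 primary triangles in CIE xy chromaticity space:

    # Gamut size, approximated as the area of each format's RGB primary
    # triangle in CIE 1931 xy chromaticity space (shoelace formula).
    # Primary coordinates are the published spec values.
    REC709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # R, G, B
    DCI_P3 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]
    BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

    def gamut_area(prims):
        """Area of the triangle formed by three (x, y) chromaticity points."""
        (x1, y1), (x2, y2), (x3, y3) = prims
        return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

    for name, prims in (("Rec.709", REC709), ("DCI-P3", DCI_P3), ("BT.2020", BT2020)):
        ratio = gamut_area(prims) / gamut_area(REC709)
        print(f"{name}: area {gamut_area(prims):.4f} ({ratio:.2f}x Rec.709)")

Run that and BT.2020 comes out at roughly 1.9x the area of Rec.709, and comfortably bigger than P3 – which is exactly why displays (and colorists) have some catching up to do.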



Adobe Creative Cloud has really taken off. There was literally no one showing anything to do with FCP 7, and probably 50% of the edit systems on display at vendors all across the show were Premiere and CC. There were still a fair number of Avids, a few FCP-Xs, and a few PC-only platforms (Vegas/Edius) – but Premiere was by far the most common. It certainly helps that it’s on both Mac and Windows, and that it doesn’t cost $1500 to start using it – just $50 per month for most users – and that’s for every app Adobe makes: Photoshop, After Effects, Illustrator, etc.

Adobe is also pushing a new tool for collaborative workflows called Adobe Anywhere. Imagine editing from a remote location, over wifi, not with lo-rez proxies, but with full resolution media. That’s what Adobe Anywhere promises. The idea is that the facility runs a very beefy central server, with multiple GPU cards and fast RAID storage or SAN (Storage Area Network).


All of the edit systems become remote clients to the central server – and it’s the server that does all of the processing, not the workstation. The workstation just sends commands to the server for how to build the edit, and the server builds it and renders an h264 stream on-the-fly to your edit system’s viewer window. The better your bandwidth, the better that stream looks – so on a decent internet connection or a LAN, it looks great. And on low bandwidth wifi – well, it scales to look as good as it can – but since you’re always using the full rez media, you can pause on a frame and it will instantly update to full rez, to better judge the quality of a shot. And the “Anywhere” server handles all of the project files & permissions among multiple editors, etc.

It’s still pretty limited in many ways – you can’t link After Effects projects in the timeline (which is one of the best parts about Premiere), and exporting files or XMLs is a real chore – but it’s an exciting development for sure. It’s also not cheap: for a group of 10 editors, I was told to expect to spend about $80,000 in server hardware (not including storage), and each user account is $1000/year. So – it’s certainly not for everyone, but possibly a glimpse into a new world of remote collaboration.

HP had a pretty big presence, showing they are still committed to the big box workstation and all of the power and flexibility that comes with it. Every single machine at the Avid booth was running on HP hardware – no Macs at all. Some vendors had Premiere systems running the exact same hardware as Flames (the z820), and claimed it far outperforms the new Macs. Those workstations are not cheap – but with most apps now being cross-platform (other than FCP-X and Smoke on Mac), it’s nice to know you can still build a powerhouse system if you need to.


The new Mac Pro (trash can/cylinder model) was also pretty prominently featured at many booths – including the DaVinci Resolve booth, which used to run its hero demo on a beefy Linux box, but this year was on the Mac Pro. It’s pretty awesome, and will only get better once more software can really take advantage of the power in that little tube and its dual graphics cards.



ProRes is pretty clearly being adopted as the de facto standard delivery format for the broadcast industry. More and more systems (PC & Linux) can now create legit ProRes files. And while many of the new cameras are embracing “raw” shooting modes, many of them can now shoot directly into ProRes format. Funny to think that FCP has diminished in stature, but ProRes is flying higher than ever. Thank goodness those unintended gamma shifts are very rare these days…



I Have Seen The Future, And It Is… Slow



Deep data happy dance

With today’s formal announcement by ILM & Weta Digital that OpenEXR 2.0 is finally pushed out for mass consumption, we can finally (there’s that word again!) do the VFX version of the Icky Shuffle.  Or whatever Deep data touchdown happy dance you’ve been working on since you saw the Deep data demo with The Foundry back at the Hinge Digital VFX/PDX blowout last fall.

If you were in hibernation at the time and missed it, or still haven’t had much exposure to them thar Deep renders – in a nutshell, Deep finally gives us a usable Z channel. Old depth channels have always been a bit of a hack, plagued by nasty per-pixel sampling. Even once anti-aliased and cleaned up, you still commonly had to split renders into pieces and/or render holdouts to get everything jiving, with edges that behaved correctly when composited. But things that should work, like Z defocus, would instead wreak havoc and have you walking through a minefield of broken edges, pops, sizzles, bleeps & blunders. Deep data to the rescue! Deep allows you to render layered CG uninhibited, in its full, juicy glory, and then let the Deep Z information take care of your holdouts and which layers go in front of which – and it does this both correctly and (usually) flawlessly.

Simply put: on a complex film like Avatar, where traditionally you may have had characters running through the forest, you had to render those characters against the trees with holdouts here and holdouts there… and then (zing!) they change a few frames of animation in the character pass – previously you’d have to rerender EVERYTHING, because the holdouts changed as well. As of today, those days are in the rear view. Deep compositing solves those issues, and everything now works like it should. You rerender the changed character pass, DeepMerge it with the existing forest renders, and you’re off to the races. ILM and Weta were all over this because it’s the only way they could have finished a film on the scale and scope of Avatar. If they hadn’t brought back Colin Doncaster & co. to finally nail down what they’d started back on ‘Rings, they’d probably still be working on Avatar here a full 3 years after release. No jokin’. The fact that this is finally getting pushed out into the mainstream is pretty darn exciting for everyone outside of Weta and ILM.
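To make that concrete, here’s a minimal sketch of the fix-up in Nuke’s Python API. The file paths are hypothetical, but DeepRead, DeepMerge, and DeepToImage are the stock Deep nodes:

    import nuke

    # The untouched deep forest render, and the revised character pass.
    # (Hypothetical paths -- any deep EXR renders would do.)
    forest = nuke.nodes.DeepRead(file="/shots/jungle/forest_deep.%04d.exr")
    hero = nuke.nodes.DeepRead(file="/shots/jungle/hero_v002_deep.%04d.exr")

    # DeepMerge interleaves the per-pixel depth samples from both streams,
    # so each render holds out the other automatically -- no holdout passes.
    merged = nuke.nodes.DeepMerge(inputs=[forest, hero])

    # Flatten back down to a regular 2D image for the rest of the comp.
    flat = nuke.nodes.DeepToImage(inputs=[merged])

Swap in the revised character pass, and nothing else upstream of the DeepMerge needs to rerender.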

What does this mean to us groundlings?  First of all, by no means will the words “instant gratification” come anywhere near this post.  This release means things can finally be standardized and the different workflows across software will start to come in line, given another round of point releases or two.  Deep data has been available for a while, and Renderman + Nuke paved the way, but there were still some inconsistencies as other software caught up to what Weta and ILM were pioneering.

Renderers will now formalize support, some faster than others. (Houdini’s) Mantra, Arnold, and VRay have all had support to some extent already – but look across the way at Mental Ray and they seem to be lagging far behind; (according to the guys at Hinge Digital) Deep data doesn’t appear to be even a blip on the MR radar yet. At some point in the near future, all will come around to rendering EXR 2.0 rather than dtex or whatever format was being rendered before.

Nuke is the first and only compositing app out of the gate to have Deep technology, and rightly so, having developed the tools directly with Weta and ILM.  Eyeon Fusion will probably get this in there and I bet After Effects will also come around eventually, most likely with this being added to the ProEXR toolset for immediate use with plugins hot for the technology, and eventually by the stock Adobe Z tools themselves.

In Nuke, other than the initial batch of Deep nodes that were released in v6, you’ll see many nodes and tools start to become Deep compatible – for example, you’ll soon see a “DeepKeymix” and nodes like that start to appear as these things pop up in production. Even the current set of Deep nodes will change, as Dr. Peter Hillman & co. out at Weta continually push things forward. They seem to have made the perfection of the Deep workflow not only a necessity for the coming films – it’s been elevated to almost “personal mission” status. With The Hobbit and Avatar sequels looming, this is more than justified! At some point it will make sense to have ALL nodes be Deep aware in Nuke, and for Deep to be tossed around as easily as a Z channel is now, but that is a ways off and you’ll see this duality exist for a while (Keymix vs DeepKeymix, etc).


Just like the baby in the “deep” end of the Nirvana Nevermind album cover, Z Channels are all grown up now.

As far as the Deep workflow goes – I love it, but I hate it. On your first shot with it, you’re immediately hit with that “wow, that’s amazing” new car scent as you plug in that first DeepMerge and everything clicks. But the luster soon wears off when you realize the huge amount of additional processing overhead and network traffic associated with Deep renders. It may be sweet images, but you take the slow boat getting there! It’ll bring your system to its knees quickly, and your compositing momentum will start to resemble that banana slug you almost stepped on out on your front porch this morning. You might as well install a coffee machine at your desk, you’ll be taking so many breaks.

Case in point: on many shots for Man of Steel, I had volumetric cloudbox renders that were up in the 500MB–800MB per frame territory. This is not a tax bracket you want to be in. Ultimately, whether you eventually gravitate towards a DeepMerge style of comping or flip it and go with DeepHoldouts, you’re going to want to use the Deep renders to generate your layering, then precomp them out and get them outta the stream as fast as possible, so you can return to “normal” RGBA interactivity and creative flow – something like the sketch below. Comps are supposed to be quick – you lighters can keep your excruciatingly slow little render tile windows, thank ya very much.
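Here’s a hedged sketch of that bake-it-out move, again in Nuke Python – the node name, paths, and frame range are all hypothetical:

    import nuke

    deep = nuke.toNode("DeepMerge1")  # the assembled deep layering

    # Flatten the deep stream and bake it to disk once...
    flat = nuke.nodes.DeepToImage(inputs=[deep])
    bake = nuke.nodes.Write(inputs=[flat], file="/shots/sup010/precomp/clouds.%04d.exr")
    nuke.execute(bake, 1001, 1096)

    # ...then point the rest of the comp at the lightweight RGBA precomp,
    # and the heavy deep files drop out of the interactive stream entirely.
    cached = nuke.nodes.Read(file="/shots/sup010/precomp/clouds.%04d.exr")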

The hitch is nodes like DeepDefocus (currently unreleased, but you can use the Bokeh plugin from Peregrine) and others that are applied further down the tree – and for those, you’ll get used to dialing values in and then getting them (again) out of your script – and disabling them with the $gui expression. All in all, the workflow takes some getting used to, but it’s a small price to pay for the flexibility and power of a Z channel that actually works. And things can only get faster & better from here as they experiment with new levels of downsampling the accuracy and compressing the renders.
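For anyone who hasn’t used the $gui trick: the expression evaluates to 1 in an interactive Nuke session and 0 in a command-line render, so wiring it into a heavy node’s disable knob keeps the node dormant while you work but live on the farm. A two-line sketch (“Bokeh1” is just a hypothetical node name):

    import nuke

    # Disabled in the GUI (fast to work with), enabled on a farm/terminal render.
    nuke.toNode("Bokeh1")["disable"].setExpression("$gui")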

The Foundry Creative Specialist Deke Kincaid put out a great collection of links a while back to help get everyone up to speed on all things Deep. Digg ’em:

original deep shadow paper: http://graphics.pixar.com/library/DeepShadows/paper.pdf

other must-reads:
houdini docs on it:
prman docs on it:
videos on deep image compositing:
basic intro one:
Johannes Saam’s Vimeo channel on deep image tools he wrote for Nuke, long before we had a toolset for doing this inside Nuke:
Rise of the Planet of the Apes Nuke video:
from Prometheus:

The Foundry Releases Assist for NukeX



With the release of NukeX 7.0v6, The Foundry is including two copies of its new Assist product, a stripped down version of Nuke that only “includes tools for the tasks of roto, paint, and tracking.”

This is a value-added move to try to make the pricing hit of a NukeX license a bit easier to swallow for smaller shops.  Historically, companies like Eyeon offered limited versions of their software (in that case, “Rotation” to complement Fusion) with the hopes of unseating Flame and the Flame assistant’s license of Flint/Flare/Combustion/Silhouette/AE in commercial-heavy pipelines.  On a base level, it makes a lot of sense to parcel these out when even boutique VFX shops have departmentalized paint/roto aside from compositing.  Why have a bazooka like NukeX aimed at a molehill?  And perhaps Diet Nuke/Nuke Lite/Nuke Dime/Nuke Nuked (I could go on…) is a good way to boost the amount of firepower you can throw at a shot, and give the powers that be one less excuse not to pony up some extra NukeX coin.

Offering Assist has immediate value for the company pocketbook when it comes to frame-by-frame type work, but from the artist standpoint there’s not much to know or get excited about here.  Assist is highly crippled and quickly deteriorates for higher-level tasks; as is, there will probably be a juggling act associated with using it in production.  SplineWarp was not included in the toolset, nor were any 3D tools for geometry-assisted paint work, which is to be expected – but that’s the bread-and-butter area of most higher-level artists.  In fact, not even the Grade node was included – which, as you can imagine, makes it hard to grab a clone source from another frame or do any sort of relighting on your paint work.  I can’t think of the last paint shot I had that didn’t have a Grade node.  Assist can open any Nuke script, and unsupported nodes will render but be outlined in red with their controls grayed out.  Write nodes are disabled in Assist.

For this to have real value outside of a press release, The Foundry might want to rethink the scope of what its definition of “paint” in particular includes – but it’s worth noting that this wasn’t beta tested widely and should be considered a v1.0 release.  The Foundry may decide to change what’s offered in the toolset based on initial reaction.  In my opinion, they also have a couple of line items out of whack as far as what’s offered in NukeX vs. regular Nuke, like GPU-accelerated rendering.  But hopefully these things will iron out given more time to digest.  Ahhh, whatever… whaddya gonna do… it’s “free.”

For more info, catch the press release here.

Nodes included in this initial Assist toolset:

Did someone say Assist? Dame can help with that.


Image: Checkerboard, ColorBars, ColorWheel, Constant, Read, Viewer

Draw: Radial, Ramp, Rectangle, Roto, RotoPaint

Time: FrameBlend, FrameHold, FrameRange, TimeEcho

Channel: Add, Copy, ChannelMerge, Remove, Shuffle, ShuffleCopy

Color: Invert, OCIO CDLTransform, OCIO Colorspace, OCIO Display, OCIO FileTransform, OCIO LogConvert

Merge: AddMix, Dissolve, KeyMix, Merge, Premult, Switch, Unpremult

Transform: Crop, CornerPin, PlanarTracker, Reformat, Tracker, Transform, TransformMasked

Views: JoinViews, OneView, ShuffleView, Split and Join, Stereo Anaglyph, Stereo MixViews, Stereo ReConverge, Stereo SideBySide

Metadata: AddTimeCode, CompareMetadata, CopyMetadata, ModifyMetadata

Other: Backdrop, Dot, Group, Input, Output, PostageStamp, StickyNote
