Tag Archives: pixel nerd stuff

It’s Not My Bag, Baby! Errr, Yes It Is




Not even VFX will fix those teeth…

Courtney (VEMG/DFV Dept Coordinator over @ the Art Institute) & I were recently talking about reviving the old “VFX Team” class at Ai.  Strange name, but the focus of the class is on-set VFX supervision.  I’m excited about it – it would definitely be a boost to get everyone shooting more and solving problems with brainpower & careful planning on the front end, rather than muscling things out on the back end.  Probably goes without saying, but if you don’t nail down the shoot you’re going to have a long, messy ride the rest of the way.  This type of production exposure would be valuable in this brief window where, as a student, one might have a little control over one’s own post-production destiny.

A well designed shot can make the rest of the process a breeze.  Or at least…  breezier.

I should probably tag this with the disclaimer that I’m still a “Padawan” when it comes to supervising vfx for a shoot.  I was lucky to get an early taste back in my commercial NYC days (this is 10 some odd years ago now), but not so much since taking positions at larger studios – even ones that had production work happening alongside VFX.  Oh ya know, the occasional element shoot or student project comes along, and those are good chances to buy a new toy or 2, shake off the rust, test theories, and keep skills sharp.  But I still have lots to learn and feel like I have plenty of “book smarts” that need to be converted to “street smarts.”  All along the way I’ve been hitting up the more senior on-set folks who were around and nice enough to share wisdom, and constantly building my kit.  I think this class would be a nice excuse to bust things out and fully explore the fundamental concepts.  Now that we’re talking about it, I’m starting to get the itch!


On set w/Hinge Digital

All the years spent slugging it out in the trenches have proven valuable;  for better or worse, they’ve given me a dose of…   well, to put it nicely – production “challenges.” Challenges which, as any Compositor worth their salt does, give me a chance to reverse engineer the fix and see how the bullets could’ve been dodged in the first place…   along the way keeping a mental file cabinet of all this shrapnel to avoid.

The on-set skills will definitely expand your mind and get you thinking critically.  Dissecting.  And once you graduate, I think you’ll find that although you might not be able to apply a lot of it instantly or directly, it’s just good to know the process, speak the language, and keep a catalogue in the back of your mind while you work the daily grind – to not only (for example) be able to pull a good key, but to take the time to understand why a poorly shot key is blowing up on you, and what could have been done to avoid it.  I’ve not thought much about it or put it to the test, but I have a hunch many of the best on-set supes out there are former Compositors, with scars to prove it.  🙂


I Have Seen The Future, And It Is… Slow




Deep data happy dance

With today’s formal announcement by ILM & Weta Digital that OpenEXR 2.0 has finally been pushed out for mass consumption, we can finally (there’s that word again!) do the VFX version of the Icky Shuffle.  Or whatever Deep data touchdown happy dance you’ve been working on since you saw the Deep data demo with The Foundry back at the Hinge Digital VFX/PDX blowout last fall.

If you were in hibernation at the time and missed it, or still haven’t had much exposure to them thar Deep renders – in a nutshell, Deep finally gives us a usable Z channel.  Old depth channels have always been a bit of a hack, plagued by nasty per-pixel sampling.  Even once anti-aliased and cleaned up, you still commonly had to split renders into pieces and/or render holdouts to get everything jiving, with edges that behaved correctly when composited.  And things that should just work, like Z defocus, would instead wreak havoc and have you walking through a minefield of broken edges, pops, sizzles, bleeps & blunders.  Deep data to the rescue!  Deep lets you render layered CG uninhibited, in its full, juicy glory, and then let the Deep Z information take care of your holdouts and which layers sit in front of which – and it does this both correctly and (usually) flawlessly.

Simply put: on a complex film like Avatar, you traditionally may have had characters running through a forest, and had to render them with holdouts here and holdouts there…  and then (zing!) a few frames of animation change in the character pass — previously you’d have to rerender EVERYTHING, because the holdouts changed as well.  As of today, those days are in the rear view.  Deep compositing solves those issues, and everything now works like it should.  You rerender the changed character pass, DeepMerge it with the existing forest renders, and you’re off to the races.  ILM and Weta were all over this because it’s the only way they could have finished a film of the scale and scope of Avatar.  If they hadn’t brought back Colin Doncaster & co. to finally nail down what they’d started back on ‘Rings, they’d probably still be working on Avatar here a full 3 years after release.  No jokin’.  The fact that this is finally getting pushed out into the mainstream is pretty darn exciting for everyone outside of Weta and ILM.
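For the node-graph minded, here’s what that scenario looks like in practice – a minimal Nuke Python sketch, not gospel.  The file paths are made up for illustration, and it assumes a Deep-capable build of Nuke (7.x or later):

```python
import nuke

# Hypothetical deep EXR renders -- the paths are purely for illustration
forest = nuke.nodes.DeepRead(file='/job/renders/forest_deep.%04d.exr')
hero_v2 = nuke.nodes.DeepRead(file='/job/renders/hero_rerender_deep.%04d.exr')

# DeepMerge sorts the samples by depth, so the forest holds out the character
# (and vice versa) automatically -- no holdout passes, no mass rerender
combined = nuke.nodes.DeepMerge(inputs=[forest, hero_v2])

# Flatten back down to a regular RGBA image for the rest of the comp
flat = nuke.nodes.DeepToImage(inputs=[combined])
```

Swap in the rerendered character pass and the layering just updates; nothing upstream has to be touched.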

What does this mean to us groundlings?  First of all, by no means will the words “instant gratification” come anywhere near this post.  This release means things can finally be standardized, and the different workflows across software will start to fall in line, given another round or two of point releases.  Deep data has been available for a while, and Renderman + Nuke paved the way, but there were still some inconsistencies as other software caught up to what Weta and ILM were pioneering.

Renderers will now formalize support, some faster than others.  (Houdini’s) Mantra, Arnold, and VRay have all had support to some extent already, but look across the way at Mental Ray and they seem to be lagging far behind – according to the guys at Hinge Digital, Deep data doesn’t appear to be a blip on the MR radar yet.  At some point in the near future, all of them will come around to rendering EXR 2.0 rather than dtex or whatever format they were rendering before.

Nuke is the first and only compositing app out of the gate with Deep technology, and rightly so, since The Foundry developed the tools directly with Weta and ILM.  Eyeon Fusion will probably get this in there, and I bet After Effects will also come around eventually – most likely with support first added to the ProEXR toolset for immediate use by plugins hot for the technology, and eventually by the stock Adobe Z tools themselves.

In Nuke, beyond the initial batch of Deep nodes released in v6, you’ll see many more nodes and tools start to become Deep compatible – for example, you’ll soon see a “DeepKeymix” and nodes like that start to appear as these things come up in production.  Even the current set of Deep nodes will change, as Dr. Peter Hillman & co. out at Weta continually push things forward.  They seem to have elevated perfecting the Deep workflow from a necessity for the coming films to almost “personal mission” status.  With the Hobbit and Avatar sequels looming, this is more than justified!  At some point it will make sense for ALL nodes in Nuke to be Deep aware, and for Deep to be tossed around as easily as a Z channel is now, but that is a ways off and you’ll see this duality exist for a while (Keymix vs DeepKeymix, etc).


Just like the baby in the “deep” end of the Nirvana Nevermind album cover, Z Channels are all grown up now.

As far as the Deep workflow goes – I love it, but I hate it.  On your first shot with it, you’re immediately hit with that “wow, that’s amazing” new car scent as you plug in the first DeepMerge and everything clicks.  But the luster soon wears off when you realize the huge amount of additional processing overhead and network traffic associated with Deep renders.  They may be sweet images, but you take the slow boat getting there!  It’ll bring your system to its knees quickly, and your compositing momentum will start to resemble that banana slug you almost stepped on out on your front porch this morning.  You might as well install a coffee machine at your desk, you’ll be taking so many breaks.

Case in point:  on many shots for Man of Steel, I had volumetric cloudbox renders that were up in the 500MB-800MB per frame territory.  This is not a tax bracket you want to be in.  Ultimately, whether you eventually gravitate towards a DeepMerge style of comping or flip it and go with DeepHoldouts, you’re going to want to use the Deep renders to generate your layering, then precomp them out and get them outta the stream as fast as possible, so you can return to “normal” RGBA interactivity and creative flow.  Comps are supposed to be quick – you lighters can keep your excruciatingly slow little render tile windows, thank ya very much.
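To make that concrete, here’s roughly how I’d bake the deep layering out to a precomp and get back to lightweight RGBA – again just a Nuke Python sketch, with node names and paths that are purely hypothetical:

```python
import nuke

# Flatten the deep branch ('DeepMerge1' is a stand-in for your own layering node)
flat = nuke.nodes.DeepToImage(inputs=[nuke.toNode('DeepMerge1')])

# Bake it to disk as a normal scanline EXR precomp
nuke.nodes.Write(inputs=[flat],
                 file='/job/precomp/cloud_layering.%04d.exr',
                 file_type='exr')

# Once that's rendered, the rest of the comp hangs off a lightweight Read
# instead of the multi-hundred-megabyte deep monsters upstream
plate = nuke.nodes.Read(file='/job/precomp/cloud_layering.%04d.exr')
```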

The hitch becomes nodes like DeepDefocus (currently unreleased, but you can use the Bokeh plugin from Peregrine) and others that are applied further down the tree – for those, you’ll get used to dialing values in and then getting them (again) out of your script by disabling them with the $gui expression.  All in all, the workflow takes some getting used to, but it’s a small price to pay for the flexibility and power of a Z channel that actually works.  And things can only get faster & better from here as they experiment with new ways of downsampling the accuracy and compressing the renders.
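For reference, that $gui trick is just an expression on the disable knob – a one-liner sketch, where the node name is hypothetical:

```python
import nuke

# 'Bokeh1' stands in for whatever heavy defocus/deep-driven node sits downstream.
# $gui evaluates to 1 in an interactive session and 0 in a command-line render,
# so the node stays off while you work and kicks back in when the farm renders.
nuke.toNode('Bokeh1')['disable'].setExpression('$gui')
```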

The Foundry Creative Specialist Deke Kincaid put out a great collection of links awhile back to help get everyone up to speed on all things Deep.  Digg ’em:

original deep shadow paper: http://graphics.pixar.com/library/DeepShadows/paper.pdf

other must reads:
houdini docs on it:
prman docs on it:

videos on deep image compositing:
basic intro one:
Johannes Saam’s Vimeo channel on the deep image tools he wrote for Nuke, long before we had a toolset for doing this inside Nuke:
Rise of the Planet of the Apes Nuke video:
from Prometheus:

The Foundry Releases Assist for NukeX

0

Posted on by

With the release of NukeX 7.0v6, The Foundry is including two copies of its new Assist product, a stripped down version of Nuke that only “includes tools for the tasks of roto, paint, and tracking.”

This is a value-added move to try to make the pricing hit of a NukeX license a bit easier to swallow for smaller shops.  Historically, companies like Eyeon offered limited versions of their software (in that case, “Rotation” to complement Fusion) with hopes of unseating Flame and the Flame assistant’s license of Flint/Flare/Combustion/Silhouette/AE in commercial-heavy pipelines.  On a base level, it makes a lot of sense to parcel these out when even boutique VFX shops have departmentalized paint/roto apart from compositing.  Why have a bazooka like NukeX aimed at a molehill?  And perhaps Diet Nuke/Nuke Lite/Nuke Dime/Nuke Nuked (I could go on…) is a good way to boost the amount of firepower you can throw at a shot, and give the powers that be one less excuse not to pony up some extra NukeX coin.

Offering Assist has immediate value for the company pocketbook when it comes to frame-by-frame type work, but from the artist standpoint there’s not much to know or get excited about here.  Assist is highly crippled and quickly falls apart for higher level tasks, and as-is, there will probably be a juggling act associated with using it in production.  SplineWarp was not included in the toolset, nor were any 3D tools for geometry-assisted paint work – which is to be expected, but that’s the bread and butter area of most higher level artists.  In fact, not even the Grade node was included – which, as you can imagine, makes it hard to grab a clone source from another frame or do any sort of relighting on your paint work.  I can’t think of the last paint shot I had that didn’t have a Grade node.  Assist can open any Nuke script; unsupported nodes will render but be outlined in red with their controls grayed out.  Write nodes are disabled in Assist.

For this to have real value outside of a press release, The Foundry might want to rethink the scope of what its definition of paint, in particular, includes – but it’s worth noting that this wasn’t beta tested widely and should be considered a v1.0 release.  The Foundry may decide to change what’s offered in the toolset based on initial reaction.  In my opinion, they also have a couple of line items out of whack as far as what’s offered in NukeX vs. regular Nuke, like GPU accelerated rendering.  But hopefully these things will iron out given more time to digest.  Ahhh, whatever…   whaddya gonna do…     it’s “free.”

For more info, catch the press release here.

Nodes included in this initial Assist toolset:


Did someone say Assist?  Dame can help with that.

Image: Checkerboard, ColorBars, ColorWheel, Constant, Read, Viewer

Draw: Radial, Ramp, Rectangle, Roto, RotoPaint

Time: FrameBlend, FrameHold, FrameRange, TimeEcho, TimeOffset

Channel: Add, Copy, ChannelMerge, Remove, Shuffle, ShuffleCopy

Color: Invert, OCIO CDLTransform, OCIO Colorspace, OCIO Display, OCIO FileTransform, OCIO LogConvert

Keyer: Keyer

Merge: AddMix, Dissolve, KeyMix, Merge, Premult, Switch, Unpremult

Transform: Crop, CornerPin, PlanarTracker, Reformat, Tracker, Transform, TransformMasked

Views: JoinViews, OneView, ShuffleView, Split and Join, Stereo Anaglyph, Stereo MixViews, Stereo ReConverge, Stereo SideBySide

Metadata: AddTimeCode, CompareMetadata, CopyMetadata, ModifyMetadata, ViewMetadata

Other: Backdrop, Dot, Group, Input, Output, PostageStamp, StickyNote

One Step Closer to Robo Roto

0

Posted on by

Adobe dropped a hint at the latest addition to After Effects’ roto tool suite as they continue their quest to automate one of the most tedious, labor intensive tasks in the VFX biz.  Have a look at the ghosts of After Effects Roto past, present & future:

 


Great, so Adobe Refine Edge is the buzzword to watch for in the forthcoming release of AE (date TBD).  Chris Meyer has been testing it and gives his honest opinion here.

A little deja-vu feeling here – I’ve historically held a grudge against AE for its masking system (or attempt at one) – and why shouldn’t I?  I was forced into using it on many occasions back in the day…  and AE’s roto tools leave a lot to be desired.  I want those days (or more appropriately, long nights) of my life back.

I give them points for being first in, and seeing that v1.0 masking tool in the video is hilarious – but honestly, the next gen VFX tools evolved what a masking system can & should be, and AE still hasn’t even remotely caught up.  Baby steps forward like the rotobezier, gradient edges, Mocha integration and shape layers are just band-aids over a bad fundamental architecture – and they still haven’t touched the giant gap in workflow that exists between AE and what Commotion had going 8 years ago, or the current industry standard roto tool – Silhouette.  For a program that prides itself on a quick, highly refined animation workflow, AE’s masking tools & dinosauric system in CS6 are completely counter-intuitive and clunky.  Sure, you can use them, but why on earth would you want to?  Anyone who’s done the Pepsi challenge knows that they’re only good for rudimentary clean up and garbage masking, and that for anything that crosses the line into what could be termed “real roto” you need to switch over to Silhouette and save yourself a huge percentage of man-hours compared to slugging it out in AE – which, in addition to the time wasted, turns out to give less accurate results in the end.

Necessary overhaul aside – at the same time – the band-aids do stop a bit of bleeding, and it’s hard to argue that Adobe’s not dropping some coin actively developing these automated ideas.  And although I sigh and groan and say “here we go again…”, in their defense, up to this point the Adobe team are the only ones going after the holy grail.  The other compositing apps and plug-in masters have been, regrettably, afraid to touch anything like this.  But you can’t deny it – anyone who’s ever dug inside the Global Estimation tools in Furnace or the exposed OFlow hooks in Nuke, or gotten an inner/outer key in AE or the trimap based Powermatte in Silhouette working – or heck, even brushed a quick edge matte using the Extract tool in Photoshop – anyone who’s familiar with the voodoo inherent in these tools knows that we should be able to pair these wallflowers up and get ’em to dance.  There have been some pretty impressive white papers at Siggraph the last few years that attest to the possibilities.

And yes, I’ll go on record and say the automated tools in AE are a genuine arrow for the quiver, worth a quick test here and there to see what they give back.  I’m not going to say I haven’t seen the Roto Brush work once in a blue moon.  More often than not it will get you 80% of the way there, and you’ll have to come in with some spot roto and fixes to complete the job.  If learned properly and used carefully, it can give good results.  The Refine Edge idea seems to be equally as effective, based on initial reactions.

But rather than trying to win the race to Robo Roto, it’d be nice to see Adobe take a step back and refine (or, some would say, downright “fix”) the manual tools and interface within AE.  Use those precious development cycles towards – at least parallel – development of a masking system that screams.  One that would get used in everyday professional studio production, instead of chasing another bullet point on a press release trying to sell to a wider audience of farm club Joe Videos who like things shiny, blurry, quick & dirty, rather than the perfection that puts shots down in the major leagues.


image: robertocampus.com

I’ll always take a fix to a system that’s not working or buggy over a new feature, and it seems like Adobe could use a reality check.  If it ain’t broke then don’t try to fix it…  but if it’s broke, then Holy Crow(!) don’t let it sit broken for 10 years while you tease us with automated ways to not have to do it.   Last I checked, we still have to do it.
