It’s 2012 — though I’m still writing 1997 on all my checks — and it seems like everybody and his sister has a DSLR workflow. Well, here’s mine, inspired by a recent DSLR-shot short film I finished and a lot of twiddling and tweaking.
Here’s the list of ingredients, so you can go shopping and follow along at home:
- Magic Bullet Grinder 1.5 (free trial available from Red Giant)
- Episode Pro (optional; free trial available from Telestream)
- Media Composer 5.5.3 or 6.0.1 (free trials available from Avid)
- DaVinci Resolve Lite 8.2 (free from Blackmagic Design)
- Pro Import AE 5.0 (free from Automatic Duck)
- After Effects 5.5 (free trial available from Adobe)
- Final Cut Pro 7 (optional; no longer available at any price)
The basic workflow, though, should be pretty easily adapted to other tools. You’ll just need to play with it.
I’m gonna break this down into sections and go into some detail. Feel free to skim. I know I would. But lemme start with kind of a 10,000-foot view of the whole process, just so you know where we’re heading.

Overview
The basic idea here is to pretend we’re talking about film. You remember film? Film workflows are both ancient and venerable, because my God, they have to be. Film’s expensive to deal with, and a feature will tend to generate miles and miles of it, so staying organized and being consistent isn’t really optional. Rather than being all rogue about things, we’re just going to pretend we understand film workflows very well, and take the central ideas of film post and adapt them for our purposes.
That means we’re doing an offline-online workflow, for starters. For those of you who might not know the jargon, an offline-online workflow is one in which you do a quick-and-dirty — and above all, cheap — transfer of all your raw footage to a highly compressed format that your computer can throw around really easily, do all your creative editing on that compressed footage, then output some kind of machine-readable timeline file that goes through a process called conforming. Conforming is where you take the shots you chose in your offline edit — and only those shots — and create a new timeline with them that matches your offline frame-for-frame. It’s this new timeline, with all the high-resolution media in it, that you use to do your visual effects (if any) and color correction.
Now why, you might be asking yourself, would you ever choose to do such a bizarre thing as that? Why not just work exclusively with your high-resolution media from the get-go, so you can be done as soon as your edit is finished?
Well, there are a couple reasons why you wouldn’t want to do that. First, film transfers at full resolution are expensive, so you want to transfer just the frames you absolutely need and not any others. Second, film transfers at full resolution generate a lot of data — anywhere from six to twelve megabytes per frame — and even a beefy computer struggles to move that data around at anything like real time. So doing a quick and cheap transfer of everything to a highly compressed format means you can mind your nickels and dimes, and it also means you can work more creatively when you’re editing.
Now, of course we aren’t talking about film here, but those two principles still apply: it’s really hard to work creatively with the H.264-compressed media files DSLRs spit out, because they’re just so heavily compressed. Yes, the files are small, but your computer has to do a lot of math to deal with them, so things get boggy and slow down on you (and that’s if you’re lucky enough not to just outright crash your NLE in the process). But converting them all to a high-resolution online format is really expensive — in time. You might shoot an hour of rushes for a ten-minute short film — and that’s if you’re a really good director who nails everything in the first few takes. If you’re not so lucky, you might end up with ten hours of rushes for a ten-minute short … or even more. And batch-transcoding all that to your finishing format will take way longer than you want to invest in sitting around twiddling your thumbs.
So we go back to the old ways: an offline-online workflow essentially the same as what you’d use on a film project. Batch-transfer everything to a crappy, low-resolution but lightweight offline format as fast as possible, then edit, then do the time- and hard-drive-consuming high-resolution, full-quality transfer later.
Our workflow’s basically gonna look like this:
- Prep all your media in sensible, sane ways
- Batch-transcode to an offline format that’s suitable for editing
- Edit using the offline media until the picture’s locked
- Conform your timeline to the original media files
- Render out ProRes versions of your selects (with handles) for onlining
- Bring the whole shebang into After Effects for VFX, color grading, the adding of titles and whatever else needs to be done at full resolution
- Output a full-resolution master file that you can marry up with the mastered audio files
So that’s the big picture. Now let’s dive into the details, starting with the part that has to happen before step one.

Preflight
If you’re out there making your own movie with a Canon 7D or whatever, then for God’s sake get organized on set. Pretend you’re getting paid and run your set like the studio’s gonna audit you at any minute. Adopt some basic DIT workflow practices, like naming conventions for mags and clips and such, and follow them assiduously. It’ll make your life easier in the long run.
But of course, you’re probably not out there making your own movie. You’re probably sitting there with a stack of hard drives with God-knows-what on them, and maybe, if you’re incredibly lucky, a shooting script. Maybe. (And even if you have one, it’s probably on white pages anyway, meaning it’s never been revised since before shooting began. Good luck finding any correspondence at all between the script and the rushes on those hard drives in that case.)
If that’s how it is — you’re coming in after principal photography has wrapped and you’re kind of thrown to the wolves — then for God’s sake, get things organized before you begin. I know, I know, editing is the fun part, and it’s natural to want to rush into it. But seriously, you’re just going to create headaches for yourself if you don’t impose some kind of order on the chaos in which you find yourself.

Bin and clip naming conventions
There are as many different schools of thought about bin and clip naming as there are editors in the world. If you’ve already got ideas on this topic and they work for you, great. Skip ahead to the next section. But since I’m typing anyway, I’ll go ahead and put my ideas down here.
I use two different systems for bin and clip organization, depending on what the externalities are on any given job. Either I use a camera-oriented system, or I use a script-oriented system.

Camera-oriented bin and clip naming
My camera-oriented system looks like this: Every magazine that comes out of the camera — read “CF card” here, since we’re talking about DSLRs — gets a name that indicates which camera it came from and which magazine it is. Since you’re unlikely to have a thousand different camera mags on any one DSLR project, I stick with this:

XNNN_YYMMDD
Here the X stands for the camera identifier: A for the first (if multi-camera) or only (if single-camera), B for the second, and so on. The NNN is a three-digit number that starts at 001 and goes up. So the first mag for the A camera would be A001, the ninth mag off the C camera would be C009, and so on.
The YYMMDD part should be obvious: two-digit year, two-digit month, two-digit day on which that magazine was recorded. So if the first day of photography was March 13, 2011, the first mag would be:

A001_110313
Now, some people like to go even further with it. If you deal with more than one project that’s shooting at the same time, you might want to stick a couple more digits on the back of that to distinguish between the A camera on this project and the A camera on that project. But I’ve never needed to do that, personally, so I don’t bother.
So say you’re working on a single-camera short film that shot over the course of two weekends — July 16-17 and 23-24 of 2011 — shooting one mag per day (because CF cards hold like a million hours of footage or something). Make the following folders:
A001_110716
A002_110717
A003_110723
A004_110724
(Why do it like this instead of just A001, A002, etc.? Cause you will have multiple A001 folders in your life — one for each project you do — and it’s always better to keep them separate, just to avoid future confusion.)
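Since the scheme is completely mechanical, you can script it rather than typing folder names by hand. A quick Python sketch — the `mag_name` helper is my own invention, not part of any tool mentioned in this article:

```python
from datetime import date

def mag_name(camera: str, mag_number: int, shoot_date: date) -> str:
    """Build a mag folder name like A001_110716: camera letter,
    three-digit mag number, two-digit year/month/day."""
    return f"{camera}{mag_number:03d}_{shoot_date:%y%m%d}"

# The four mags from the two-weekend shoot above:
shoot_days = [date(2011, 7, 16), date(2011, 7, 17),
              date(2011, 7, 23), date(2011, 7, 24)]
folders = [mag_name("A", i + 1, d) for i, d in enumerate(shoot_days)]
# folders == ["A001_110716", "A002_110717", "A003_110723", "A004_110724"]
```

Make the folders with `os.makedirs` or by hand; the point is the names come out consistent every time.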
Now that you’ve got your mag folders set up, it’s time to start populating them with clips. Each clip shot on a DSLR comes out with a name that looks like this:

MVI_1234.MOV
That sucks, because it doesn’t tell you anything at all. It’s just a three-letter prefix that’s always the same, plus a number the camera pulls out of its ear. So that won’t do at all.
What we’re gonna do is rename these files, giving them unique clip IDs. It’s not a complicated scheme; for each mag, we’re just going to start with C001 and count up. But here’s where it gets fun: We’re going to include the mag information in the clip names. Like so:

A001_C001_110716
That’s camera A, mag 001, clip 001, on July 16, 2011. As distinct from:

A003_C001_110723
which is camera A, mag 003, clip 001, on July 23, 2011. Totally different shots, see.
Now, if this sounds like a giant pain in the butt — having to manually rename each and every clip one by one — well, it is. Believe me, it is. But there are some shortcuts that can help. I like using a little utility called A Better Finder Rename, which lets you do nice things like apply regular expressions and stuff. I’ve got a whole workflow cooked up. You can roll your own or whatever you like.
(Incidentally, if something tickles your hindbrain about this little naming scheme, it’s probably the fact that I stole it pretty shamelessly from the emerging standard in the industry for identifying tapeless media. Red and Arri both use essentially this same standard.)
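If you’d rather script the renaming than drive a GUI utility, the logic is only a few lines. Here’s a hedged Python sketch — the function is my own, and it assumes you want clips numbered in filename order, so sort by shoot time first if your camera recycles its counter:

```python
from pathlib import Path

def renumber_clips(mag_folder: str, clip_files: list[str]) -> dict[str, str]:
    """Map camera-original names (MVI_1234.MOV and friends) to mag-based
    clip IDs: mag, clip number, shoot date — e.g. A001_C001_110716.MOV."""
    mag, shoot_date = mag_folder.split("_")
    mapping = {}
    for i, name in enumerate(sorted(clip_files), start=1):
        mapping[name] = f"{mag}_C{i:03d}_{shoot_date}{Path(name).suffix}"
    return mapping

renames = renumber_clips("A001_110716", ["MVI_1201.MOV", "MVI_1202.MOV"])
# {'MVI_1201.MOV': 'A001_C001_110716.MOV', 'MVI_1202.MOV': 'A001_C002_110716.MOV'}
```

From there it’s one more loop over the mapping with `Path.rename`, once you’ve eyeballed the dry run and convinced yourself it’s right.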
But all that presupposes that what you get is a collection of folders, one for each CF card recorded during the shoot, and you have only basic information about each clip. It’s a good organizational scheme … but it’s not the best organizational scheme for editing. The best scheme for editing is one that’s oriented toward the script rather than the camera.

Script-oriented bin and clip naming
Every take in a production can be uniquely identified by a small collection of letters and numbers: the camera (or angle), the scene, the shot (or setup), and the take.
The camera a particular take was filmed on is identified just as above: A, B, whatever. The scenes are numbered in the shooting script — possibly discontinuously, as scenes are often omitted between writing and shooting.
Each scene is broken down into shots — we’re going to shoot these lines in close-up, then these lines in a wide shot, then get a medium of all the lines together for coverage, et cetera. These are often noted with letters: 13A is the first shot of scene 13 (say it’s a wide), 13B is the second shot of that scene (a close-up), and 13CD is a shot that starts out in close up (shot C) but then dollies out to a medium (shot D). That kind of thing.
And then, of course, you do multiple takes of each setup of each scene. Takes are numbered, but you knew that already.
So that means you can identify each individual clip of a project this way:

13A_2_A
That’s scene 13, shot A, take 2 on the A camera. (Why do we put the camera ID last instead of first like before? Bin sorting. It’s better if 13A_2_A and 13A_2_B are right next to each other in your bin, rather than finding A_13A_2 and then having to scroll down for a month to find B_13A_2.)
Whether you use a camera-oriented scheme or a script-oriented scheme depends on what kind of project you’re doing and what you’re given. For instance, documentary projects are rarely thought of in terms of scenes and takes during production; in that case, it makes more sense to just think about cameras and mags and clips, ‘cause that’s information you have available to you. On the other hand, a scripted drama project might lend itself more readily to organizing by scene, shot and take. Which one you use is entirely up to you. The point here is for the love of all that’s good and holy, pick one!

Don’t forget sound
DSLRs are capable of recording sound right there in the camera body itself, in sync with picture. But you shouldn’t ever do that, because you get crappy, crappy sound that way. Instead, you want to record dual-system sound, using some kind of external sound recorder that takes input from good mics.
What you get out of such a recorder is most likely going to be 24-bit, 48 kHz WAV files. These files are named no more usefully than the files DSLR camera bodies spit out:
Maybe that’s a date, I dunno. It looks like it could be: May 22, 2008? Who knows. That’s the name of one file I have here in my project archive, and your guess is as good as mine as to what it signifies. (The fact that that particular audio recorder had not yet been invented in 2008 argues against its being a date, but whatever.)
Point is, you need to get your sound files organized just like you organize your picture files. Whatever naming scheme you use to organize your clips, apply it to your sound files. This is where script-oriented naming works well, cause you can just have these files:
16DE_2_A.MOV
16DE_2_B.MOV
16DE_2.WAV
Scene 16, shot DE, take 2, A-camera, B-camera and sound. That’s all your coverage for that particular take.
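Once picture and sound share a naming scheme, pairing them up becomes trivially scriptable. A sketch, assuming exactly the naming convention above (picture as SCENE_TAKE_CAMERA.MOV, sound as SCENE_TAKE.WAV):

```python
from collections import defaultdict
from pathlib import Path

def group_coverage(files: list[str]) -> dict[str, list[str]]:
    """Bucket picture and sound files by scene/shot and take, using the
    first two underscore-separated fields of each filename."""
    groups = defaultdict(list)
    for name in files:
        scene, take = Path(name).stem.split("_")[:2]
        groups[f"{scene}_{take}"].append(name)
    return dict(groups)

coverage = group_coverage(["16DE_2_A.MOV", "16DE_2_B.MOV", "16DE_2.WAV"])
# coverage["16DE_2"] now holds both camera angles plus the sound for that take
```

Handy for sanity-checking a shoot: any group with no .WAV in it is a take you’ll be syncing the hard way.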
Anyway, again, the point is you gotta pick a system and stick with it.

Transfer for offline (or “what’s a one-light?”)
If we were talking about film here — cause remember, that’s where we find our inspiration — it’d be time to discuss the mystical and magical process of one-light film transfer.
The short version is this: When you transfer film to video or a data file format, you have to set the exposure on the film scanner. Since a roll of film has many takes on it, and in fact may include takes from different scenes shot at different locations at different times of day, the guy who does your transfer really ought to set the exposure differently for each scene, to compensate for underexposure or overexposure or whatever. Trouble is, that’s work, and work costs money. So when you’re getting your film rushes scanned for your offline, you typically ask for a one-light transfer, which means the colorist sets one light — that is, sets the exposure just once — and runs the whole reel. You get back dailies that are mostly-okay, generally, but more important you got ‘em back cheap.
Converting DSLR media files — you know, those H.264 QuickTimes — to a finishing format is a lot less complicated than scanning film. But it still takes time, and we want to minimize the time we put into it, both in terms of time we ourselves spend and in terms of time we have to dedicate our computers to the transcoding process.
My personal workflow for handling this involves two steps: Adding timecode to the camera media, and batch-transcoding the media to DNxHD.

The timecode thing I just said
Timecode is more important than you might realize. It’s a good idea, just in general, for each frame in your project to have unique timecode. I elaborate a bit on why this is true here, but in addition to the basic principles involved, you should be aware that the conforming process I’m going to describe here will literally break if you don’t have timecode. So just drink the timecode Kool-Aid already, okay?
There are a couple different ways to put timecode on DSLR media; you can use a utility called QtChange — google it — which adds timecode to your media files directly, or you can use Magic Bullet Grinder which copies your media files and then adds timecode to them. I go back and forth, but right now, as I write this, I think I prefer Grinder. Both tools are useful, but Grinder does more things, and like Alton Brown, I’m disinclined toward unitaskers by nature.
So here’s what you do: Organize your clips into folders by reels. “What’s a reel?” you ask? It’s a collection of clips that we’re going to put in order by timecode. We’re going to have a set of clips, for example, that all have timecode in the one-hour range, and then another set in the two-hour range, and so on. You can do this by mag if you like, or you can do it by scene; whenever possible, I choose to do it by scene, because of all the reasons I talked about in the section on bin and clip organization.
Anyway, once you get all your clips divided into folders, set Grinder up this way:
Things to note: The timecode start option is set to “Continuous” (that’s the default), which means each clip’s start timecode will be one frame after the previous clip’s end timecode. The timecode setting is dialed in to 07:00:00:00, which as you can see corresponds to the fact that these shots are all from scene 7. Finally, the main format option is set to “Original + Timecode,” which means Grinder is going to copy these files from their original directory into the directory of my choosing, adding timecode to them in the process. They will not be transcoded; they’re going to stay in their camera-native H.264 format. That means this process will go real fast, basically as fast as your hard drives let it.
Do this once for each set of shots you want to stripe with continuous timecode.
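If you’re curious what “Continuous” is actually doing under the hood, it’s just frame arithmetic. A sketch — the helpers are mine, and they only handle non-drop-frame 24 fps:

```python
def frames_to_tc(total_frames: int, fps: int = 24) -> str:
    """Non-drop-frame timecode string from an absolute frame count."""
    f = total_frames % fps
    s = total_frames // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

def continuous_starts(clip_lengths: list[int], start_hour: int, fps: int = 24) -> list[str]:
    """Start timecode for each clip when a reel is striped continuously,
    beginning at start_hour — e.g. hour 7 for scene 7's reel."""
    frame = start_hour * 3600 * fps
    starts = []
    for length in clip_lengths:
        starts.append(frames_to_tc(frame, fps))
        frame += length
    return starts

# Three clips of 100, 240 and 48 frames on scene 7's reel:
# continuous_starts([100, 240, 48], 7) == ['07:00:00:00', '07:00:04:04', '07:00:14:04']
```

Each clip starts one frame past the previous clip’s last frame, which is exactly why the conform can later find any frame from its timecode alone.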
Once Grinder’s done, you’ll end up with a bunch of clips that all have “_main” appended to their file names. I personally do not care for this, though I haven’t figured out yet how to get Grinder not to do it. So I just let it do its thing, then use Automator to strip the “_main” suffix off.

Transcoding
The next step is to convert all those now-striped files to the offline editing format of your choice. I use Avid here, so I obviously want DNxHD 36. (You don’t even have to use HD here; you could use SD instead to make things even quicker, but that introduces complexities in the conform, so I don’t bother. I just stick with the same frame size and frame rate at which I’m finishing.)
Now, you can do this with just Media Composer. AMA your striped H.264 clips into a bin, select-all, then use the “consolidate/transcode” function to transcode to DNxHD 36 on your media drive of choice. Media Composer will convert the sound and picture and write them into your Avid Media Files folder.
But I prefer not to do it that way. See, I run Media Composer on my personal laptop, and transcoding takes time — even when you use all the shortcuts available to you — and I don’t want to tie up my laptop for hours or even days just for that. So I choose to use a program called Episode, from Telestream, to do the job, running it on a little Mac mini I have just for stuff like this. I have a custom encoder set up that converts whatever sources I feed into the job into DNxHD 36-format QuickTime movies. As you probably know, Avid doesn’t work with QuickTime movies; it stores all its media in MXF format. But Avid can do what’s called a “fast import” in situations where the source file is in the right codec. It simply copies the frames out of the QuickTimes and right onto your media drive. It’s fast and easy. (Well, the fast-importing is fast. The transcoding takes as long as it takes on the hardware you have. Since my hardware is just unbelievably modest, it takes a really long time for me, but I don’t care, because it’s a fire-and-forget kind of thing, and also I can’t afford anything better.)
Another key benefit there, in addition to getting your transcoding off your Avid and onto another system, is that you still have your DNxHD-format offline QuickTimes just sitting there in a folder. Because the format’s highly compressed, they don’t take up that much room — not in this era of terabyte-plus FireWires — and if you ever end up without your media drive, due to a drive crash or just inconvenience, you can batch import the files right back in again. Avid even remembers the paths to the original QuickTimes, so you don’t even have to find them. Just do a batch import and hit “okay,” and you’re laughing.
Of course, this extra convenience comes at a price. Episode is either five hundred or a thousand bucks depending on which one you buy. I already had the Pro version — bought it a few years back for a paying job, passing the cost on to my client — so I get to use it for free, essentially. If you don’t like that, you can just use MPEG Streamclip instead, though you’ll want to spend some time diddling around with the settings to make sure it’s writing out correctly formatted DNxHD QuickTimes to fast import.
(Oh, another note: Download and install the Avid codecs package. It’s free, and lets you both play back and encode DNxHD on any Mac.)
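If you have neither Episode nor MPEG Streamclip handy, ffmpeg can also write fast-importable DNxHD QuickTimes. A hedged sketch of the invocation — double-check it against your finishing spec, since ffmpeg’s DNxHD encoder only accepts specific size/rate/bitrate combinations, and DNxHD 36 only exists as 1920x1080, 8-bit 4:2:2:

```python
import subprocess

def dnxhd36_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command line for a DNxHD 36 offline QuickTime:
    1920x1080 at 24 fps, 8-bit 4:2:2, audio passed through as PCM."""
    return ["ffmpeg", "-i", src,
            "-c:v", "dnxhd", "-b:v", "36M",
            "-s", "1920x1080", "-r", "24", "-pix_fmt", "yuv422p",
            "-c:a", "pcm_s16le",
            dst]

cmd = dnxhd36_cmd("A001_C001_110716.MOV", "offline/A001_C001_110716.mov")
# subprocess.run(cmd, check=True)  # uncomment to actually transcode
```

Wrap that in a loop over a folder and you’ve got a poor man’s Episode, minus the watch-folder conveniences.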
However you do it, get all your sources transcoded to offline format, then import them into your bins. If you’re not using script-oriented file names, you’ll want to take the extra step of logging the scene/shot/take information in Media Composer so you can stay organized. You can do that while you’re reviewing the footage.

Edit
Do I really have to explain this? Do your damn job. Create a rough cut by picking takes and laying them out in script order, then refine it until you’re happy. Just go … I dunno. Be an editor.
However, bear this in mind: You are editing right now. You’re not mixing sound. You’re not doing visual effects or compositing. You’re particularly not doing any color grading. By the time you’re done, you should have only one video track on your timeline (unless you’re using a gap effect on V2 for your timecode and metadata burn-in for review and approval, which you should be doing, but that’s another conversation). Keep your eye on the ball here. This is the part of the workflow where you’re an editor and not anything else.

Export a linked AAF for conforming
Once you’re done with the edit and have a locked timeline — for whatever working definition of “locked” applies in your situation — you aren’t done with the job. Your next step is to output an AAF. An AAF is kind of like an EDL; it’s a machine-readable timeline that you can import into other programs to do other things. In our case, we’re going to import it into Resolve to do our conform. More on that later. For right now, we’ll just focus on getting it exported correctly.
The first thing you need to do is commit any group edits you might have on your timeline. I mention this specifically because it will screw you up in the next step if you’re not careful. Subclips are fine, such as those created by AutoSync, but group and multigroup edits must be committed with the “commit multicam edits” function before you export.
Media Composer comes with about a zillion export presets, but they confuse and scare me, so I always create my own. I just call it “AAF (edit protocol),” because it creates an AAF with the edit protocol option enabled. I’m not creative.
This is how I set up the video part of my export preset:
And here’s how I set up the audio part:
The key details there are that I turn all of the rendering options off. I don’t want Avid doing any rendering at all, because I’m going to deal with all that stuff later.
Exporting an AAF with these settings takes like zero time. It’s effectively instant. Which is good, because we don’t like waiting, do we?

Export an AAF for sound
Now is the time to talk about the small matter of sound. I’m going to assume that you, like me, are an editor who sucks at sound. Seriously, I’m just terrible at it. Every time I try to do any kind of audio sweetening or add filters or effects I end up making everything sound like it’s being heard over a telephone line. Underwater. During the Battle of Britain. I’m just awful at it.
That’s why I always send any real sound work out to the experts. And what the experts always want is either an OMF or an AAF, but AAFs are easier to deal with. Here’s how I export an AAF for use in ProTools:
But note: These may not actually be correct or sensible! I just do what’s worked in the past. Talk to your audio guy about what he or she needs. Because seriously, your audio guy is your best friend. We couldn’t survive without those folks. So be nice, be accommodating, and send ‘em Christmas cards or something, jeez.

Conform in Resolve
Okay, so now that we’ve locked picture and output AAFs for picture and for sound (and hopefully said please and thank you when delivering the audio AAF to the sound guy), it’s time to conform our timeline to our high-res media.
But hang on a second. We didn’t actually make high-res media. All we did was stripe our original H.264s with timecode, then convert them to low-res DNxHD files for offline editing. How can we conform to the high res when we didn’t actually make high res?
That’s where Resolve comes in.
See, Resolve is a full-featured system for doing DI — that’s “digital intermediate,” which is now the jargon term of choice for conforming, grading and just generally finishing the picture on a project. We’re going to criminally underutilize it … but that’s okay, because it turns out Resolve is free.
That’s right, Blackmagic Design, the company that makes the product, perhaps unwisely chooses to give away a “Lite” version — no 2K or up, no multiple coprocessor boards, etc. — for literally no money, and you can download and install it on any Mac you have. Can’t beat that with a stick.
But of course, it couldn’t just be that easy. It turns out Resolve is actually quite tricky to use. It’s got a learning curve. I’m not going to try to tell you everything there is to know about Resolve here; I’m just going to tell you what works for me. Refer to the manual for more information, cause I literally don’t know anything other than what I’m about to say.
The first step — after you’ve set up your project and all that; see the manual — is to go to the Browse screen and load your original, striped media files into Resolve’s media pool. This is pretty straightforward; you just point the program at the files — which is not straightforward, but again, see the manual — and tell it to load all the clips into your pool. Poof, done.
Next, you go to the Conform screen and load the AAF you exported earlier. Resolve will bring in the timeline and automatically link it — based on the source file names — to the clips you loaded into your media pool.
Now, there are a couple gotchas here that you should take pains to avoid. First and foremost, Resolve will not be able to link your timeline up to your clips correctly if your clips don’t have timecode on them. That’s why we put timecode on way back in the beginning, with Grinder. It’s essential.
Secondly, Resolve cannot import AAF files that include grouped clips. I mentioned this before. So commit your multicam edits before exporting your AAF.
And third, Resolve really wants the camera media files to have the same file names as your offline media files. If you used the AMA consolidate/transcode method of importing your media to offline, you don’t have to worry about this; Avid is smart enough to keep the source file parameters in sync for you. But if you batch-transcoded, like I prefer to, you need to take extra care that your offline and online media clips all have the same file names before you get working. Otherwise you’ll have a headache when it’s time to conform.
Assuming you avoided all those pitfalls, you should basically be done conforming. Resolve does it all for you automatically, as long as your clips have unique names, as long as they have timecode, and as long as your AAF is sensible to the program.

Render ProRes files
At this point, if you were a colorist, you’d do color … stuff. You know, making things pretty and whatnot. You still can if you want, and more power to you if you do, but that’s not actually why we’re here. We’re here to render these conformed clips out to ProRes files that we can then work with in After Effects.
A little math here: The media files you get off a DSLR are compressed with the H.264 codec and hover somewhere around the 50-megabits-per-second range. ProRes 422 LT is a QuickTime codec that hovers around the 100-megabits-per-second range. In essence, this means we can take the frames out of that 50 Mbps sack and put them in a 100 Mbps sack with no loss of quality; ProRes 422 LT has the headroom to reproduce exactly, pixel for pixel, what your camera gave you.
That would not be true if your camera were spitting out something like R3D media, just to pick one example. R3D media files have more data in them than ProRes 422 LT files can hold, so you can’t convert R3D to ProRes 422 LT without suffering some loss of quality. (Whether that loss of quality matters to you, or is even noticeable to you, is a conversation for another day.)
ProRes 422 LT is also fast, fast as lightning even on a modest system. That means we’ll get to spend more time working and less time waiting when we get into After Effects for our online.
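Back-of-the-envelope storage math for those two bitrates, in case you’re budgeting drive space (rates are the rough figures quoted above, decimal units):

```python
def gb_per_minute(mbps: float) -> float:
    """Gigabytes per minute of footage at a given video bitrate:
    megabits/s * 60 s, divided by 8 bits per byte and 1000 MB per GB."""
    return mbps * 60 / 8 / 1000

camera_h264 = gb_per_minute(50)    # 0.375 GB per minute off the camera
prores_lt = gb_per_minute(100)     # 0.75 GB per minute for the conform renders
```

So doubling the sack roughly doubles the disk — still tiny next to uncompressed, which is the whole appeal.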
So we’re going to render out ProRes 422 LT files of all our selects. Resolve makes this trivially easy. I mean seriously, it’s almost one-button easy. But we’re going to push a few extra buttons just to smooth some things out.
Start by going to the Color screen and hitting ⌘-R, which is the shortcut for “render.”
By default, Resolve wants to render your whole timeline. That’s good; that’s what we want. But you can also tell it to render just one shot, or a timecode range, if you prefer. That’s helpful for situations in which you rendered out a couple hundred shots but one of ‘em had a problem. It’s easy to just click the one you need to rerender and be done with it.
Here’s how I set up my renders:
First thing to note is the rendering mode toggle: It’s set to “Source,” not to “Target.” The target mode renders out the whole timeline as one long … whatever. DPX or EXR sequence, usually, or QuickTime movie in this case. But we don’t want that. We want each individual shot on the timeline to be rendered out as its own thing. Hence we set it to “Source.”
Next, file naming. You probably used different parts of the same take on your timeline. You start on this guy’s close-up, cut away to something else, then cut back to the guy’s close-up again. If that particular take is A001_C003 or whatever, by default Resolve will render out the frames from the first use of that take to a QuickTime called A001_C003.mov … then later come back and render out the frames from the second use of that same take to a QuickTime called A001_C003.mov again, overwriting the first one! That’s lame, but there’s an easy workaround: tick the box next to “Render clip with unique filename.”
Next, handles. Handles are extra frames on the head and tail of each shot. They’re purely optional; this workflow doesn’t require them. But I always include them, because it doesn’t cost much (a few seconds here and there in the rendering, a few megabytes here and there on disk), and I feel safer knowing they’re there if I need them.
Make sure your frame rate and output type are right (Resolve defaults to 24 fps, and 10-bit DPXs), then give it a destination and hit “render.”
On my little Mac mini render node, which is just about the slowest possible computer for this, I get between five and ten frames a second. Which isn’t bad, frankly. It could be a lot worse. In my environment, I know that a ten-minute timeline will take a bit less than an hour to render. That’s a known quantity, and I can plan for it. If you have more computer to throw at it, of course it goes faster.
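That “known quantity” is just a ratio, and it’s worth keeping as a formula so you can plan renders on whatever hardware you have (the 5–10 fps figures are from my setup; plug in your own):

```python
def render_minutes(timeline_minutes: float, timeline_fps: float,
                   render_fps: float) -> float:
    """Wall-clock minutes to render a timeline at a given render speed."""
    return timeline_minutes * timeline_fps / render_fps

worst = render_minutes(10, 24, 5)    # 48.0 — 'a bit less than an hour'
best = render_minutes(10, 24, 10)    # 24.0 — under half an hour
```

Time the first hundred frames of any render and you can forecast the rest to within a few minutes.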
Anyway, once all your shots are rendered out to ProRes, the last step in the conforming process is to export an XML from Resolve that links to the new media files. This is trivially easy: Just go back to the Conform screen, and right next to the “Load” button you used to bring your AAF in in the first place is an “Export” button. Give it a name and path, and choose XML from the list of drop-downs, and you’re done.

Online in After Effects
Now that you have rendered media with handles and an XML, we want to pop over into After Effects. In advance, you should’ve installed the Pro Import AE plugin from Automatic Duck. It’s a (previously $500, now free) plugin that lets After Effects create compositions from AAF or XML timelines. It links to whatever media files the AAF or XML file points to, and gives you a comp that you can start working on to do color or VFX or titles or whatever needs doing. It’s handy as hell.
Before you do anything else, though, you need to set up your color space correctly. If anybody knows how to make After Effects default to these settings, please let me know, cause I’m sick of changing them every damn time. Anyway:
This should be pretty familiar to anybody who uses After Effects: 32-bit color precision, sRGB working space, and linear-light compensation. Just get used to having those settings on all the time.
Now that that’s done, import the XML file you created out of Resolve. It’ll automatically link to your rendered ProRes files.
Once that’s done, you’ll end up with a timeline that looks something like this (though of course more complex and interesting, because this is just a simple example):
One layer per shot, and note that each shot has the handles we told Resolve to include. That way, if an edit needs to be rolled a few frames in the online, you have the flexibility to do that.
Now go be an online artist. Do your color-correction (I like Colorista for this) and VFX (Mocha AE is a damn good planar tracker) and titles and credits (pro tip: lay them out in InDesign [I know!] and then export PDFs; they’re easier to work with by a mile than the After Effects title tool).
Because you’re using ProRes media files, courtesy of Resolve, After Effects is gonna go just about as fast as it can go. It’s not gonna be as real-time-interactive as something like Nuke would be — cause Nuke is practically supernatural — but it’s gonna be quick.

Render out your final
Once you’re done being an online artist, it’s time to pick a format to render out to. What you’re producing now, out of After Effects, are your final, finished frames. (We’ll add the finished audio mixdowns later.) So you basically have two useful choices: a ProRes 4444 QuickTime, which is 12-bit gamma-encoded and lossy-but-not-very, or a half-float Piz-compressed EXR sequence, which is lossless. EXR sequences are better in basically every way: they’re lossless but fairly small, at only about 6 MB per frame for HD, and because they’re sequences, it’s easy to rerender just individual frames if you need to. ProRes 4444 QuickTimes have their own advantages, though: they’re much smaller than EXR sequences, and they can be easily brought into Final Cut Pro to lay on final audio mixdowns (see below).
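To put numbers on "much smaller": here's the storage math for a hypothetical 10-minute HD master. The 6 MB/frame EXR figure is from above; the roughly 330 Mbit/s ProRes 4444 rate is my assumed ballpark for 1080p24, not a spec quote:

```python
# Rough storage comparison for a 10-minute HD finished master.
# 6 MB/frame EXR is the figure from the text; ~330 Mbit/s for
# ProRes 4444 at 1080p24 is an assumed ballpark, not a spec quote.

def exr_gb(minutes, fps=24, mb_per_frame=6):
    return minutes * 60 * fps * mb_per_frame / 1024

def prores_gb(minutes, mbit_per_s=330):
    return minutes * 60 * mbit_per_s / 8 / 1024

print(round(exr_gb(10), 1))     # -> 84.4 GB of EXR frames
print(round(prores_gb(10), 1))  # -> 24.2 GB as a ProRes 4444 QuickTime
```

So the EXR master runs several times the size of the QuickTime. Cheap insurance, in my opinion, but that's the trade you're weighing.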
Me? I’m basically messed up in the head. I render EXR sequences of my final timelines … then I bring those EXRs back into After Effects and output them as ProRes 4444 QuickTimes. Why? Because I like having those sequences as my final work product. I know I can always go back to them if I need to without having to reopen After Effects (or God forbid, go all the way back to Avid) and hope the media linked up and my plugins are licensed and all that. But really, we’re getting into matters of personal taste here, so I’ll just say that’s how I do it and leave it at that.

Lay down the audio
So now you have your ProRes 4444 QuickTime (made from your EXR sequence master, if you’re cool like me), and the audio mixdowns your sound guy has graciously sent you. What to do? Well, the best tool for this job is an old one you may or may not still have lying around: Final Cut Pro.
See, Final Cut Pro has this weird ability that no other similar tool has: It can read the frames out of a QuickTime file and then write them back out to disk without ever decoding them. Meaning you can drop your picture-only ProRes 4444 master file on a Final Cut Pro timeline, sync up your audio mixdowns, trim off your slates or two-pop or whatever else you might need to trim off, and then export the result without re-encoding the actual frames. That means the process is very fast, and more importantly, completely lossless.
If you don’t have Final Cut Pro at your disposal … well, I do, so I’ve never bothered to come up with a workaround. I guess you could put your audio in with After Effects when you convert your EXR sequence to ProRes 4444 the first time. That gets into implementation details that are beyond the scope of this little blahg, so I’m just gonna leave figuring that part out in detail as an exercise for the reader.
Anyway, long story short … you’re done. Your final product is whatever deliverable you needed to produce, with an EXR sequence (or ProRes 4444 QuickTime if you’re a loser) as an intermediate master product you can always go back to if you need to in the future. It’s good stuff.

Variations
Now, the whole thing with Resolve seemed kinda needlessly complex, didn’t it? I mean, After Effects reads H.264s natively, right? Can’t we just bring an AAF into After Effects and do our conform there, linking to the original camera H.264 files?
Well, yes and no. You can do a workflow quite like that, but you don’t do your conform in After Effects. You actually do it in Avid, after you lock picture but before you make your AAF. You do it by AMAing in all your takes in their H.264 form, then relinking your timeline to those AMA files. Then you can export an AAF which After Effects will read directly, linking automatically to the camera QuickTimes.
This is quirky, though. First of all, for whatever reason my combination of After Effects and Pro Import AE reads AAFs slightly wrong: it reads them as having a non-square pixel aspect ratio. That’s easy to fix — you change the PAR in the composition settings for your timeline, then you reinterp one of your camera QuickTimes as square-pixel and copy-and-paste that interpretation onto the other QuickTimes — but it’s a thing you have to do, and I like not doing things more than I like doing things.
But the bigger issue is that After Effects is slow to read H.264s. It’s just sluggish as hell. If you’re trying to be creative — not editing-creative now, but like color and visual effects creative — you don’t want slow. You want fast. So having your media be ProRes is better than not.
If that’s the case, though, why not batch-transcode everything to ProRes right out of the gate? After all, the first tool we used was Grinder, an application designed specifically for that. Can’t we just grind everything to ProRes?
Sure, you can … but that takes time and disk space that we don’t actually have to invest. Doing your conform in Resolve and then rendering out QuickTimes of just your selects (with handles) is a lot faster and more parsimonious than transcoding a zillion frames, hardly any of which you’re going to use in your edit.
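To make "parsimonious" concrete: suppose a 10:1 shooting ratio and a cut with 120 edits (both hypothetical numbers, purely for illustration). Grinding everything means transcoding every frame you shot; conforming selects means roughly the timeline plus handles:

```python
# Grinding all the dailies vs. rendering only the conformed selects.
# The 10:1 shooting ratio, 120 cuts, and 12-frame handles are all
# hypothetical example numbers, not figures from the workflow above.

def frames_to_transcode(timeline_min, shooting_ratio, fps=24,
                        handle_frames=12, cuts=0):
    everything = timeline_min * shooting_ratio * 60 * fps  # all dailies
    selects = timeline_min * 60 * fps + cuts * 2 * handle_frames
    return everything, selects

all_frames, select_frames = frames_to_transcode(10, 10, cuts=120)
print(all_frames, select_frames)  # -> 144000 17280
```

Under those assumptions you'd transcode roughly eight times as many frames by grinding everything up front. That's the time and disk you're saving.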
But the more important benefit of using Resolve this way is the flexibility it gives you. In this little story, I described writing out ProRes 422 LT QuickTimes as my online format. I didn’t have to do that. I could’ve written out DPX or EXR sequences instead. I chose not to specifically because I’m onlining in After Effects here, and After Effects is kind of a little bitch about linking to image sequences. If you wanted to use sequences in After Effects, you’d need to do a second conform there, by hand this time, overcutting your AAF timeline with sequences for each shot. Doable, but tedious in the extreme.
But what if you’re not onlining in After Effects? What if you’re onlining in Smoke? No problem. Just have Resolve spit out 10-bit DPXs instead. Those go straight into Smoke, and you’re laughing. Or what if you are onlining in After Effects, but you need to send like three shots over to a VFX guy who’s going to do some work for you to incorporate later? Easy. Just render everything to ProRes, like we talked about here, but then render just those shots out of Resolve to DPX or EXR sequences and send them to your VFX guy. He works on your frames and gives you back similar sequences as his finished product, which you overcut into your online timeline just like normal.
Now, I’m not saying this workflow’s for everybody. It’s highly idiosyncratic, and tuned to my particular set of needs using my particular set of tools. But it works well for me, and I thought maybe somebody might get something out of reading about how I do stuff. If nothing else, people smarter than I can get frustrated and send me vitriolic emails about how stupid I am, and then I’ll have learned something.