The present and future of post production business and technology

How Adobe ‘gets’ metadata workflows

Thanks to an upcoming piece of software we’re working on, I’ve been spending a lot of time in the CS5 workflow environment, particularly looking at metadata and the Story workflow, and I’ve come to the conclusion that we’ve been so blinded by the Mercury Playback Engine’s performance that we might not have seen where Adobe is heading. And I like what I see.

Most readers will likely be aware of the Transcription ability introduced with CS4 and updated in CS5. Either in Soundbooth or, for batches, in Adobe Media Encoder (AME) via Premiere Pro, the Autonomy-based technology Adobe uses will transcribe the spoken word into text. Our initial testing wasn’t that promising, but we’ve realized we weren’t giving it any sort of fair test. With good quality audio the results are pretty good: not perfect, but close, depending on the source, of course.

We first explored this early in the year when we built and released Transcriptize, to port that transcription metadata from the Adobe world across to Apple’s. That’s what set us down our current path to the unreleased software, but more of that in a later post.

Now that we’re back in that world, it’s a pretty amazing “story”. There are three ways I see that Adobe gets it:

  1. Good metadata tracking at the file level
  2. Flexible metadata handling
  3. Metadata-based workflows built into the CS applications (and beyond).

Balancing that is one serious miss: source metadata from non-tape media that doesn’t fit a pre-defined schema isn’t shown. At least that seems to be the case: I can’t find a Metadata Panel that displays the Source Metadata from P2, AVCCAM/AVCHD, or RED. Some of the source metadata appears in existing fields, but only the fields Adobe has built into Premiere Pro, which miss a lot of information from the source. For example, none of the exposure metadata from RED footage is displayed, nor Latitude and Longitude from P2 and AVCCAM footage.

That’s the downside. To be fair, Final Cut Pro doesn’t display any of the Source Metadata either (although you can access it via the XML file). Media Composer can show all the Source Metadata if desired.

Good Metadata Tracking at the file level

Apple added QuickTime Metadata to Final Cut Pro 5.1.2, where any Source Metadata from files imported via Log and Transfer is retained and tracked. It’s a flexible schema but definitely under-supported. Adobe’s alternative is XMP metadata. (Both XMP and QuickTime metadata can co-exist in most media file formats.)

XMP metadata is file based, meaning it is stored in, and read from, the file. There are seven built-in categories, plus Speech Analysis, which is XMP metadata stored in the file (for most formats) but treated as a separate category in the Premiere Pro CS5 interface. I believe the Source metadata should show in the XMP category because it is file-based, even if it’s not XMP.
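For the technically curious, here’s a rough sketch of what “file-based” means in practice: the XMP packet is plain RDF/XML sitting inside the media file between standard xpacket markers, so a simple scan can pull it out. This is my own illustration, not Adobe code; the file name is just an example, and a real tool would scan in chunks rather than reading the whole file.

```python
# Rough sketch: pull the embedded XMP packet out of a media file by scanning
# for the standard <?xpacket ... ?> markers. Reads the whole file for brevity;
# a real tool would scan in chunks. Sidecar .xmp files can simply be read directly.
import re

XMP_PACKET = re.compile(rb"<\?xpacket begin=.*?<\?xpacket end=.*?\?>", re.DOTALL)

def read_xmp_packet(path):
    """Return the first embedded XMP packet as text, or None if none is found."""
    with open(path, "rb") as f:
        data = f.read()
    match = XMP_PACKET.search(data)
    return match.group(0).decode("utf-8", errors="replace") if match else None

packet = read_xmp_packet("clip.mov")  # example path only
if packet:
    print(packet[:500])  # the RDF/XML that Premiere Pro reads and writes
```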

On the plus side, XMP metadata is very flexible. You don’t need third-party applications to write it: inside Premiere Pro CS5 you simply set up the schema you want and the data is written to the file transparently. If the data is in a file when it’s added to a project, it’s read into the project and is immediately accessible.

This metadata travels with the file to any and all projects, which provides a great way of sending custom metadata between applications. Speech Analysis metadata is also carried in the file, so it can be read by any Adobe application (and an upcoming one from us; see the intro paragraph) directly from the file.
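As an aside, the transcript itself is just more RDF/XML in the packet, which is what makes it so portable. Below is a hedged sketch of how I’d fish the words and their start times out of a packet returned by the earlier sketch; the xmpDM property names (markers, startTime, name) are from memory of the Dynamic Media schema, so check them against a real packet before relying on this.

```python
# Hedged sketch: walk the RDF/XML of an XMP packet and collect Speech Analysis
# words with their start times. The xmpDM property names are as I recall them;
# verify against a real packet before relying on them.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
XMPDM = "http://ns.adobe.com/xmp/1.0/DynamicMedia/"

def speech_markers(xmp_packet):
    """Return (start_time, word) pairs for every xmpDM marker found in the packet."""
    root = ET.fromstring(xmp_packet)
    words = []
    for li in root.iter(f"{{{RDF}}}li"):
        # Marker fields may be written as attributes or as child elements.
        start = li.get(f"{{{XMPDM}}}startTime") or li.findtext(f"{{{XMPDM}}}startTime")
        word = li.get(f"{{{XMPDM}}}name") or li.findtext(f"{{{XMPDM}}}name")
        if start is not None and word:
            words.append((start, word))
    return words
```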

Flexible Metadata Handling

Not only is the XMP file-based metadata incredibly flexible, but you can also apply any metadata scheme to a clip within a Premiere Pro project, right into Clip metadata. For an example of how this is useful, let’s consider what we had to do in Final Cut Pro for First Cuts. Since Final Cut Pro doesn’t have a flexible metadata format, we had to co-opt Master Comments 1-4 and Comment A to carry our metadata. In Premiere Pro CS5 we could simply create new Clip-based fields for Story Keywords, Name, Place, Event or Theme and B-roll search keywords.
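For illustration only, here’s the same set of First Cuts data expressed both ways; the field names and values are examples, not a published schema.

```python
# Illustration only: the same First Cuts data expressed two ways. In FCP we had
# to overload generic comment fields; with a custom clip schema the fields can
# simply be named for what they hold.
fcp_clip = {
    "Master Comment 1": "Keywords: surfing, dawn patrol",
    "Master Comment 2": "Name: Kelly",
    "Master Comment 3": "Place: Malibu",
    "Master Comment 4": "Event/Theme: first session",
    "Comment A":        "B-roll: waves, beach, boards",
}

premiere_clip = {
    "Story Keywords":         "surfing, dawn patrol",
    "Name":                   "Kelly",
    "Place":                  "Malibu",
    "Event or Theme":         "first session",
    "B-roll Search Keywords": "waves, beach, boards",
}
```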

(Unfortunately this level of customization in Premiere Pro CS5 does not extend to Final Cut Pro XML import or export.)

An infinitely flexible metadata scheme for clips and for media files (independently) is exactly what I’d want an application to do.

Metadata-based Workflows in the CS5 Applications

To my chagrin I only recently discovered how deeply metadata-based workflows are embedded across Adobe’s tools. (Thanks to Jacob Rosenberg’s demonstration at the June Editor’s Lounge for turning me on to this.) Adobe have crafted a great workflow for scripted productions that goes like this:

  1. Collaboratively write your script in Adobe Story, or import a script from most formats, including Final Draft. (Story is a web application.)
    • Adobe Story parses the script into Scenes and Shots automatically.
  2. Export from Adobe Story to a desktop file that is imported into OnLocation during shooting.
    • In OnLocation you have access to all the clips generated out of the Adobe Story file. Clips can be duplicated for multiple takes.
    • Clips are named after Scene/Shot/Take.
  3. During shooting you do not need to have a connection to the camera because some genius at Adobe realized that metadata could solve that problem. All that needs to be done during shooting of any given shot/take is for a time stamp to be marked against the Clip:
    • i.e. this clip was being shot “now”.
    • Marking a time stamp is a simple button press with the clip selected.
  4. After footage has been shot, the OnLocation project is “pointed” at the media, where it automatically matches each shot with the appropriate media file by comparing the time stamp metadata in the media file with the time mark in the OnLocation Clip (see the sketch after this list).
    • The media file is renamed to match the clip. Ready for import to Premiere Pro CS5.
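Here’s my own rough sketch of that matching step, purely to show the concept; the clip structure, file naming and use of filesystem modification time are assumptions for illustration, not how OnLocation actually does it.

```python
# Rough sketch of the step-4 matching idea, not Adobe's implementation: pair
# each time-stamped OnLocation clip with the camera file recorded closest to
# that moment, then rename the file after the clip.
from dataclasses import dataclass
from datetime import datetime
from pathlib import Path

@dataclass
class MarkedClip:
    name: str            # e.g. "Scene 4_Shot 2_Take 3" (illustrative naming)
    marked_at: datetime  # the button-press time stamp recorded on set

def match_and_rename(clips, media_dir):
    # Capture each camera file's recorded time once, up front.
    files = [(p.stat().st_mtime, p) for p in Path(media_dir).glob("*.mov")]
    for clip in clips:
        if not files:
            break
        mtime, path = min(files, key=lambda f: abs(f[0] - clip.marked_at.timestamp()))
        files.remove((mtime, path))  # each camera file matches one take
        path.rename(path.with_name(clip.name + path.suffix))
```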

Now here’s the genius part, in my opinion (other than using the time stamp to link clips). The script from Adobe Story has been embedded in those OnLocation clips, so it travels with each clip. Once Speech Analysis is complete for a clip, the script is laid up against the analyzed media file so that each word is time stamped. The advantage of this workflow over importing a guide script directly is that the original formatting is preserved when the script comes via Story.
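Purely as a thought experiment, here’s how I’d sketch that alignment: match the script’s words against the Speech Analysis words (which already carry time stamps, as in the earlier sketch) and let each script word inherit a time. This is my approximation of the concept, not Adobe’s algorithm.

```python
# Hedged sketch: align script words against Speech Analysis words so each
# script word inherits a start time. "analysed" is a list of (start_time, word)
# pairs, as produced by the speech_markers() sketch above.
from difflib import SequenceMatcher

def align_script(script_words, analysed):
    """Map script word index -> start time, for words the recognizer also heard."""
    spoken = [w.lower() for _, w in analysed]
    wanted = [w.lower().strip(".,!?") for w in script_words]
    timed = {}
    matcher = SequenceMatcher(None, wanted, spoken, autojunk=False)
    for block in matcher.get_matching_blocks():
        for offset in range(block.size):
            start_time, _ = analysed[block.b + offset]
            timed[block.a + offset] = start_time
    return timed
```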

All that needs to be done then is to build the sequence based on the script, with the advantage that every clip is now searchable by word. Close to, but not quite, Avid’s ScriptSync, which is based on an entirely different technology (Nexidia).

It’s a great use of the script and Speech Analysis, and a great use of time-stamp metadata to take the work out of clip naming, linking and script embedding. A hint of the future of metadata-based workflows.

All we need now, Adobe, is access to all the Source Metadata.



Comments

11 responses to “How Adobe ‘gets’ metadata workflows”

  1. You said, “Clips are named after Scene/Shot/Take.” What about multi-camera shoots? Can you do Scene/Shot/Take/Camera?

    Does “Shot” represent an individual camera setup? We always added a suffix to “Scene” for each setup. Like Scene 1A, 1B, etc. whenever we changed the angle or the lens length.

    Does CS5 have tape capture? Do you think they should drop it for CS6?:)

    Peace,

    Rob:-]

    1. Additional thoughts – CS5 does have tape capture and they should probably drop it in the next version, but the dynamic isn’t the same: Adobe has already done its rewrite (well, they share common code with the PC version), whereas FCP has to be consciously rewritten for that 2012 release. Slightly different circumstances.

      And from what I recall of the OnLocation workflow, you can modify the default names there if you want. Since the breakdown is derived from the script, the Scene and Shot naming follows the script, but it’s editable.

      Philip

  2. Here’s a question you might address in another post. I think you’re a good person to ask as you have a lot of experience with both.

    What are the pros and cons of upgrading to the next Final Cut Studio versus switching to the Adobe Premiere Pro bundle, CS5?

    Here are some of the factors I’m aware of.
    1. Cost is one … Adobe is more expensive.
    2. I could switch to cheaper hardware. PCs cost less and Windows 7 64 bit seems pretty stable. (Which hardware platform/OS would you recommend?)
    3. CS5, with 64 bit, is here now.
    4. If I stick with FCP I still need Photoshop and After Effects, right? (If I buy those two plus one more Adobe app like Illustrator, then I might as well get the bundle.)
    5. Will the next FCP leapfrog CS5?

    Peace,

    Rob:-]

    1. First off I like to think of it as “and” not “or”. Given what we used to pay for just one NLE, we can have all three major ones and still have huge amounts of change.
      1) In the grand scheme of things whether you pay an extra $1000 or so really isn’t much in a professional budget.
      2) You *could* get cheaper hardware but you shouldn’t. You should buy quality PC hardware from a quality brand for NLE and that’s going to cost you about the same as a Mac.
      3) CS5 is here now and the performance is good, even without the supported NVIDIA graphics card. How much benefit that is….
      4) Seriously, if you want two Adobe apps, buy the bundle with them in it. Three, definitely. PPro is a bonus. (My “deal breaker” with PPro is the lack of support for having multiple projects open at a time.)
      5) Almost certainly the next version of FCP will leapfrog everything else. I think/believe that Apple are re-architecting FCP and (I hope) making a “next generation” tool. By analogy: if you’d asked people what they wanted for transportation improvements around the end of the 1800s, they’d have said “faster horses” when the real innovation was the automobile.

      Philip

  3. Philip Hodgetts

    That’s a good question Rob. I don’t know how the workflow works with multicam. You’ll have to ask someone at Adobe and get back to me when you find out.

    Philip

  4. Hey Philip,

    Thanks for the quick response. I should have mentioned that I’m not a professional. Oh, I’ve made a bit of money shooting and editing, but that’s not my goal, really.

    I’m fixing to retire in a few years and I’d like to be able to produce my own stuff; short docs, short narratives and maybe a web series. So I’m all self-funded and saving for a Scarlet or two.

    Peace,

    Rob:-]

    1. Jannard did say “by end of this year” for Scarlet. And self-funding is a different issue. PPro, FCP or Media Composer can all do a great job – it’s really about which you find works better. There are 30-day trials of both Media Composer and Premiere Pro CS5.

      Philip

  5. “You’re better off with books than websites. Google is not the answer to everything. Syd Field writes good screenwriting books, in which you’ll learn about the three act structure and story arcs. There are plenty of screenplays available to buy as well. Get a screenplay of a GOOD movie and buy a copy of the same film on DVD, then watch it over and over again. If you can begin to work out how the magic translates from page to screen, then you’ve got a good start. The rest is up to you….”

    1. An interesting comment Carol, but what the heck does it have to do with the post? I prefer relevant comments rather than those that push your own position. I approved this one but in the future, keep comments relevant to the subject.

      Philip

  6. Alex Udell

    I guess a distinction I need clarified is how we’re dealing with “clips” versus “files” and metadata.

    Is there a distinction at all, or is all markup from any application in the pipeline applied to the same file’s XMP?

    If not, should there be?

    Alex

    1. Clips are abstract representations of media files. There can be many clips pointing to the same media file, and each clip can have its own *clip* metadata. XMP metadata is file-based (like QuickTime metadata), so a file carries only one set of XMP metadata. But I’m sorry, I simply don’t understand the next question.
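      If it helps, here’s a toy model of the distinction, nothing more: one media file carries one XMP packet, while any number of clips can point at that file, each with its own clip-level fields. The names are invented for illustration.

```python
# Toy model only: one XMP packet per media file; many clips per file, each
# with its own clip-level metadata.
from dataclasses import dataclass, field

@dataclass
class MediaFile:
    path: str
    xmp: dict = field(default_factory=dict)            # one XMP packet per file

@dataclass
class Clip:
    source: MediaFile                                   # many clips -> one file
    clip_metadata: dict = field(default_factory=dict)   # per-clip fields

master = MediaFile("interview.mov", xmp={"dc:title": "Interview"})
take_a = Clip(master, {"Story Keywords": "origin story"})
take_b = Clip(master, {"Story Keywords": "b-roll candidates"})
```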

      Philip