Thanks to an upcoming piece of software we’re working on, I’ve been spending a lot of time within the CS5 workflow environment, particularly looking at metadata and the Story workflow, and I’ve come to the conclusion that we’ve been so blinded by the Mercury Engine’s performance that we might not have seen where Adobe is heading. And I like what I see.
Most readers will likely be aware of the Transcription ability introduced with CS4 and updated in CS5. Either in Soundbooth or, for batches, in Adobe Media Encoder (AME) via Premiere Pro, the Autonomy technology that Adobe builds on will transcribe the spoken word into text. Our initial testing wasn’t that promising, but we’ve realized we weren’t sending it any sort of fair test. With good quality audio the results are pretty good: not perfect, but close, depending on the source, of course.
We first explored this early in the year when we built and released Transcriptize, to port that transcription metadata from the Adobe world across to Apple’s. That’s what set us down our current path to the unreleased software, but more on that in a later post.
Now that we’re back in that world, it’s a pretty amazing “story”. There are three ways Adobe gets it, as I see it:
- Good metadata tracking at the file level
- Flexible metadata handling
- Metadata-based workflows built into the CS applications (and beyond)
Balancing that is a serious miss: source metadata from non-tape media that doesn’t fit into a pre-defined schema isn’t shown at all. At least that seems to be the case: I can’t find a Metadata Panel that will display the Source Metadata from P2, AVCCAM/AVCHD, or RED. Some of the source metadata is displayed in existing fields, but only the fields that Adobe has built into Premiere Pro, which miss a lot of information from the source. For example, none of the exposure metadata from RED footage is displayed, nor the Latitude and Longitude from P2 and AVCCAM footage.
That’s the downside. To be fair, Final Cut Pro doesn’t display any of the Source Metadata either (although you can access it via the XML file). Media Composer can show all the Source Metadata if desired.
Good Metadata Tracking at the File Level
Apple added QuickTime metadata to Final Cut Pro 5.1.2, where it retains and tracks any Source Metadata from files imported via Log and Transfer. This is a flexible schema but definitely under-supported. Adobe’s alternative is XMP metadata. (Both XMP and QuickTime metadata can co-exist in most media file formats.)
XMP metadata is file-based, meaning it is stored in, and read from, the file. There are seven built-in categories, plus Speech Analysis, which is XMP metadata stored in the file (for most formats) but treated as a different category in the Premiere Pro CS5 interface. I believe that the Source metadata should show in the XMP category because it is file-based, even if it’s not XMP.
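To make “stored in, and read from, the file” concrete, here’s a minimal sketch of pulling the embedded XMP packet out of a media file. XMP packets are delimited by xpacket markers, so any application can locate and parse them without a sidecar database. The function name is mine, and a real tool would use the Adobe XMP SDK (or a wrapper like python-xmp-toolkit) rather than this string scan; some formats also keep XMP in sidecar files instead.

```python
import xml.etree.ElementTree as ET

def read_xmp_packet(path):
    """Return the first embedded XMP packet in `path` as an XML tree, or None."""
    with open(path, "rb") as f:
        data = f.read()
    start = data.find(b"<?xpacket begin=")  # opening xpacket marker
    end = data.find(b"<?xpacket end=")      # closing xpacket marker
    if start == -1 or end == -1:
        return None  # no embedded packet in this file
    rdf_start = data.find(b"<x:xmpmeta", start, end)
    if rdf_start == -1:
        return None
    # The payload between the markers is plain XML (RDF).
    return ET.fromstring(data[rdf_start:end].decode("utf-8", errors="replace"))

tree = read_xmp_packet("clip.mov")
if tree is not None:
    for elem in tree.iter():
        print(elem.tag, (elem.text or "").strip())
```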
On the plus side, XMP metadata is very flexible. You don’t need third-party applications to write XMP metadata. Inside Premiere Pro CS5 you simply set up the schema you want and the data is written to the file transparently. If the data is in a file when it’s added to a project, it’s read into the project and is immediately accessible.
This metadata travels with the file to any and all projects, which provides a great way of sending custom metadata between applications. Speech Analysis metadata is also carried in the file, so it can be read by any Adobe application (and an upcoming one from us, see the intro paragraph) direct from the file.
Flexible Metadata Handling
Not only is the XMP file-based metadata incredibly flexible, but you can also apply any metadata scheme to a clip within a Premiere Pro project, right in the Clip metadata. For an example of how this is useful, consider what we had to do in Final Cut Pro for First Cuts. Since Final Cut Pro doesn’t have a flexible metadata format, we had to co-opt Master Comments 1–4 and Comment A to carry our metadata. In Premiere Pro CS5 we could simply create new Clip-based fields for Story Keywords, Name, Place, Event or Theme, and B-roll search keywords.
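As an illustration of what those Clip-based fields might look like once serialized, here’s a sketch that renders First Cuts-style fields as an XMP-style RDF description. The namespace URI, prefix, and property names are all hypothetical; Premiere Pro CS5 defines the actual schema when you create the fields in its interface.

```python
CUSTOM_NS = "http://ns.example.com/firstcuts/1.0/"  # hypothetical namespace

clip_fields = {
    "StoryKeywords": "reunion, homecoming",
    "Name": "Alice Smith",
    "Place": "Sydney Opera House",
    "EventOrTheme": "opening night",
    "BrollKeywords": "harbour, crowd, fireworks",
}

# Render the fields as an RDF description, the shape XMP custom schemas take.
props = "\n".join(f"  <fc:{k}>{v}</fc:{k}>" for k, v in clip_fields.items())
print(
    f'<rdf:Description xmlns:fc="{CUSTOM_NS}"\n'
    '    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">\n'
    f"{props}\n</rdf:Description>"
)
```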
(Unfortunately this level of customization in Premiere Pro CS5 does not extend to Final Cut Pro XML import or export.)
An infinitely flexible metadata scheme for clips and for media files (independently) is exactly what I’d want an application to provide.
Metadata-based Workflows in the CS5 Applications
To my chagrin, I only recently discovered how deeply metadata-based workflows have become embedded in the Adobe toolset. (Thanks to Jacob Rosenberg’s demonstration at the June Editor’s Lounge for turning me on to this.) Adobe have crafted a great workflow for scripted productions that goes like this:
- Collaboratively write your script in Adobe Story, or import a script from most formats, including Final Draft. (Story is a web application.)
- Adobe Story parses the script into Scenes and Shots automatically.
- Export from Adobe Story to a desktop file that is imported into OnLocation before shooting.
- In OnLocation you have access to all the clips generated from the Adobe Story file. Clips can be duplicated for multiple takes.
- Clips are named after Scene/Shot/Take.
- During shooting you do not need a connection to the camera, because some genius at Adobe realized that metadata could solve that problem. All that needs to be done during the shooting of any given shot/take is for a time stamp to be marked against the Clip:
- i.e. this clip was being taken “now”.
- Marking a time stamp is a simple button press with the clip selected.
- After footage has been shot, the OnLocation project is “pointed” at the media, where it automatically matches each shot with the appropriate media file by comparing the time stamp metadata in the media file with the time mark in the OnLocation Clip (see the sketch after this list).
- The media file is renamed to match the clip, ready for import into Premiere Pro CS5.
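Here’s the time-stamp matching logic as I understand it, sketched in Python: a clip is linked to whichever media file’s recording span contains the clip’s time mark. The function names and the exact matching rule are my assumptions, not Adobe’s code.

```python
from datetime import datetime, timedelta

def match_clips_to_media(clips, media_files):
    """clips: {clip_name: time_mark}; media_files: {path: (start, duration_s)}."""
    matches = {}
    for clip_name, mark in clips.items():
        for path, (start, duration) in media_files.items():
            # Link the clip to the file that was recording when the mark was made.
            if start <= mark <= start + timedelta(seconds=duration):
                matches[clip_name] = path
                break
    return matches

clips = {"Sc12_ShotA_Take3": datetime(2010, 6, 14, 10, 42, 7)}
media = {"0042GH.MXF": (datetime(2010, 6, 14, 10, 41, 30), 95)}
print(match_clips_to_media(clips, media))  # {'Sc12_ShotA_Take3': '0042GH.MXF'}
```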
Now here’s the genius part, in my opinion (other than using the time stamp to link clips). The script from Adobe Story has been embedded in those OnLocation clips, so it travels with each clip. Once Speech Analysis is complete for each clip, the script is laid up against the analyzed media file so each word is time stamped. The advantage of this workflow over directly importing a guide script is that the original formatting is retained when the script comes via Story.
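A rough sketch of what that “laying up” could look like: walk the script words against the Speech Analysis (word, time) pairs and stamp each script word with the time of its next match. Adobe’s actual alignment is surely more sophisticated; this greedy walk just illustrates the idea.

```python
def align_script(script_words, transcript):
    """transcript: list of (word, seconds) pairs from speech analysis."""
    aligned, i = [], 0
    for word in script_words:
        # Advance through the transcript until this script word appears (if at all).
        while i < len(transcript) and transcript[i][0].lower() != word.lower():
            i += 1
        aligned.append((word, transcript[i][1] if i < len(transcript) else None))
        if i < len(transcript):
            i += 1
    return aligned

script = "the quick brown fox".split()
speech = [("the", 1.2), ("uh", 1.6), ("quick", 1.9), ("brown", 2.3), ("fox", 2.7)]
print(align_script(script, speech))
# [('the', 1.2), ('quick', 1.9), ('brown', 2.3), ('fox', 2.7)]
```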
All that remains is to build the sequence based on the script, with the advantage that every clip is now searchable by word. Close to, but not quite, Avid’s ScriptSync, which achieves a similar result with entirely different technology (Nexidia).
It’s a great use of script and Speech Analysis, and a great use of time-stamp metadata to take the manual work out of clip naming, linking and script embedding. A hint of the future of metadata-based workflows.
All we need now, Adobe, is access to all the Source Metadata.