That doesn’t mean the demand for metadata will go down. If anything there is a growing demand for comprehensive metadata from the camera through to any possible future distribution and reuse of all or part of any project. Everything you’ve shot is only an asset if you can find it and use it again (including knowing what rights are associated with it). So, once again I’ve been thinking about the future of metadata for production automation, largely because it’s the only way I think we can hope to produce the Solar Odyssey TV Show, and because I was presenting on the subject in Boston in the middle of February. Both are forcing me to think on the subject.
One of the under-promoted features of Adobe’s workflow is the Adobe Story flow of script metadata. These days Adobe Story .astx files can be directly imported, and associated with their shot footage, without going near OnLocation. But it was a demo of OnLocation at Editor’s Lounge by Jacob Rosenberg that set off a bright light in my brain.
In that demo, the workflow was to take the Story .astx file and open it in OnLocation, which broke it down into scenes and shots based on the metadata in the Story file. What particularly interested me is that in the demo the logging took place without any connection between OnLocation and the media files, except by use of a simple time-of-day timestamp. The metadata was matched to the media later by searching for media files that encompassed that timestamp. Simple and ingenious, and a total fail if there were two takes but the camera kept rolling between! But in principle the idea is genius.
This is also not a currently promoted workflow from Adobe, and with the sneak preview of Prelude at the San Francisco Supermeet, I have great hope that something better is coming in their future releases.
But that core concept stuck with me: linking metadata to media by simple time-of-day, now that all our (non-tape) source devices have reasonably accurate clocks. For metadata matching I don’t need microsecond time matches, like I would for synchronizing clips: give or take a half second on metadata tags isn’t an issue.
So how does that help us get closer to the goal I stated in the excerpt: getting the metadata earlier and making it easier?
Because it can be done without any direct connection to the media, metadata can be gathered “on set,” during shooting (depending on style) or even prepared ahead of time (like a script, with shots and takes broken down, as I outlined above). Instead of notes, enter the metadata electronically and later associate it with the media using that ubiquitous time-of-day timestamp.
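To make the matching concrete, here is a minimal sketch of the idea in Python. Everything in it is illustrative, not any shipping product’s schema: I’m assuming each media file’s record carries a time-of-day start and a duration, and each metadata tag carries only the wall-clock time at which it was entered.

```python
from datetime import datetime, timedelta

# Hypothetical records: each media file knows its time-of-day start and
# duration; each metadata tag carries only the time at which it was logged.
media_files = [
    {"name": "A001_C003.mov",
     "start": datetime(2012, 2, 20, 10, 14, 32),
     "duration": timedelta(minutes=6)},
    {"name": "A001_C004.mov",
     "start": datetime(2012, 2, 20, 10, 25, 5),
     "duration": timedelta(minutes=9)},
]

def clips_at(timestamp, files=media_files):
    """Return every media file whose recorded span encompasses the timestamp."""
    return [f["name"] for f in files
            if f["start"] <= timestamp <= f["start"] + f["duration"]]

# A tag logged at 10:27:00 lands inside the second clip.
print(clips_at(datetime(2012, 2, 20, 10, 27, 0)))
```

Note that, exactly as in the OnLocation demo, this falls over when one clip spans two takes: both takes’ tags resolve to the same file, and something (or someone) has to split them apart afterward.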
For example on the Solar Odyssey voyage, we’ll be doing off-vessel interviews. My plan is to shoot three cameras – two essentially locked off and one with our primary camera person. A putative (as in, not yet built but in the design phase) iPad app (or web app) could have a simple text interface. Enter the subject’s name, location (matched to the GPS or derived from the GPS) and then for each interview answer a simple “start answer” button, prompting selection of previously entered metadata terms, or quick entry of a new term, and an “end answer” button. All stored in a database.
Take the records from the database, and in some (again putative) application, merge the shoot log with the media files and send it to the NLE (in our case it will be Final Cut Pro X) with all three cameras trimmed to the answers, with metadata entered ready to be conformed to multiclips. (If the time-of-day timestamps were accurate enough, we could make multiclips but I suspect we’ll be better off using FCP X’s own make multiclip feature or with PluralEyes.)
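The merge step in that putative application is mostly arithmetic: each answer’s wall-clock start and end, minus each camera clip’s wall-clock start, gives the trim in/out points for that camera. A sketch, with invented names and times throughout:

```python
from datetime import datetime

# Hypothetical database rows produced by the "start answer"/"end answer" buttons.
answers = [
    {"subject": "Jane Doe", "topic": "solar power",
     "start": datetime(2012, 6, 1, 14, 3, 10),
     "end":   datetime(2012, 6, 1, 14, 5, 42)},
]

# Each camera clip's start, in time-of-day terms (clocks roughly synced).
cameras = {
    "Cam A": datetime(2012, 6, 1, 14, 0, 0),
    "Cam B": datetime(2012, 6, 1, 14, 0, 2),
    "Cam C": datetime(2012, 6, 1, 13, 59, 58),
}

def trims_for(answer):
    """For each camera, convert the answer's wall-clock span into in/out
    offsets in seconds from the start of that camera's clip."""
    return {cam: ((answer["start"] - clip_start).total_seconds(),
                  (answer["end"] - clip_start).total_seconds())
            for cam, clip_start in cameras.items()}

print(trims_for(answers[0]))
```

From there, writing those offsets (plus the subject and topic tags) into whatever interchange format the NLE accepts is a formatting exercise; the half-second slop in time-of-day clocks is why frame-accurate sync is still left to FCP X or PluralEyes.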
Relatively idle time during the interview is transformed into useful metadata, ready for the editor(s) to start work cutting the piece.
After the Boston presentation where I outlined some of these ideas, Richard Akerboom pointed out that something similar was being done by Sportstec with their Coda product. While its primary function uses a standard timecode, Richard researched for me and came back with:
Right now the xml includes a relative time (based on start of tagging session) but also includes a tag for start of session in GMT, so the raw material is there to tag all events in GMT.
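In other words, the conversion to absolute time is trivial: session start in GMT plus each event’s relative offset. A quick sketch (field names are illustrative, not Coda’s actual XML schema):

```python
from datetime import datetime, timedelta, timezone

# One session-start tag in GMT, plus events timed relative to it --
# the shape Richard describes, with made-up field names.
session_start_gmt = datetime(2012, 2, 20, 15, 0, 0, tzinfo=timezone.utc)
events = [{"label": "goal", "relative_secs": 754.5}]

# Resolve every relative event time to an absolute GMT timestamp.
for ev in events:
    ev["gmt"] = session_start_gmt + timedelta(seconds=ev["relative_secs"])

print(events[0]["gmt"].isoformat())
```

Once every event is in GMT, it can be matched against media files by the same time-of-day search described above.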
The other idea for making metadata entry easy is one I’ve seen in a number of ways, and that is to use a custom visual interface. For us on Solar Odyssey, when we’re on board, the app could change to be more visual: the layout of the boat in plan view, icons for each of the crew and for “visitor”, and contextually sensitive pop-up menus for actions.
For example, touch the “Philip” button; then “Jim”, and click on Jim’s cabin seat. The pop-up menu might then give choices of:
Navigation, food, schedule, power, etc
Level of intensity 1-5 (or something).
This is still a work in progress, so not a lot of detailed planning has gone into the sorts of choices, or the visual elements to drive them, but the principle of being able to enter seriously useful metadata by hitting four or five buttons in succession, instead of type-type-type, might make it easy enough that people will actually do it!
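At its core, each run of button presses just needs to assemble one time-associated record: the timestamp is captured automatically, and the taps supply the tags. A minimal sketch of that idea (nothing here is a real app; all names are placeholders):

```python
from datetime import datetime

def make_tag(*selections):
    """Assemble one time-associated metadata record from a sequence of
    button presses (person, person, location, topic, intensity...).
    The timestamp is captured automatically, so no typing is required."""
    return {"time": datetime.now().isoformat(timespec="seconds"),
            "tags": list(selections)}

# Touch "Philip", then "Jim", then Jim's cabin seat, then a topic and intensity:
record = make_tag("Philip", "Jim", "Jim's cabin", "navigation", "intensity:3")
print(record["tags"])
```

Five taps, one record, and the time-of-day stamp does the work of linking it to whatever the cameras were rolling on at that moment.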
Also, using Time Associated metadata (that’s my term for this phenomenon) means that it isn’t the work of an Assistant Editor to enter the log notes from the shoot. The app will be on everyone’s iPad (and probably also accessible from any MacBook Pro) so the moment something starts happening, they can capture the metadata. And, if the camera on an iPad gets a little better, be able to immediately turn on the camera and get *some* coverage if one of the primary cameras isn’t rolling.
I know nothing about the production of “Big Brother” but it seems to me the house layout and character icons would seem an obvious place for this type of Time Associated, visual entry philosophy.
Anything to encourage metadata entry: let it be easy and early.