The present and future of post production business and technology

Thoughts on the (near) future of Metadata

I am convinced that we will benefit from computer-derived metadata, as I’ve been outlining in the (sadly occasional) series of technology summaries, at some future time – likely earlier than we expect. But that remains some time in the future; in terms of usefulness right now, it offers nothing at all.

That doesn’t mean the demand for metadata will go down. If anything, there is a growing demand for comprehensive metadata from the camera through to any possible future distribution and reuse of all or part of any project. Everything you’ve shot is only an asset if you can find it and use it again (including knowing what rights are associated with it). So, once again I’ve been thinking about the future of metadata for production automation, largely because it’s the only way I think we can hope to produce the Solar Odyssey TV Show, and because I was presenting on the subject in Boston in the middle of February. Both have forced me to think hard about it.

One of the under-promoted features of Adobe’s workflow is the Adobe Story flow of script metadata. These days Adobe Story .astx files can be directly imported, and associated with their shot footage, without going near OnLocation. But it was a demo of OnLocation at Editor’s Lounge by Jacob Rosenberg that set off a bright light in my brain.

In that demo, the workflow was to take the Story .astx file and open it in OnLocation, which broke it down into scenes and shots based on the metadata in the Story file. What particularly interested me is that in the demo the logging took place without any connection between OnLocation and the media files, other than a simple time-of-day timestamp. The metadata was matched to the media later by searching for media files whose recording encompassed that timestamp. Simple and ingenious – and a total fail if there were two takes but the camera kept rolling between them! But in principle the idea is genius.

This is not a workflow Adobe currently promotes, but with the sneak preview of Prelude at the San Francisco Supermeet, I have great hope that something better is coming in future releases.

But that core concept stuck with me: linking metadata to media by simple time-of-day, now that all our (non-tape) source devices have reasonably accurate clocks. For metadata matching I don’t need microsecond accuracy, as I would for synchronizing clips: give or take half a second on a metadata tag isn’t an issue.
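To make that concrete, here’s a minimal sketch of the matching idea. The class and function names are mine, not any shipping product’s: each media file carries a wall-clock start time and duration, and a tag matches any file whose recorded span encompasses it, give or take about half a second.

```python
from datetime import datetime, timedelta

# Hypothetical structures: a media file knows its wall-clock start time and
# duration; a metadata tag carries the time-of-day at which it was entered.
class MediaFile:
    def __init__(self, name, start, duration_seconds):
        self.name = name
        self.start = start
        self.end = start + timedelta(seconds=duration_seconds)

def match_tag_to_media(tag_time, media_files, slop_seconds=0.5):
    """Return every media file whose recorded span encompasses the tag's
    time-of-day, give or take about half a second."""
    slop = timedelta(seconds=slop_seconds)
    return [m for m in media_files
            if m.start - slop <= tag_time <= m.end + slop]

# A tag logged at 14:03:10 lands inside any clip that was rolling at that moment.
clips = [MediaFile("A001_C002.mov", datetime(2012, 3, 14, 14, 2, 30), 120)]
print([m.name for m in match_tag_to_media(datetime(2012, 3, 14, 14, 3, 10), clips)])
```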

So how does that help us get closer to the goal I stated in the excerpt: getting the metadata earlier and making it easier?

Because it can be done without any direct connection to the media, metadata can be gathered “on set,” during shooting (depending on style) or even prepared ahead of time (like a script, with shots and takes broken down, as I outlined above). Instead of notes, enter the metadata electronically and later associate it with the media using that ubiquitous time-of-day timestamp.

For example, on the Solar Odyssey voyage we’ll be doing off-vessel interviews. My plan is to shoot three cameras – two essentially locked off and one with our primary camera person. A putative (as in, not yet built but in the design phase) iPad app (or web app) could have a simple text interface. Enter the subject’s name and location (matched to, or derived from, the GPS), then for each interview answer tap a simple “start answer” button, which prompts selection of a previously entered metadata term or quick entry of a new one, and an “end answer” button. All of it is stored in a database.
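Here’s a rough sketch of what that putative logger’s data model might look like. Everything below is illustrative – the class, field names and example values are assumptions, not a built app:

```python
from datetime import datetime, timezone

# Hypothetical interview logger: each "start answer" / "end answer" pair
# becomes one record keyed by subject, location and a topic term.
class InterviewLog:
    def __init__(self, subject, location):
        self.subject = subject
        self.location = location          # e.g. read from the iPad's GPS
        self.records = []
        self._open = None

    def start_answer(self, topic):
        self._open = {"subject": self.subject, "location": self.location,
                      "topic": topic, "start": datetime.now(timezone.utc)}

    def end_answer(self):
        self._open["end"] = datetime.now(timezone.utc)
        self.records.append(self._open)
        self._open = None

log = InterviewLog("Interview subject", "38.97N, 76.48W")
log.start_answer("solar power budget")
# ... the subject answers ...
log.end_answer()
```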

Take the records from the database and, in some (again putative) application, merge the shoot log with the media files and send it to the NLE (in our case it will be Final Cut Pro X) with all three cameras trimmed to the answers and the metadata entered, ready to be conformed into multiclips. (If the time-of-day timestamps were accurate enough we could make the multiclips directly, but I suspect we’ll be better off using FCP X’s own make-multiclip feature or PluralEyes.)
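As a sketch of that merge step, reusing the structures from the earlier sketches and assuming each camera clip knows its own wall-clock start and end (the names are hypothetical), the output here is just a list of trims that some exporter would turn into whatever interchange format the NLE accepts:

```python
# Turn each logged answer into per-camera in/out points (in seconds from the
# start of each clip), plus the keywords to carry across as metadata.
def answer_to_trims(answer, camera_clips):
    trims = []
    for clip in camera_clips:
        in_sec = (answer["start"] - clip.start).total_seconds()
        out_sec = (answer["end"] - clip.start).total_seconds()
        clip_len = (clip.end - clip.start).total_seconds()
        if out_sec > 0 and in_sec < clip_len:       # answer overlaps this clip
            trims.append({"clip": clip.name,
                          "in": max(in_sec, 0.0),
                          "out": min(out_sec, clip_len),
                          "keywords": [answer["subject"], answer["topic"]]})
    return trims
```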

Relatively idle time during the interview is transformed into useful metadata, ready for the editor(s) to start cutting the piece.

After the Boston presentation where I outlined some of these ideas, Richard Akerboom pointed out that something similar was being done by Sportstec with their Coda product. While its primary function uses standard timecode, Richard did some research for me and came back with:

Right now the xml includes a relative time (based on start of tagging session) but also includes a tag for start of session in GMT, so the raw material is there to tag all events in GMT.
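In other words, absolute times are just the session start plus each event’s offset. A trivial sketch of that conversion – the field names and values here are assumptions, not Sportstec’s actual schema:

```python
from datetime import datetime, timedelta, timezone

# If an export gives a session start in GMT plus per-event relative times,
# every event can be placed on the same time-of-day axis as the media.
def absolute_event_times(session_start_gmt, relative_seconds):
    return [session_start_gmt + timedelta(seconds=s) for s in relative_seconds]

start = datetime(2012, 2, 15, 19, 30, 0, tzinfo=timezone.utc)
print(absolute_event_times(start, [12.4, 95.0, 130.2]))
```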

The other idea for making metadata entry easy is one I’ve seen implemented in a number of ways: use a custom visual interface. For us on Solar Odyssey, when we’re on board, the app could change to be more visual: the layout of the boat in plan view, icons for each of the crew and for “visitor”, and context-sensitive pop-up menus for actions.

For example: touch the “Philip” button, then “Jim”, and tap Jim’s cabin seat. The pop-up menu might then give choices like these (with a rough sketch of the resulting record after the list):

Discussed: navigation, food, schedule, power, etc.

Argued: navigation, food, schedule, etc.

Level of intensity: 1–5 (or something).
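Here’s how such a tap sequence could become a single time-stamped record. Again, everything here is illustrative rather than a real app’s API:

```python
from datetime import datetime, timezone

# Four or five taps ("Philip", "Jim", Jim's cabin seat, "Argued", "power")
# compose one record, stamped with the current time-of-day for later matching.
def tap_sequence_to_record(people, location, action, topic, intensity=None):
    return {"time": datetime.now(timezone.utc),
            "people": people,
            "location": location,
            "action": action,
            "topic": topic,
            "intensity": intensity}

record = tap_sequence_to_record(["Philip", "Jim"], "Jim's cabin seat",
                                "Argued", "power", intensity=4)
```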

This is still a work in progress, so not a lot of detailed planning has gone into the sorts of choices, or the visual elements to drive them, but the principle of entering seriously useful metadata by hitting four or five buttons in succession, instead of type-type-type, might make it easy enough that people will actually do it!

Also, using Time Associated metadata (that’s my term for this phenomenon) means that entering the log notes from the shoot is no longer the work of an Assistant Editor. The app will be on everyone’s iPad (and probably also accessible from any MacBook Pro), so the moment something starts happening, anyone can capture the metadata. And, if the camera on an iPad gets a little better, they could immediately turn on the camera and get *some* coverage if one of the primary cameras isn’t rolling.

I know nothing about the production of “Big Brother”, but it seems to me the house layout and character icons would be an obvious fit for this type of Time Associated, visual entry philosophy.

Anything to encourage metadata entry: let it be easy and early.



Comments

7 responses to “Thoughts on the (near) future of Metadata”

  1. kimhill

    Brilliant.

  2. I know there’s an app out, or about to be out, for the iPad that is a clapper app which shows a bar code; you can then use the desktop co-app to match the metadata from the clapper to your video automatically. I love where this technology is going, and am excited to see what y’all will come up with during this project, and what you may release as commercial products as a result of it.

    1. Philip

      I’ll look into that Ben.

  3. Ben Brodbeck

    The app that Mr. Balser is referring to is called QR Slate.

    1. Philip

      Thanks Ben. The link is http://www.qrslate.com/ and it’s definitely “on track”.

  4. Metadata’s nothing new to experienced film editors, who once logged extremely fine detail into master codebooks for the same reasons: to find stuff quickly, either on the job or in the archive. What bothers me is how it’s being slapped in your face by apps like FCPX as toylike aspects such as “collections” and “favorites” after ingest.

    The work sequence many of us trained in adds metadata well before cutting. FCPX in particular won’t let you prelog before transfer because it’s file based; capture is no longer LOG and capture, which was another misstep that flips the bird to the vets – you’re saddled with it after ingest, when normally you want to cut. This is why many of us avoid FCPX. Its architecture is half-baked and dyslexic to boot.

    But much of what you describe is useful.

    1. Philip

      I wish more people had logged in detail, but the reality is that few have, and not often, over the last decade as the industry has exploded in size.