Apple Keynote – Back to the Mac: Implications for Final Cut Pro
There were a lot of features in OS X Lion, and particularly in iMovie 11, that I would love to see inside Final Cut Pro. Some, like QuickView, I already mentioned in my “What should Apple do with Final Cut Pro” article from September.
But today I saw some things I really want in the next version of Final Cut Pro, like scalable waveforms that change color according to their level! Scalable waveforms (which Media Composer already has, and I think PPro CS5 does too) have been a feature request for Final Cut Pro for as far back as I can remember. And now the technology is there in the Apple technology basket. We’ll take that, thanks.
Trailers (semi-automatic completion of a trailer) and Themes fit comfortably with my concept of Templatorization: the use of templates to speed up production. I first mentioned the concept in an April 2005 blog post titled “Can a computer replace an editor?”. It’s still a good read, and remember, that was long before we started actually building that future with our First Cuts/Finisher products. Templatorization is already in Final Cut Studio: Master Templates built in Motion can be used and adapted (with custom images and text) inside Final Cut Pro.
The concept here is similar. We’ll see more Templatorization over time, even if it comes as custom templates for a project, like custom Master Templates.
Plus, as my friend Rob Birnholz tweeted during the presentation when some were complaining that Templatorization would drive hourly rates down even further:
I can now sell CUSTOM Professional video design! (vs. template based ‘insta-video’)
But the one piece of technology I most want to see in the next version of Final Cut Pro is People Finder, because it automates the generation of so much metadata that, combined with Source metadata, it is really going to open up Assisted Editing and take away a lot of the dull work of finding footage and a story. (And Shane, you can hate me now, but more efficient production is always going to be the driver; we can automate the drudgery, not the creativity.)
By analyzing your video for faces, People Finder identifies the parts with people in them and tells you how many are in each scene. It also finds the close-ups, medium shots, or wide angles so it’s easy to grab video clips as you need them.
We get shot-type metadata (CU, Medium, Wide) and identification of the number of people in the shot. That’s powerful metadata. I suspect we won’t get it in the next version of Final Cut Pro, because they’ve got enough to do and can’t do everything at once, but I’d love to see this level of automated metadata generation. Remember too that, as well as the facial recognition technology already shipping in iPhoto and now iMovie, it was announced back in September that Apple had purchased another facial recognition company to improve the accuracy.
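To make the idea concrete: the shot-type metadata above could plausibly be derived from how much of the frame the largest detected face occupies. This is purely an illustrative sketch, not how People Finder actually works; the thresholds and the list-of-face-heights input are my own assumptions.

```python
# Hypothetical sketch: derive shot type (CU / Medium / Wide) and person
# count from face detection results. Thresholds are illustrative guesses.

def classify_shot(face_heights, frame_height):
    """face_heights: pixel heights of detected face bounding boxes.
    frame_height: pixel height of the frame.
    Returns (shot_type, number_of_people)."""
    if not face_heights:
        return ("No people", 0)
    ratio = max(face_heights) / frame_height
    if ratio > 0.4:        # a face fills much of the frame
        shot = "CU"
    elif ratio > 0.15:     # head-and-shoulders range
        shot = "Medium"
    else:                  # faces are small relative to the frame
        shot = "Wide"
    return (shot, len(face_heights))

# One large face reads as a close-up; three small faces as a wide shot.
print(classify_shot([480], 1080))         # ('CU', 1)
print(classify_shot([90, 80, 75], 1080))  # ('Wide', 3)
```

Once every clip carries a (shot type, people count) pair like this, "find me all the close-ups of one person" becomes a metadata query instead of a scrub through footage.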
The holy grail of facial recognition, from my perspective, would be if the software (Final Cut Pro, please) recognized all faces in the footage and grouped instances of the same face together (like Faces in iPhoto). You’d still have to identify each person once, but from there on basic Lower Thirds (person and location) could be automatically generated (location eventually coming from GPS in the camera; we’re not there yet).
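The iPhoto-style grouping described above amounts to clustering: each detected face is reduced to a feature vector, and vectors that are close together are treated as the same person. A minimal sketch, assuming made-up 2D "embeddings" and a hand-picked distance threshold (real systems use learned, high-dimensional embeddings):

```python
import math

# Hypothetical sketch of face grouping: greedy clustering of face feature
# vectors. A new face joins the first group whose representative vector is
# within `threshold`; otherwise it starts a new group.

def group_faces(embeddings, threshold=0.5):
    """Returns a group id for each embedding, in input order."""
    reps = []     # one representative vector per group (the first member)
    labels = []
    for vec in embeddings:
        for gid, rep in enumerate(reps):
            if math.dist(vec, rep) < threshold:
                labels.append(gid)
                break
        else:
            reps.append(vec)
            labels.append(len(reps) - 1)
    return labels

# Faces 0, 1 and 3 are near each other (same person); face 2 is distinct.
faces = [(0.1, 0.2), (0.12, 0.19), (0.9, 0.8), (0.11, 0.21)]
print(group_faces(faces))  # [0, 0, 1, 0]
```

The point of the exercise: once the groups exist, naming group 0 "Jane" once would let a lower third be attached to every clip where that face appears.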
It’s a pity Apple doesn’t have, or license, speech recognition technology. Licensing Nexidia’s speech search would be OK (it’s what powers Get and ScriptSync), but it doesn’t derive metadata the way speech analysis does. Once you have speech as metadata, it makes things like prEdit possible, and ultimately the automatic derivation of keywords.
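To show why speech-as-text metadata matters: even a naive pass over a transcript can surface candidate keywords for a clip. The transcript, stopword list, and frequency-counting approach below are all illustrative assumptions; real keyword derivation would be far more sophisticated.

```python
from collections import Counter

# Illustrative sketch: derive candidate keywords from a clip transcript by
# counting word frequency after dropping common stopwords.

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it",
             "we", "that", "this", "for", "on", "with", "every", "from"}

def keywords(transcript, top_n=3):
    """Return the top_n most frequent non-stopword words in the transcript."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

# A made-up transcript: "glacier" dominates, so it ranks first.
clip_text = ("The glacier is retreating. We measured the glacier every "
             "summer, and the ice loss from the glacier is accelerating.")
print(keywords(clip_text))
```

A clip tagged this way could be found by searching its keywords rather than by scrubbing; that is the step from speech search (Nexidia) to speech-derived metadata.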
And it seems like my five year old ruminations might have been on to something.