Red Shark News reports on what Disney Research has developed:
Researchers working for the Mouse have developed a groundbreaking program that delivers automated edits from multi-camera footage based on cinematic criteria.
When you read how they’ve achieved it, I think it’s impressive, and very, very clever.
The system works by approximating the 3D positions of the cameras in relation to each other. The algorithm determines the "3D joint attention," or the likely center of activity, through an on-the-fly analysis of the multiple camera views. Based on this information, the algorithm additionally takes into account a set of cinematic preferences, such as adherence to the 180-degree rule, avoidance of jump cuts, varying shot size and zoom, maintaining minimum and maximum shot lengths, and cutting on action. The result is a very passable, almost human edit.
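As a rough illustration of how a rule-based shot selector of this kind could work, here is a minimal sketch. The function names, thresholds and weighting are my own assumptions for illustration, not Disney Research's published method:

```python
MIN_SHOT_LEN = 2.0   # seconds; assumed minimum shot length
MAX_SHOT_LEN = 8.0   # seconds; assumed maximum shot length

def violates_180_rule(prev_cam, next_cam, camera_sides):
    """Cutting between cameras on opposite sides of the action line
    flips screen direction (breaks the 180-degree rule)."""
    return camera_sides[prev_cam] != camera_sides[next_cam]

def is_jump_cut(prev_cam, next_cam, shot_sizes):
    """A cut to a different camera with the same framing reads as a jump cut."""
    return prev_cam != next_cam and shot_sizes[prev_cam] == shot_sizes[next_cam]

def choose_next_camera(prev_cam, shot_len, attention_scores,
                       camera_sides, shot_sizes):
    """Pick the camera that best covers the current center of attention,
    subject to the cinematic constraints above."""
    if shot_len < MIN_SHOT_LEN:
        return prev_cam  # too soon to cut
    candidates = []
    for cam, score in attention_scores.items():
        if cam == prev_cam:
            continue
        if violates_180_rule(prev_cam, cam, camera_sides):
            continue
        if is_jump_cut(prev_cam, cam, shot_sizes):
            continue
        candidates.append((score, cam))
    if not candidates:
        return prev_cam
    best_score, best_cam = max(candidates)
    # Cut when the shot has run long, or when another camera covers
    # the action clearly better than the current one.
    if shot_len >= MAX_SHOT_LEN or best_score > attention_scores[prev_cam] * 1.5:
        return best_cam
    return prev_cam
```

Run per tick of the timeline, a loop like this enforces minimum/maximum shot lengths and the 180-degree rule while still chasing the "joint attention" score.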
Perhaps it’s the very nature of research, but I’m not sure of the practical application. Maybe that’s the point of pure research.
Assuming the technology delivers, it's rare that we want to take a multicam shoot and do a single, switched playback version. "Live switching" after the fact, if you will. At least in my experience, the edit not only needs to switch multicam angles, but to remove dross, tighten the presentation, add in additional b-roll, and so on.
More often than not, my angle cuts are directed more by the edit I want than by a desire to simply pick the best shot at the time.
That said, this type of research is indicative of what can be done (and therefore almost certainly will be done): combine a good multicam edit with content metadata and perhaps you'd have a decent first pass that could be built on, finished and polished by a skilled editor. The point, as Larry Jordan puts it, is:
How do you save time at every step of the production process, so that you've got the time you need to make your films to your satisfaction?
Ultimately the commercial versions of these types of technologies should be seen as tools editors can use to make more time for their real job: finessing, polishing and finishing the project; bringing to it the heart that makes the human connection in storytelling.
Over on IndieGoGo there's a project for MOX – an open source mezzanine codec for (mostly) postproduction workflows and archiving. The obvious advantage over existing codecs like ProRes, DNxHD and Cineform is that MOX will be open source, so there is significantly reduced risk that the codec might go away in the future, or stop being supported.
Technically the project looks reasonable and feasible. There is a small, but significant, group of people who worry that support for the current codecs may go away in the future. There’s no real evidence for this, other than that Apple has deprecated old, inefficient and obsolete codecs by not bringing them forward to AVFoundation.
I have more concerns for the long term with an open source project. History shows that many projects start strong, but ultimately it comes down to a small group of people (or one in MOX’s case) doing all the work, and inevitably life’s circumstances intervene.
MOX is not a bad idea. I just doubt that it will gain and sustain the momentum it would need.
A new show in which we discuss 4K. http://www.theterenceandphilipshow.com/?p=546
Apple's ProRes family is becoming one of the most common formats in production and postproduction, but how much do you really know about this codec? Which version is best for your needs?
In this free webinar I examine Apple's ProRes codec inside and out. Content includes:
- Introducing the ProRes family
- RGBA vs YUV – what does it mean?
- Lossless vs Visually Lossless
- Using ProRes: a codec by codec guide.
Check out the trailer and register free.
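On the RGBA vs YUV point: the distinction is between coding pixels as red/green/blue(/alpha) samples versus as luma plus two color-difference channels, which is what ProRes stores internally. As a minimal sketch of that conversion (this uses the standard BT.709 coefficients, not material from the webinar):

```python
def rgb_to_ycbcr_709(r, g, b):
    """Convert normalized (0..1) R'G'B' samples to Y'CbCr using the
    BT.709 coefficients. Cb and Cr come out in the range -0.5..0.5."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # luma: weighted sum of RGB
    cb = (b - y) / 1.8556                     # blue-difference chroma
    cr = (r - y) / 1.5748                     # red-difference chroma
    return y, cb, cr
```

Because the eye is less sensitive to chroma than to luma, the Cb/Cr channels can be subsampled (4:2:2) or more heavily compressed than Y' with little visible loss, which is why "visually lossless" codecs work in this color space.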
I frequently find myself evolving my position on technology as new information comes to light, or as reaching a certain point in my thinking opens up another perspective. As my email sig line used to say, "Above all, I reserve the right to be wrong."
One example would be the use of 4K, another is the development of Lumberjack System.
I had time to do some export testing from Premiere Pro CC and Final Cut Pro X 10.1. Definite proof that the second GPU is being used, and worth it!
There has been some discussion – and a little panic – as the news has leaked out from developers that "QuickTime is deprecated". What does that mean, and what effect will it have on video professionals? When an OS API (Application Programming Interface) is deprecated, developers are warned not to write any new code using that API, because at some future (usually unspecified) time, the API will go away and the code won't run.
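To illustrate what deprecation looks like from the developer's side (a Python sketch, not the actual QuickTime API): the call keeps working, but the toolchain warns callers to migrate before it is removed.

```python
import warnings

def old_quicktime_call():
    """Stand-in for a deprecated API: it still returns a result,
    but emits a DeprecationWarning telling callers to move on."""
    warnings.warn(
        "old_quicktime_call is deprecated; use the modern replacement instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return "result"
```

Shipping applications keep running during the deprecation period; the risk is to anyone who ignores the warnings until the API is finally removed.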
Buried in John Siracusa’s excellent review of OS X Mavericks is this little gem:
Modern Macs with integrated GPUs get some nice improvements in Mavericks. Any Mac with Intel’s HD4000 graphics or better can now run OpenCL on the integrated GPU in addition to the CPU and any discrete GPU.
It's the phrase "in addition to the CPU and any discrete GPU" that makes it special! With OpenCL increasingly taking up the load of general compute work previously forced onto the CPU, the more we can take advantage of the GPU power already installed, the better.
I have conflicting thoughts about 4K for production and distribution. At one level I'm convinced it's being pushed on us by equipment manufacturers when there is no real demand; at another, I know from experience that there are some non-obvious advantages to 4K. But one thing is clear: the push to 4K is not about a push to improved quality.
After Terry Curren's round-up of last year's Hollywood Post Alliance Retreat I decided I should attend this year. While I was working on marketing for Lumberjack – our real time location logging tool – I got an email from the HPA offering spaces in the demo room during the retreat. It was immediately obvious that this was the time and place to reveal what we've been working on for the last 8-9 months.