The present and future of post production business and technology | Philip Hodgetts


How small can a high quality production kit go? It depends on the usage, but consider the kit I used for a recent trip to Tasmania to record interviews for a family history project. This is the same kit I discussed in A New Production Haiku recently. I also discussed the audio portion in more detail on Larry Jordan’s Digital Production BuZZ on February 5th. The BuZZ segment is below the LACPUG presentation.

(more…)

Feb 2, 2015

Why I Love Keyword Ranges

One of the Final Cut Pro X features that really resonates with me is Keyword Ranges and, by extension, Keyword Collections. I realize now that this enchantment is because Keyword Ranges are a very pure embodiment of Content Metadata. I also realize that I’d been simulating this approach in other software for as long as I can remember. To understand why, we’ll need to take a little trip to the past.
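But first, to make the idea concrete, here’s a minimal sketch in Python (my own toy model, not FCP X’s actual data structures) of the core concept: a keyword is applied to a span of a clip rather than the whole clip, and a Keyword Collection is simply every range that shares a keyword.

```python
from dataclasses import dataclass

@dataclass
class KeywordRange:
    clip: str      # source clip identifier
    keyword: str   # the content metadata term
    start: float   # seconds into the clip where the range begins
    end: float     # seconds into the clip where the range ends

ranges = [
    KeywordRange("Interview_01.mov", "childhood", 12.0, 84.5),
    KeywordRange("Interview_01.mov", "school days", 60.0, 143.0),  # ranges may overlap
    KeywordRange("B-Roll_03.mov", "childhood", 0.0, 31.2),
]

def keyword_collection(keyword):
    """A Keyword Collection is just the set of ranges sharing a keyword."""
    return [r for r in ranges if r.keyword == keyword]

for r in keyword_collection("childhood"):
    print(f"{r.clip}: {r.start:.1f}s to {r.end:.1f}s")
```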

(more…)

In early 2012 I went through a process of reducing production gear to a minimum, akin to trying to write a Haiku. I’m gearing up for a production trip to Australia to record interviews with my extended family during our quadrennial family reunion. It’s a new production Haiku with different solutions due to the inevitable march of technology, and the needs of this production.

I do not expect this project to ever reach broadcast, but there’s no reason not to have the best quality sound and picture I can, for these recordings should last for posterity. Since I am traveling alone, it was important not to carry too much. Essentially I need a good multicam interview setup with excellent audio quality. I will also shoot b-roll around the family reunion and at some of the family sites.

(more…)

Dec 15, 2014

Advances in Content Recognition

At the current stage of technology development, we are largely limited to adding Content Metadata manually. If we want people described, the scene described, or the action described, we need to add Keywords or Notes to achieve that. I don’t expect that to be the case in the future. Technology from Clarifai and Google gives us clues to that future.
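When that future arrives, machine-detected concepts could flow straight into keyword ranges like the ones sketched earlier. Here is a purely hypothetical illustration in Python; the recognition call is a stand-in, not either vendor’s actual API:

```python
CONFIDENCE_FLOOR = 0.8  # assumed threshold; tune to taste

def recognize_concepts(clip_path):
    # Stand-in for a real recognition service (Clarifai, Google, etc.).
    # Imagined output: (label, confidence, start_sec, end_sec) per detection.
    return [
        ("person: grandmother", 0.93, 4.0, 61.0),
        ("scene: kitchen", 0.88, 0.0, 75.0),
        ("action: cooking", 0.71, 10.0, 42.0),
    ]

def auto_keywords(clip_path):
    """Turn confident machine detections into keyword-range tuples."""
    return [(clip_path, label, start, end)
            for label, confidence, start, end in recognize_concepts(clip_path)
            if confidence >= CONFIDENCE_FLOOR]

print(auto_keywords("Interview_01.mov"))  # the cooking tag falls below the floor
```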

(more…)

Dec 2, 2014

Final Cut Pro X 10.1.4

Out of the blue, Apple announces Final Cut Pro X 10.1.4, which includes some key stability improvements. There is also a Pro Video Formats 2.0 software update, which provides native support for importing, editing, and exporting MXF files with Final Cut Pro X. While FCP X already supported import of MXF files from video cameras, this update extends the format support to a broader range of files and workflows.

So, I guess we know what happened to Hamburg’s MXF technology!
Here is a summary of what’s new in version 10.1.4:
- Native MXF import, edit, and export with Pro Video Formats 2.0 software update (also works with Motion)
- Option to export AVC-Intra MXF files
- Fixes issues with automatic library backups
- Fixes a problem where clips with certain frame rates from Canon and Sanyo cameras would not import properly 
- Resolves issues that could interrupt long imports when App Nap is enabled
- Stabilization and Rolling Shutter reduction works correctly with 240fps video
For more information about the new MXF support, you can view the following article: http://support.apple.com/en-us/HT6423
Hard to categorize this as either a feature or maintenance release. I’ll go for feature release simply because of the native MXF support and the export to AVC-Intra MXF files. Neither will affect me, but they are features that are very important to the higher end of the editing market.

Jon Chappell of Digital Rebellion has noted that the support for MXF is much wider than just the Pro Apps. What is interesting is that the MXF components seem to be QuickTime based rather than AVFoundation based, probably for historic reasons.

Nov 22, 2014

Useful Speech-to-Text is hard!

I was saddened, but not really surprised, by this week’s announcement that Adobe were pulling Speech-to-Text transcription from Premiere Pro, Prelude and AME. As Al Mooney says in the blog post:

Today, after many years of work, we believe, and users confirm, that the Speech to Text implementation does not provide the experience expected by Premiere Pro users.

I am saddened to see this feature go. Even though the actual speech-to-text engine was somewhat hit or miss, there was real benefit in the ability to import a transcript (or script) and lock the media to the script. So it’s probably worth keeping the current version of Premiere (or one of the other apps) to retain the synching function, as the apps will continue to support the metadata if it’s in the file.

Coincidentally, we recently had a feature request from a user wanting a transcription-based workflow in Final Cut Pro X. When questioned on how he’d like it to work, he (unintentionally) described the workflow in Premiere Pro!

In fact, I’d almost implore Adobe to keep the ability to import a transcript and align it to the media using a speech analysis engine. That way the industry would have an alternative to Avid’s ScriptSync auto-alignment tools (previously powered by Nexidia, and currently unavailable in Media Composer). The ability to search hundreds of media files by word-based content, via their transcripts, is extremely powerful for documentary filmmakers.
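The mechanics of that search don’t need to be exotic. A minimal sketch in Python, assuming each transcript has been aligned so that every word carries a time offset (the data layout here is mine, not Premiere’s):

```python
# Each transcript is a list of (word, start_seconds) pairs produced by
# aligning the transcript text to the media with a speech analysis engine.
transcripts = {
    "Interview_01.mov": [("the", 0.2), ("lighthouse", 0.5), ("keeper", 1.1)],
    "Interview_02.mov": [("near", 3.0), ("the", 3.3), ("lighthouse", 3.6)],
}

def find_word(word):
    """Return (clip, seconds) for every spoken instance of a word."""
    word = word.lower()
    return [(clip, t) for clip, words in transcripts.items()
            for w, t in words if w.lower() == word]

for clip, t in find_word("lighthouse"):
    print(f"{clip} @ {t:.1f}s")  # cue straight to the spoken word
```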

And yes, there is the Nexidia-powered Boris Soundbite, but there is one problem with this waveform-matching approach: it produces no content metadata, nor anything (like text) from which we can derive content metadata.

Nov 15, 2014

Metadata and Organization

This week I sat down with Larry Jordan and Michael Horton and talked – what else – metadata: what it is, why we need it, how we get it, and how we use it. It was a good interview and probably my clearest explanation of what metadata is.

You can hear the interview here:

http://www.digitalproductionbuzz.com/BuZZ_Audio/Buzz_141113_Hodgetts.mp3

and read the transcript, courtesy of Take 1 Transcription, at:

http://www.digitalproductionbuzz.com/2014/11/transcript-digital-production-buzz-november-13-2014/#.VGeUCoe287Q

Nov 14, 2014

How Useful is Automated Multicam Editing?

Red Shark News reports on Disney Research’s latest work:

Researchers working for the Mouse have developed a groundbreaking program that delivers automated edits from multi-camera footage based on cinematic criteria.

When you read how they’ve achieved it, I think you’ll agree it’s impressive, and very, very clever.

The system works by approximating the 3D space of the cameras in relation to each other. The algorithm determines the “3D joint attention,” or the likely center of activity, through an on-the-fly analysis of the multiple camera views. Based on this information, the algorithm additionally takes into account a set of cinematic preferences, such as adherence to the 180 degree rule, avoidance of jump cuts, varying shot size and zoom, maintaining minimum and maximum shot lengths, and cutting on action. The result is a very passable, almost human edit.
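As a thought experiment, the angle selection step can be caricatured as cost minimization: score every camera against a few cinematic preferences and take the cheapest. A toy sketch in Python with invented weights and features (not Disney’s actual algorithm):

```python
MIN_SHOT, MAX_SHOT = 2.0, 12.0  # assumed shot-length bounds, in seconds

def cost(cam, prev_cam, shot_len, attention, crosses_line):
    """Lower is better. attention[cam] stands in for the '3D joint attention'."""
    c = 1.0 - attention[cam]          # favor cameras looking at the action
    if cam == prev_cam and shot_len > MAX_SHOT:
        c += 2.0                      # overlong shot: encourage a cut
    if cam != prev_cam:
        if shot_len < MIN_SHOT:
            c += 2.0                  # penalize cutting too soon
        if crosses_line[cam]:
            c += 3.0                  # penalize breaking the 180 degree rule
    return c

def pick_angle(cams, prev_cam, shot_len, attention, crosses_line):
    return min(cams, key=lambda cam: cost(cam, prev_cam, shot_len,
                                          attention, crosses_line))

# Example: camera B sees the action best and doesn't cross the line.
print(pick_angle(["A", "B", "C"], prev_cam="A", shot_len=9.5,
                 attention={"A": 0.4, "B": 0.9, "C": 0.6},
                 crosses_line={"A": False, "B": False, "C": True}))
```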

Perhaps it’s the very nature of research, but I’m not sure of the practical application. Maybe that’s the point of pure research.

Assuming the technology delivers, it’s rare that we want to take a multicam shoot and do a single, switched playback version – “live switching” after the fact, if you will. At least in my experience, the edit not only needs to switch multicam angles, but also to remove dross, tighten the presentation, add in additional b-roll, and so on.

More often than not, my angle cuts are directed more by the edit I want than by a desire to simply pick the best shot at the time.

That said, this type of research is indicative of what can be done (and therefore almost certainly will be done): combine a good multicam edit with content metadata and perhaps you’d have a decent first pass that could be built on, finished and polished by a skilled editor. The point, as Larry Jordan puts it, is:

How do you save time every step of the production process, so that you’ve got the time that you need to make your films to your satisfaction?

Ultimately, the commercial versions of these types of technologies should be seen as tools editors can use to make more time for their real job: finessing, polishing and finishing the project; bringing it the heart that makes the human connection in storytelling.

Once upon a time it was easy to differentiate between film and TV production: film was shot on film, TV was shot electronically. SAG looked after the interests of screen actors (film) while AFTRA looked after the interests of television actors. That the two actors’ unions have merged is indicative of the changes in production technology.

As is noted in an article at Digital Trends, there is almost no difference between the technologies used in the two styles of production, so what are the differences? It comes down to two things, which are really the same thing.

(more…)

Over on IndieGoGo there’s a project for MOX – an open source mezzanine codec for (mostly) post production workflows and archiving. The obvious advantage over existing codecs like ProRes, DNxHD and CineForm is that MOX will be open source, so there is significantly reduced risk that the codec might go away in the future, or stop being supported.

Technically the project looks reasonable and feasible. There is a small, but significant, group of people who worry that support for the current codecs may go away in the future. There’s no real evidence for this, other than that Apple has deprecated old, inefficient and obsolete codecs by not bringing them forward to AVFoundation.

I have more concerns about the long term of an open source project. History shows that many projects start strong, but ultimately it comes down to a small group of people (or, in MOX’s case, one person) doing all the work, and inevitably life’s circumstances intervene.

MOX is not a bad idea. I just doubt that it will gain and sustain the momentum it would need.
