The present and future of post production business and technology | Philip Hodgetts

Archive for July 2012

The latest release of 7toX for Final Cut Pro includes:

• Bug fix for media file paths with “illegal” characters
• Bug fix for subclips in bins
• Bug fix for 25 fps media in XML exported from Premiere Pro
• Bug fix for large numbers in long sequences
• Bug fix for a peculiar XML structure containing text nodes
• Bug fix for missing Reel name metadata

Kaggle’s algorithms show machines are getting too good at judging humans http://t.co/sltVPexI

Regular readers will know I’m fascinated by research and technology that has computers understanding human behavior. My interest is personal, but also professionally I’m interested in how these technologies can be adapted to take some of the more boring work out of some types of production.

The article presents two more data points. I’ll just post the summaries.

An algorithm is no less reliable at scoring essays than the average teacher. 

and

With only 140 characters, data scientists and statisticians can get a strong sense for your personality. That’s fairly worrying, considering that this information could get into the wrong hands.

Make of it what you will.

Although I love Final Cut Pro X overall, there are still ways it frustrates me. Today – disclosure triangles! Final Cut Pro X is incredibly bad at remembering the state of disclosure triangles in list view in the Event Browser.

It forgets the state of the disclosure triangles if you leave an Event and then return to it later. (Should be remembered and open exactly as it was left.) (more…)

RIAA Knows (But Tried To Hide) That Most ‘Unpaid’ Music Acquisition Comes From Offline Swapping http://t.co/g79CtPfy (more…)

Will crowdfunding usurp the studio biz? http://t.co/vY61mDPo Disruption in the financing stages, not just production. Sure sounds good.

Seamless Video Editing – A Look Toward the Future http://t.co/ILZFqHDd

A provocative first paragraph:

A new application being developed by researchers at UC Berkeley and Adobe Systems aims to do just that…helping editors identify the best spot to make a cut based off of audio and visual features of raw footage.  The program can auto-generate seamless transitions to make the cuts visually smooth and undetectable.

Which sounds exciting, until you read later:

This tech seems useful for working with on-camera interviews (with only one subject), but in its current state it doesn’t seem like it would be effective at tackling more complex shooting situations.

So, which is it? Both and neither. Understanding how and why we make edits is complex, but it is/will be doable. Finding the base information on which to apply that algorithm is even harder. But it is inevitable. Certainly not for every type of edit, and not for every project – but an enormous amount of editing is not highly “creative” but somewhat routine.

I have long advocated that this type of technology will be developed and applied. When we were developing First Cuts, the algorithm would produce a result and it would be “off” in some way – simply not what I would have done as an editor. That forced an examination of how I would have made the edit, which then led to needing to quantify why I made the edit there.

That part was not easy, although I am fortunate to have a brain almost equally balanced between left and right – creative and analytic.

In layman’s terms: Spots of the video where there is little audio or on-screen movement are given priority as ideal spots to cut, and are plotted on a “cut suitability” timeline.  If necessary the application will insert natural-looking pauses to bridge two cuts together.  From the product demo (embedded below) it appears that editors can simply delete text from the transcript view and the application will go to work creating a seamless transition.  An additional feature allows for one-click removal of “ums” and repeated words.
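That “cut suitability” idea is simple enough to sketch. Here is a minimal, hypothetical version (my own illustration, not the researchers’ code): score each short window of footage by how little audio energy and on-screen motion it contains, then treat the highest-scoring windows as candidate cut points.

```python
# Hypothetical sketch of a "cut suitability" score: windows with low
# audio energy and low motion are the best places to cut.
def cut_suitability(audio_energy, motion, w_audio=0.5, w_motion=0.5):
    """Return a per-window suitability score in [0, 1].

    audio_energy, motion: lists of per-window measurements,
    each normalized to 0..1 (0 = silent / static).
    """
    return [1.0 - (w_audio * a + w_motion * m)
            for a, m in zip(audio_energy, motion)]

def best_cut_points(scores, threshold=0.8):
    """Indices of windows whose score clears the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

# Example: window 2 is a quiet, nearly static pause, so it scores highest.
scores = cut_suitability([0.9, 0.4, 0.05, 0.7], [0.8, 0.3, 0.1, 0.6])
print(best_cut_points(scores))  # [2]
```

The weights and threshold are pure guesses; the real system presumably learns or tunes these against footage, and adds the transition-generation step on top.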

They could go back one step further. In an interview situation you generally have two voices: breaking an interview up on voice changes, and then on paragraph breaks (which is what this research seems to be doing, adding in the analysis of motion in the video), becomes “trivial” once we get reliable speech transcription.
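The first of those steps – breaking an interview up on voice changes – can be sketched in a few lines, assuming a transcript that tags each utterance with a speaker (a hypothetical data shape, not any real transcription API):

```python
# Hypothetical sketch: split a speaker-tagged transcript into segments
# wherever the voice changes (question vs. answer in an interview).
def split_on_speaker(transcript):
    """transcript: list of (speaker, text) tuples in time order.
    Returns a list of (speaker, joined_text) segments."""
    segments = []
    for speaker, text in transcript:
        if segments and segments[-1][0] == speaker:
            # Same voice continues: merge into the current segment.
            segments[-1] = (speaker, segments[-1][1] + " " + text)
        else:
            # Voice change: start a new segment.
            segments.append((speaker, text))
    return segments

interview = [
    ("interviewer", "How did the project start?"),
    ("subject", "It began as a side experiment."),
    ("subject", "Then it grew from there."),
    ("interviewer", "What surprised you most?"),
]
print(split_on_speaker(interview))
```

The hard part, of course, is getting the reliable speaker-tagged transcription in the first place; once you have it, this segmentation really is trivial.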

Reliable speech transcription is the key to unlocking an enormous amount of metadata-driven tagging/keywording and driving these sorts of automatic assembles. At this stage I see this more as an editor’s tool than for finished projects, although there are some applications in exploring large amounts of video material. (Something I hope to demonstrate by the end of the year using some of the Solar Odyssey footage.)

Should we go down this path? That’s an irrelevant question because, with downward budget pressures dominant in the industry, it’s inevitable. Those who can work smarter – using all the tools at their disposal – will continue to be needed.

And I firmly believe that the emotionally compelling, heart-tugging edit is going to remain beyond the ability of a computer for the balance of my lifetime.

I am fascinated by most technology, particularly how it can be applied to the things I am interested in professionally: the production, distribution and marketing of video/filmic entertainment. Two recent articles have stuck in my mind. The Japanese research quoted in the excerpt and How cold, hard numbers can be used to foretell the battle where researchers are using Wikileaks information, among other sources, to predict where violence will break out. (more…)

Thousands of YouTube partners now make six figures a year http://t.co/mv4tZhPC

While no one’s being paid the super-high bucks of some of the top insiders in “Hollywood”, a six-figure income is well into the middle class. I’d like to see fewer make it really rich, and many more make decent, middle-class incomes doing what they love to do.

Thousands of YouTube partners are making over $100,000 a year, according to Google SVP and Chief Business Officer Nikesh Arora. The number was shared during Google’s second quarter earnings call on Thursday, where Arora pointed to YouTube as an acquisition that has paid off for the company.

That’s up from “hundreds”.  Google has been investing in content and content creators, so it’s not all “cat videos” anymore.

Google has been increasingly investing into programming on YouTube. The site unveiled a premium channel initiative late last year that included a reported $100 million in advances both for YouTube stars and traditional media brands who took their assets to YouTube as a result. And earlier Thursday, it announced that it is rewarding some 1,500 YouTube producers for successes with their channels on the site, with 80 of them receiving a golden play button, and everyone getting $500 gift certificates that can be put toward buying video equipment.

Video Metadata Drives Engagement Rates As Much as 300 Percent, RAMP’s CEO http://t.co/VmdsmcRE Metadata is valuable!!

The presence of tags and transcripts in and accompanying video on the Web can drive engagement rates by anywhere from 40% to 300%, says Tom Wilde, CEO of content optimization platform RAMP. Beet.TV caught up with Wilde for a discussion of the growth of metadata in video and how it’s driving ad opportunities.

In addition to engagement rates, rich metadata can also help with more refined ad targeting, he explains in this video interview. He expects more growth in the availability of metadata in video due to the pending September deadline requiring IP-delivered video to include the same amount of closed captioning as TV, as mandated by Congress in the Twenty-First Century Communications and Video Accessibility Act.

Get with the program now, or get with it later. Whichever way, there’s metadata in your future!

There are many kinds of metadata, and previously I’ve noted six that are important to production. There’s a whole second category of metadata that’s more related to asset management, but they share the common goal of being able to find a specific shot, as quickly as possible. In this post I’m talking about what would be Added Metadata of the six.

For scripted material, this is relatively easy, particularly with tools like QR Slate, Movie Slate, and the like now available. For scripted material you have known assets: Scene, shot, take, character, etc., as well as one or two free form fields like Comment. It is with un-scripted material that we run into problems not solvable with those solutions (at least in their current forms). (more…)
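To make the scripted case concrete, here is a minimal sketch of those “known assets” as a clip-metadata record – my own hypothetical shape, not the schema of QR Slate, Movie Slate, or any other tool:

```python
# Hypothetical clip-metadata record for scripted material: the known,
# structured fields plus a free-form comment field.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ScriptedClipMetadata:
    scene: str
    shot: str
    take: int
    characters: List[str] = field(default_factory=list)
    comment: str = ""  # free-form field

clip = ScriptedClipMetadata(scene="12A", shot="3", take=2,
                            characters=["ANNA"],
                            comment="best performance")
print(clip.scene, clip.shot, clip.take)
```

Un-scripted material is exactly where this structure breaks down: there is no scene or take to fill in, which is why those fields alone can’t solve the problem.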

