The present and future of post production business and technology | Philip Hodgetts

The extensive article by Steven Levy – The iBrain is Here – is a fascinating read on how Apple are using Machine Learning, neural networks and Artificial Intelligence across product lines. It’s well worth the time to read through, but this quote from Phil Schiller stood out:

“We use these techniques to do the things we have always wanted to do, better than we’ve been able to do,” says Schiller. “And on new things we haven’t been able to do. It’s a technique that will ultimately be a very Apple way of doing things as it evolves inside Apple and in the ways we make products.”

How could all this be applied to editing? Speech-to-text; keyword extraction (just like Magic Keywords in Lumberjack System); sentiment extraction; image recognition; facial detection and recognition; speech-controlled editing (if anyone really wants that), and the list goes on.
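At its simplest, keyword extraction from a transcript doesn’t even need machine learning. Here’s a toy frequency-based sketch – the transcript, stop list and function are all invented for illustration, and real tools like Magic Keywords are far more sophisticated:

```python
import re
from collections import Counter

# A minimal stop list; production systems use much larger ones.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "we",
              "is", "it", "that", "on", "for", "was", "with", "this"}

def extract_keywords(transcript, top_n=3):
    """Return the top_n most frequent words, ignoring stop words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

# An invented log line; repeated content words surface as keywords.
transcript = ("We shot the interview on the beach, and the beach "
              "light made the interview glow. The beach b-roll "
              "covers the interview cutaways.")
print(extract_keywords(transcript))  # "interview" and "beach" rank on top
```

The interesting part for editors isn’t the counting – it’s that once speech-to-text runs, every clip becomes searchable text that simple code like this can index.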

I’d like to believe the Pro Apps Team are working on this.

Buried in an article called The iBrain is Here about Apple’s use of Artificial Intelligence across a wide range of products and purposes was this gem:

Machine learning… It even knows what good filmmaking is, enabling Apple to quickly compile your snapshots and videos into a mini-movie at a touch of a button.

At one level this is certainly plausible, and likely true. After all, Greg and I spent a summer analyzing how I made documentary-style edits. It was a fascinating experience for me, working out why “that” was the right place to start b-roll over an interview.

I would then turn that analysis into a rule of thumb that Greg could program. This was the basis of the (now gone) First Cuts app. That work will resurface at some point. It’s too valuable not to.

Mostly I edit Lunch with Philip and Greg, product videos, or the occasional episode of The semiSerious Foodies. This last week I put together a demo piece for a friend that was much more fun and creative.

It’s a competition piece, so if you’d all like to go to http://indi.com/7fqks and vote for Marlon Braccia, we’d appreciate it.

Editing in FCP X, I used significant amounts of speed change, chroma key, crop and blur on the background. Those in LA can see it in person, and learn in detail how it was done, at the August 24 meeting of LACPUG.

Bloomberg reported yesterday on a Hedge Fund ‘Robot’ that “outsmarted” its human “master”. The quotation marks are all mine: because it’s self-learning, it doesn’t really have a master, but rather someone who created it.

Still, the performance in the quoted instance is quite impressive. It’s currently in charge of about $35 million in investment.

August 22, 2016

Editing Audio in FCP X

When I discovered I could do in two keystrokes what took nine mouse clicks and keystrokes in Soundtrack Pro, I never looked back, and now edit all my audio-only projects in FCP X.

I got together with Marcelo Lewin of DigitalMedia Pros and explained how I do it.


Most of the thinking – the little that’s done – around the effect of Artificial Intelligence and Robotics replacing jobs is somewhat negative, so it was almost a relief to read John Hagel’s perspective that we could use this transition as an opportunity to rethink the nature of work.


I cut a short 30-minute competition entry for a friend today: a relatively simple single-take green screen over a Pond5 background she purchased.

Except we used a bunch of technologies that were all non-existent just a few years ago.

Starting with some Blackmagic Design ProRes files, we:

  • Sped up the talent about 20% with no visible or audible artifacting
  • Keyed out the green background by using the built-in keyer in FCP X at the default settings
  • Repositioned the talent to better fit the background shot
  • Slowed down the background to 66% with no visible artifacting
  • Applied a real-time blended mask and Gaussian blur on the background (over a duplicate, unblurred copy) to simulate depth of field
  • Used the Color Board to reduce the exposure on her face, while using a mask so her eyes continued to sparkle

all in real time on a 2015 Retina 5K iMac and Final Cut Pro X.

It wasn’t that long ago that applying a soft edge to a mask, any Gaussian blur, or any chroma key meant a render before playback.

As in the machine learning/AI field, video technologies also keep getting better all the time.

Maybe I’m pushing this subject a bit hard, but I really believe we are on the cusp of a wide range of human activities being taken over by smart algorithms, also known as Machine Learning. As well as the examples I’ve already mentioned, I found an article on how an “AI” saved a woman’s life, and how one is being used to create legal documents for the homeless (or about to be homeless) in the UK.


I’ve been talking about machine learning and smart APIs recently, where I think there is great potential to make pre-editing tasks much easier. But they are not without their downside. They are built on sample data sets used to ‘train’ the algorithm. If that training set is not truly representative of the whole data set, the results can go horribly wrong.

Cory Doctorow at Boing Boing uses the Trump campaign as an example of how this can play out in ‘the real world’.
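This failure mode is easy to demonstrate with a toy example. In the sketch below, every number and name is invented; the point is simply that a model trained on a skewed sample confidently generalizes the skew to the whole population:

```python
from collections import Counter

# True population: half the items are "A", half are "B".
population = ["A"] * 500 + ["B"] * 500

# Biased sampling: we happened to collect mostly "A" examples,
# so the training set is not representative of the population.
training_set = ["A"] * 180 + ["B"] * 20

def train_majority(sample):
    """The simplest possible 'model': always predict the
    majority label seen during training."""
    return Counter(sample).most_common(1)[0][0]

model_prediction = train_majority(training_set)
print(model_prediction)   # "A" -- the skewed sample dominates

# Against the real population, that prediction is right only half the time.
accuracy = sum(x == model_prediction for x in population) / len(population)
print(accuracy)           # 0.5 -- no better than a coin flip
```

Real systems fail in subtler ways than a majority-class predictor, but the mechanism is the same: the algorithm faithfully learns whatever its sample actually contains.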

I recently commented on the importance of metadata for rights management during distribution. While cleaning my email inbox I revisited a story from late last year, on how over-the-top content providers (generally niche) can use metadata from social media and other sources to help grow their audiences.


