IBM Watson cuts trailer for Morgan

Terry Curren pointed me to this example where IBM Watson (one of the Smart APIs I referred to a couple of weeks back) was tasked with determining whether an Artificial Intelligence could “cut” a movie trailer. This is the result, with a very interesting insight at the end into how they did it.

IBM Watson pulled the selects based on action and emotion, but an editor created the trailer from the selects. Still, being able to locate the highlights and determine emotion is a big step forward.

 

AI: Now it’s ‘making movies’

Buried in an article called The iBrain is Here about Apple’s use of Artificial Intelligence across a wide range of products and purposes was this gem:

Machine learning…. It even knows what good filmmaking is, enabling Apple to quickly compile your snapshots and videos into a mini-movie at a touch of a button

At one level this is certainly true, and likely. After all, Greg and I spent a summer analyzing how I made documentary-style edits. It was a fascinating experience for me, working out why “that” was the right place to start b-roll over an interview.

I would then have to turn that analysis into a rule of thumb that Greg could program. This was the basis of the (now gone) First Cuts app. That work will resurface at some point. It’s too valuable not to.
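For illustration only, here’s what turning one such rule of thumb into code might look like. This is a hypothetical sketch, not anything from First Cuts; the function name, parameters, and thresholds are all invented, purely to show how an editing judgment can be expressed as a programmable rule:

```python
# Hypothetical example: encoding an editing rule of thumb as code.
# The thresholds here are invented, not taken from any real app.

def broll_start_seconds(interview_clip_length, topic_established_at):
    """Pick a start point (in seconds) for b-roll over an interview clip.

    interview_clip_length: clip duration in seconds
    topic_established_at: seconds into the clip when the key point lands
    """
    # Rule of thumb: cut away shortly after the key point is made...
    start = topic_established_at + 1.5
    # ...but never later than two-thirds of the way through the clip.
    return min(start, interview_clip_length * 2 / 3)

print(broll_start_seconds(30, 4))   # 5.5
print(broll_start_seconds(12, 10))  # 8.0 (capped at two-thirds of the clip)
```

The point isn’t the specific numbers; it’s that a fuzzy human judgment (“start b-roll once the point is established, but don’t let the talking head run too long”) becomes something a program can apply consistently.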

A little FCP X Project

Mostly I edit Lunch with Philip and Greg, product videos, or the occasional The semiSerious Foodies video. This last week I put together a demo piece for a friend that was much more fun and creative.

It’s a competition piece, so if you’d all like to go to http://indi.com/7fqks and vote for Marlon Braccia, we’d appreciate it.

Edited in FCP X, it used significant amounts of speed change, chroma key, and crop and blur on the background. Those in LA can see it in person, and learn in detail how it was done, at the August 24 meeting of LACPUG.

Do AI and Robotics give us the opportunity to rethink work?

Most of the thinking – the little that’s done – around the effect of Artificial Intelligence and Robotics replacing jobs is somewhat negative, so it was almost a relief to read John Hagel’s perspective that we could use this transition as an opportunity to rethink the nature of work.


What we take for granted

I cut a short 30-minute competition entry for a friend today: a relatively simple single-take green screen over a Pond 5 background she purchased.

Except we used a bunch of technologies that were all non-existent just a few years ago.

Starting with some ProRes files from a Blackmagic Design camera, we:

  • Sped up the talent about 20% with no visible or audible artifacting
  • Keyed out the green background by using the built-in keyer in FCP X at the default settings
  • Repositioned the talent to better fit the background shot
  • Slowed down the background to 66% with no visible artifacting
  • Applied a real-time blended mask and Gaussian blur on the background (over a duplicate, unblurred copy) to simulate depth of field
  • Used the Color Board to reduce the exposure on her face, with a mask so her eyes continued to sparkle

all in real time on a 2015 Retina 5K iMac running Final Cut Pro X.

It wasn’t that long ago that applying a soft edge to a mask, a Gaussian blur, or a chroma key meant a render before playback.

As in the machine learning/AI field, video technologies keep getting better all the time.

“AI” put to Practical Use

Maybe I’m pushing this subject a bit hard, but I really believe we are on the cusp of a wide range of human activities being taken over by smart algorithms, also known as Machine Learning. As well as the examples I’ve already mentioned, I found an article on how an “AI” saved a woman’s life, and how it’s being used to create legal documents for homeless (or about-to-be homeless) people in the UK.


The Problem with Machine Learning

I’ve been talking about machine learning and smart APIs recently, where I think there is great potential to make pre-editing tasks much easier. But they are not without their downsides: the algorithms are ‘trained’ on sample data sets, and if that training set is not truly representative of the whole, the results can go horribly wrong.
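To make the training-set concern concrete, here’s a deliberately toy sketch in Python (nothing to do with any real editing API; the data, “model”, and thresholds are all invented). A one-parameter classifier learned from a representative sample generalizes well; the same learner fed a skewed sample puts its decision boundary in the wrong place:

```python
import random

random.seed(42)

# Toy "population": values 0-99, labeled True when the value is >= 50.
population = [(x, x >= 50) for x in range(100)]

def train_threshold(samples):
    """'Train' a one-parameter model: the midpoint between the class means."""
    pos = [x for x, label in samples if label]
    neg = [x for x, label in samples if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, data):
    """Fraction of examples the threshold classifies correctly."""
    return sum((x >= threshold) == label for x, label in data) / len(data)

# Representative training set: drawn uniformly from the population.
fair = random.sample(population, 30)

# Biased training set: negative examples only from the extreme low end,
# as if nobody ever bothered labeling the borderline cases.
biased = [(x, label) for x, label in population if x < 10 or label]

t_fair = train_threshold(fair)
t_biased = train_threshold(biased)

print(accuracy(t_fair, population))    # close to 1.0
print(accuracy(t_biased, population))  # worse: the boundary is dragged low
```

The biased model isn’t broken code; it faithfully learned from what it was shown. The data it was shown just didn’t look like the world it was then asked to judge.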

Cory Doctorow at Boing Boing uses the Trump campaign as an example of how this can play out in ‘the real world’.