As you probably all know, I have two day jobs, heading Intelligent Assistance Software and Lumberjack System. We’re very proud of the work we’ve done through both companies. We make a decent income from them, for sure, but what makes us particularly happy is when our tools get people’s work done faster. They get to go home to their families earlier, and production has less drudgery.
So it pleases us greatly when that gets recognized, as it did this trip.
In this latest episode of The Terence and Philip Show, Terry and I discuss metadata, my citizenship, smart APIs, Artificial Intelligence and more.
The extensive article by Steven Levy – The iBrain is Here – is a fascinating read on how Apple are using Machine Learning, neural networks and Artificial Intelligence across product lines. It’s well worth the time to read through, but this quote from Phil Schiller stood out:
“We use these techniques to do the things we have always wanted to do, better than we’ve been able to do,” says Schiller. “And on new things we haven’t been able to do. It’s a technique that will ultimately be a very Apple way of doing things as it evolves inside Apple and in the ways we make products.”
How could all this align with editing? Speech-to-text; keyword extraction (just like Magic Keywords in Lumberjack System); sentiment extraction; image recognition; facial detection and recognition; speech-controlled editing (if anyone really wants that), and the list goes on.
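To make one of those concrete: keyword extraction can, in its crudest form, be sketched as frequency counting over stop-word-filtered text. This toy Python example is my own illustration (the transcript and stop-word list are invented), not how Lumberjack’s Magic Keywords actually works:

```python
import re
from collections import Counter

# A tiny, illustrative stop-word list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "on", "is",
             "it", "we", "that", "this", "for", "with", "was", "were", "so",
             "at", "after"}

def extract_keywords(transcript, top_n=3):
    """Return the top_n most frequent non-stop-word terms in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

transcript = ("We shot the interview on the beach at sunrise. "
              "The beach light was perfect, and the interview ran long, "
              "so we shot extra beach b-roll after the interview.")
print(extract_keywords(transcript))  # → ['interview', 'beach', 'shot']
```

Production keyword extraction layers far more on top of this (stemming, phrase detection, statistical weighting like TF-IDF), but the basic idea of surfacing salient terms from a transcript is the same.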
I’d like to believe the Pro Apps Team are working on this.
UPDATE: Ruslan Salakhutdinov is Apple’s first Director of AI.
It’s a competition piece, so if you’d all like to go to http://indi.com/7fqks and vote for Marlon Braccia, we’d appreciate it.
I edited it in FCP X, using significant amounts of speed change, chroma key, crop and blur on the background. Those in LA can see it in person, and learn how it was done in detail, at the August 24 meeting of LACPUG.
When I discovered I could do in two keystrokes what took 9 mouse clicks and keystrokes in Soundtrack Pro, I never looked back, and I now edit all my audio-only projects in FCP X.
I got together with Marcelo Lewin of DigitalMedia Pros and explained how I do it.
Most of the thinking – the little that’s done – around the effect of Artificial Intelligence and Robotics replacing jobs is somewhat negative, so it was almost a relief to read John Hagel’s perspective that we could use this transition as an opportunity to rethink the nature of work.
I cut a short 30-minute competition entry for a friend today. A relatively simple single-take green screen over a Pond5 background she purchased.
Except we used a bunch of technologies that were all non-existent just a few years ago.
Starting with some Blackmagic Design ProRes files, we:
- Sped up the talent about 20% with no visible or audible artifacting
- Keyed out the green background by using the built-in keyer in FCP X at the default settings
- Repositioned the talent to better fit the background shot
- Slowed down the background to 66% with no visible artifacting
- Applied a real-time blended mask and Gaussian blur on the background (over a duplicate, non-blurred copy) to simulate depth of field
- Used the Color Board to reduce the exposure on her face, while using a mask so her eyes continued to sparkle
all in real time on a 2015 Retina 5K iMac running Final Cut Pro X.
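The two speed changes above are easy to sanity-check with a little arithmetic: a constant retime scales a clip’s timeline duration by the inverse of its playback speed. A quick sketch (the 60-second source duration is made up purely for illustration):

```python
def retimed_duration(source_seconds, speed_percent):
    """Timeline duration of a clip after a constant-speed retime.

    A clip played faster than 100% occupies less timeline time;
    slower than 100% occupies more.
    """
    return source_seconds * 100.0 / speed_percent

# Talent sped up about 20%, i.e. played at 120% speed:
print(retimed_duration(60, 120))            # → 50.0 seconds on the timeline

# Background slowed to 66% speed:
print(round(retimed_duration(60, 66), 1))   # → 90.9 seconds on the timeline
```

The interesting part isn’t the arithmetic, of course; it’s that the NLE now interpolates the in-between frames well enough that neither retime shows visible artifacting.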
It wasn’t that long ago that applying a soft edge to a mask, a Gaussian blur, or a chroma key meant a render before playback.
As in the machine learning/AI field, video technologies keep getting better all the time.
Maybe I’m pushing this subject a bit hard, but I really believe we are on the cusp of a wide range of human activities being taken over by smart algorithms, also known as Machine Learning. As well as the examples I’ve already mentioned, I found an article on how an “AI” saved a woman’s life, and how it’s being used to create legal documents for homeless (or about-to-be homeless) people in the UK.
I’ve been talking about machine learning and smart APIs recently, where I think there is great potential to make pre-editing tasks much easier. But they are not without their downsides. They are built on sample data sets used to ‘train’ the algorithm. If that training set is not truly representative of the whole data set, the results can go horribly wrong.
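To show that failure mode in miniature, here’s a hand-rolled nearest-centroid classifier (plain Python, no ML library; the classes, feature values and the skewed sample are all invented for illustration). Trained on a sample that under-represents one class, it flips its decision on a borderline example:

```python
def centroid(values):
    """Mean of a list of 1-D feature values."""
    return sum(values) / len(values)

def train(samples):
    """samples: dict of label -> list of feature values.
    'Training' here is just computing each class's centroid."""
    return {label: centroid(vals) for label, vals in samples.items()}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Representative training data: class A clusters near 1.0, class B near 3.0
balanced = train({"A": [0.8, 1.0, 1.2], "B": [2.8, 3.0, 3.2]})

# Skewed training data: we only happened to sample unusually low B
# examples, so B's learned centroid drifts into A's territory.
skewed = train({"A": [0.8, 1.0, 1.2], "B": [1.6, 1.8]})

x = 1.4  # a borderline example
print(predict(balanced, x))  # → 'A'
print(predict(skewed, x))    # → 'B' — the biased sample flips the decision
```

The model itself is identical in both cases; only the sample changed, which is exactly why an unrepresentative training set fails silently: nothing in the algorithm signals that its view of class B is wrong.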
Cory Doctorow at Boing Boing uses the Trump campaign as an example of how this can play out in ‘the real world’.
I recently commented on the importance of metadata for rights management during distribution. While cleaning my email inbox I revisited a story from late last year, on how over-the-top content providers (generally niche) can use metadata from social media and other sources to help grow their audiences.