The present and future of post production business and technology | Philip Hodgetts

Category: Machine Learning

I was privileged to be invited to 2018 HPA TR-X: Everything You Thought You Knew About Artificial Intelligence in M&E and What You Didn’t Know You Didn’t, on a panel titled AI Vision Panel: What Will Happen and When?

It was an interesting topic, although our panel got quite pessimistic about the future of society if we’re not very careful with how AI/Machine Learning comes into our lives. But that’s a blog post for another time.

What has really set me thinking was a comment by John Motz, CTO of Gray Meta, that his 12-year-old and her friends spend more time creating media for each other than simply consuming it.


A simple indicator of the growing influence and impact of Artificial Intelligence and Machine Learning is the Hollywood Professional Association’s (part of SMPTE) inclusion of the topic in its annual Retreat.

For the 2016 Retreat I proposed a presentation on AI & ML that wasn’t deemed relevant at that time.

For the 2017 Retreat I made pretty much the same proposal, which led to a panel discussion at the Retreat that I was pleased to contribute to.

For the 2018 Retreat, the half-day tech focus the day before the main conference is:

A half day of targeted panels, speakers and interaction, this year focusing on one of the most important topics our industry faces, Artificial Intelligence and Machine Learning.

Two years ago it wasn’t relevant.

A year ago it was one panel in three days of conference.

This year it’s the tech focus day ahead of the conference!

I’ll be back on a panel this year – Novel AI Implementations: Real World AI Case Studies – and hope to see you there. The HPA Retreat is the closest thing our industry has to TED talks, and it discusses the topics that will be crucial to film and television production in two to three years. Get a heads-up before everyone else.

Well, judging from just a couple of days of reading and email newsletters, there is quite a focus.

MESA Alliance quotes Deluxe Entertainment Services Group chief product officer Andy Shenkler as saying:

“AI is obviously playing a fairly broad role, especially with the areas that we at Deluxe are working on,” he told the Media & Entertainment Services Alliance (MESA) in a phone interview. That includes “everything from the post-production creation process, localization” around advanced language detection and auto translation – “and then even down into the distribution side of things,” he said, noting the latter was “probably the least well-known and discussed” part of the equation.

That article goes on to talk about whose technologies Deluxe uses and how they use them to assign metadata to incoming assets. Speaking of content metadata (in this case about finished content, not for use in production), Roz Ho, senior vice president and general manager, consumer and metadata at TIVO, writes in a guest blog at Multichannel News:

Not only does machine learning help companies keep up with the tsunami of content, it can better enrich metadata and enable distributors to get the right entertainment in front of the right viewers at the right time.

Machine learning takes metadata beyond cast, title and descriptions, and enables content to be enhanced with many new data descriptors such as keywords, dynamic popularity ratings, and moods, to name a few.

Roz finishes with a short dissertation on how these machines, and the people enhanced by them, will set the direction we take in the future.

And out of CES some headlines:

CES 2018: Consumer Technology Association Expects Major Growth for AI in 2018

CES 2018: AI Touted Heavily by LG, Samsung, Byton on Eve of CES

It seems like every day there is news of yet another application of Machine Learning (AI) in the Media and Entertainment space: in production – where it is helping decide what goes into production, as well as helping during production – through to helping people find more appropriate content.

As part of the regular year end activities The Digital Production BuZZ invited me, and a bunch of other people, to look forward to 2018 and predict what the major themes will be.

Here is a link to the full show –  
http://www.digitalproductionbuzz.com/2018/01/digital-production-buzz-january-4-2018/
Here is a link to the Transcript-  
http://www.digitalproductionbuzz.com/2018/01/transcript-digital-production-buzz-january-4-2018/
And here are the links (including the MP3 version) to my individual interview – 
http://www.digitalproductionbuzz.com/interview/hodgetts-smart-assistants-hdr-and-vr/
MP3: http://www.digitalproductionbuzz.com/BuZZ_Audio/Buzz_180104_Hodgetts.mp3


Believable Fake Humans

The uncanny valley has been a limiting factor on creating realistic humans, but an Nvidia research team in Finland has created some amazingly realistic-looking humans (and mostly quite attractive ones) using Artificial Intelligence, in an approach that goes beyond the limits of its training data (to some degree).

To recap, Machine Learning (what we’re really talking about here) has relied on massive amounts of marked-up training data to derive its internal “algorithm,” which it then applies to match the marked-up results on new material. Machines can also be challenged, like Google’s “walk upright” experiment: the machine was challenged to generate actions that “moved forward” and “stayed upright,” and it took many iterations to get the results in the video.
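To make the “marked-up training data” idea concrete, here’s a deliberately toy sketch of my own (not any production system): label some example clips by hand, let the machine derive a simple internal rule from those labels, then apply that rule to new, unlabeled material. The features and class names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Marked-up" training data: two invented features per clip, with human labels
# 0 = "interview" clips, 1 = "b-roll" clips.
interviews = rng.normal([0.2, 0.3], 0.1, size=(50, 2))
broll = rng.normal([0.8, 0.7], 0.1, size=(50, 2))
X = np.vstack([interviews, broll])
y = np.array([0] * 50 + [1] * 50)

# "Derive its internal algorithm": here, simply the mean (centroid) of each class.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(features):
    """Label new, unmarked material by its nearest learned centroid."""
    d = np.linalg.norm(centroids - np.asarray(features), axis=1)
    return int(np.argmin(d))

print(classify([0.75, 0.65]))  # near the b-roll centroid -> 1
```

Real systems use deep networks and millions of examples, but the shape is the same: human mark-up in, learned rule out, then the rule is run against fresh material.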

This Nvidia team takes a double “AI” approach. One machine is challenged to create new human faces “like but different from” its training set, and it is matched against another machine that has been trained to recognize human faces. If machine one can fool machine two into believing it has created a real face, that’s success.
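This generator-versus-discriminator setup is known as a generative adversarial network (GAN). Here’s a minimal one-dimensional sketch of the idea in NumPy – my own toy illustration, nothing like Nvidia’s actual model: the “real data” is just numbers near 4, the generator learns to produce similar numbers, and the discriminator learns to tell real from fake.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # Toy "real" data the generator must learn to mimic: samples from N(4, 1)
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_g, b_g = 1.0, 0.0   # generator: fake = w_g * noise + b_g
w_d, b_d = 0.1, 0.0   # discriminator: D(x) = sigmoid(w_d * x + b_d), P(x is real)

lr, batch = 0.02, 64
for step in range(3000):
    # Train discriminator: push D(real) toward 1 and D(fake) toward 0
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = w_g * z + b_g
    dr, df = sigmoid(w_d * xr + b_d), sigmoid(w_d * xf + b_d)
    w_d += lr * (np.mean((1 - dr) * xr) - np.mean(df * xf))
    b_d += lr * (np.mean(1 - dr) - np.mean(df))

    # Train generator: try to fool the discriminator, push D(fake) toward 1
    z = rng.normal(0.0, 1.0, batch)
    xf = w_g * z + b_g
    df = sigmoid(w_d * xf + b_d)
    dx = (1 - df) * w_d          # gradient of log D(fake) w.r.t. the fake sample
    w_g += lr * np.mean(dx * z)
    b_g += lr * np.mean(dx)

fake_mean = np.mean(w_g * rng.normal(0.0, 1.0, 1000) + b_g)
print(fake_mean)  # drifts toward the real mean of 4
```

Swap the numbers for images and the linear models for deep convolutional networks, and you have the essence of the Nvidia result: success is when the recognizer can no longer tell the generated faces from the real ones.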

The images are pretty incredible. Even the eyes are not much more lifeless than those of the average human model!

For now they’re relatively low resolution still images, but we all know that still images will lead to motion images, which will lead – ultimately – to synthetic actors that are “like” your current favorite.

Thereby killing all future movie and TV acting jobs.

If I were to summarize 2017 it would be: AI, HDR, VR, AR and Resolve. Spelled out, those trends are Artificial Intelligence (really Machine Learning); High Dynamic Range; Virtual Reality (i.e. 360 or 180 degree video); Augmented Reality; and Blackmagic Design’s Resolve 14.

As Augmented Reality is composited in at the viewer’s device, I doubt there will be any direct effect on production and post production.

Virtual Reality has had a good year, with direct support appearing in Premiere Pro CC and Final Cut Pro X. In both cases the NLE’s parent company purchased third party technology and integrated it. Combined with the ready availability of 360 cameras, there’s no barrier to VR production.

Except the lack of demand. I expect VR will become a valuable tool for a range of projects like installations, telepresence and travel, and particularly in gaming, although that’s outside my purview.

What I don’t expect is a large scale uptake for narrative or general entertainment functions. Nor in most of the vast range of video production. It’s not a fad, like 3D, but will likely remain a niche in the production world. I should point out it’s very possible to make good money in niches!

Conversely I would not buy a new screen without it being HDR compatible – at least with one or two of the major HDR formats. High Dynamic Range video is as big a step forward as color. I believe it provides a fundamentally better viewing experience than simply upping the pixel count.

High Dynamic Range is supported across the most important editing software but suffers from two challenges: the proliferation of competing standards and studio monitoring.

The industry needs to consolidate to one standard, or sets will have to be programmed for all standards. None currently are. Ultimately this will change because it has to, but some earlier set purchasers will probably be screwed over!

HDR studio monitors remain extremely expensive, and hard to find. There’s also the problem of grading for both regular and high dynamic range screens.

I have no doubt that HDR is fundamental to the future of the “television” screen. It will further erode attendance in movie theaters, as the home experience offers a better image than the theater – and you get to control who arrives in your media room!

In 2017 Resolve fulfilled its long-growing promise by integrating a fully featured NLE into an excellent grading and DIT tool – one with a decent Digital Audio Workstation also integrated. Blackmagic Design are definitely fulfilling their vision of providing professional tools for lens-to-viewer workflows, while continuing to reduce the cost of entry.

When you hear that editors in major reality TV production companies don’t balk at Resolve, despite being Media Composer traditionalists, I do worry that Avid may be challenged in its core market. Not that any big ProdCo has switched yet, but I wouldn’t be surprised to see significant uptake of Resolve as an editing tool in 2018.

My only disappointment with Resolve is that, as of 14.1, there is no way to bring timed metadata into Resolve. Not only does that mean we cannot provide Lumberjack support, but there is no transcript (or AI-derived metadata) import either. It’s frustrating because version 14 included Smart Collections that could function like Keyword Collections.

In another direct attack on Avid’s core markets, both Resolve and Premiere Pro CC added support for bin locking and shared projects. Implemented slightly differently by each app, they both mimic the way Media Composer collaborates. Resolve adds a nice refinement: in-app team messaging.

The technology that will have the greatest effect on the future of production has only just begun to appear. While generally referred to as Artificial Intelligence, what most people mean, and experience, is some variation on Machine Learning. These types of systems can learn (by example or challenge) to expertly do one or two tasks. They have been applied to a wide range of tasks, as I’ve written about previously.

The “low hanging fruit” for AI integration into production apps are Cognitive Services: programming interfaces that help interpret the world. Speech-to-text, facial recognition, image content recognition, emotion detection, et al. are going to appear in more and more software.

In 2017 we saw several apps use these speech-to-text technologies to get transcripts into Premiere Pro CC, Media Composer and Final Cut Pro X. Naturally that’s an area that Greg and I are very interested in: after all, we were first to bring transcripts into FCP X (via Lumberjack Lumberyard). What our experience taught us is that getting transcripts into an NLE that doesn’t have Script Sync isn’t a great experience. Useful, but not great.

Which is why we spent the year creating a better solution: Lumberjack Builder. Builder is still a work in progress, but it’s a new NLE. An NLE that edits video by editing text. While Builder is definitely an improvement on purely transcription apps, it won’t be the only application of Cognitive Services.

I expect we at Lumberjack System will have more to show later in the year, once Builder is complete. I also expect this is the year we’ll see visual search integrated into Premiere Pro CC. Imagine being able to search b-roll by having computer vision recognize the content. No keywording or indexing.

Beyond Cognitive Services we will see Machine Learning driving marketing – and even production – decisions. In 2018, the terms Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks will start appearing in the most unexpected places. (While they describe slightly different things, all those terms fall under the Artificial Intelligence umbrella.)

I’m excited about 2018, particularly as we do more with our new intelligent assistants.

If you’re not going to be at IBC then move on, but if you’re going you’ll probably want to be at the FCP X World Event, particularly on Saturday at 12:15 and Sunday at 2:15, when Lumberjack System will be previewing the latest addition to the Lumberjack family.

By AI I mean Machine Learning! Some of the discussion around Larry’s post and my response has been about data sets. Norm Hollyn noted in the comments that there were non-training options “under NDA”.  Here’s a good discussion on the types of training data, or lack of need, from TechCrunch.

Larry Jordan got on his (self-described) soap box this morning with a thoughtful post about the future of editing in an AI-infested world. I think we should all be aware of what’s happening, and I’ve certainly been trying to do my part there, as recent posts attest, but I’m not sure I’m quite as concerned about editing jobs as Larry is. Assistants’ jobs, perhaps.

Larry shared his post with me, asking for feedback, and having written a fairly comprehensive response, I decided to share it here as well. While I mostly address the areas where AI/Machine Learning might be used, and why pervasive automated editing is probably way further in the future than Larry’s concern would indicate, none of that negates Larry’s excellent advice on managing your career.


We were watching the WWDC keynote address last night, and the term “Machine Learning” came up so often that if you were taking a shot each time, it would have been very detrimental to your health. There were at least 12-15 references during the 2.25 hour keynote.

Apple have seriously embraced Machine Learning/Deep Learning across many apps, and have introduced a Machine Learning framework for running developer-designed learning algorithms, even providing conversion tools for migrating from other AI platforms.

Of course, this comes as no surprise, as I wrote about the many ways Apple were integrating machine learning back in October 2016.

UPDATE: Wired also noticed just how key Machine Learning has become to Apple.
