The present and future of post production business and technology | Philip Hodgetts


I was privileged to be invited to 2018 HPA TR-X: Everything You Thought You Knew About Artificial Intelligence in M&E and What You Didn’t Know You Didn’t, on a panel titled AI Vision Panel: What Will Happen and When?

It was an interesting topic, although our panel got quite pessimistic about the future of society if we’re not very careful with how AI/Machine Learning comes into our lives, but that’s a blog post for another time.

What really set me thinking was a comment by John Motz, CTO of GrayMeta, that his 12-year-old daughter and her friends spend more time creating media for each other than simply consuming it.


If I were to summarize 2017 it would be: AI, HDR, VR, AR and Resolve. If you missed any of those trends, they were Artificial Intelligence (really Machine Learning); High Dynamic Range; Virtual Reality (i.e. 360 or 180 degree video); Augmented Reality; and Blackmagic Design’s Resolve 14.

As Augmented Reality is composited at the viewer’s device, I doubt there will be any direct effect on production and post production.

Virtual Reality has had a good year with direct support appearing in Premiere Pro CC and Final Cut Pro X. In both cases the NLE’s parent purchased third party technology and integrated it. Combined with the ready availability of 360 cameras, there’s no barrier to VR production.

Except the lack of demand. I expect VR will become a valuable tool for a range of projects like installations, telepresence and travel, and particularly in gaming, although that’s outside my purview.

What I don’t expect is a large scale uptake for narrative or general entertainment functions. Nor in most of the vast range of video production. It’s not a fad, like 3D, but will likely remain a niche in the production world. I should point out it’s very possible to make good money in niches!

Conversely, I would not buy a new screen unless it was HDR compatible – supporting at least one or two of the major HDR formats. High Dynamic Range video is as big a step forward as color. I believe it provides a fundamentally better viewing experience than simply upping the pixel count.

High Dynamic Range is supported across the most important editing software but suffers from two challenges: the proliferation of competing standards and studio monitoring.

The industry needs to consolidate to one standard, or sets will have to be programmed for all standards. None currently are. Ultimately this will change because it has to, but some earlier set purchasers will probably be screwed over!

HDR studio monitors remain extremely expensive, and hard to find. There’s also the problem of grading for both regular and high dynamic range screens.

I have no doubt that HDR is fundamental to the future of the “television” screen. It will further erode attendance in movie theaters, as the home experience offers a better image than the movie theater, and you get to control who arrives in your media room!

In 2017 Resolve fulfilled its long-growing promise of integrating a fully featured NLE into an excellent grading and DIT tool – one with a decent Digital Audio Workstation also integrated. Blackmagic Design are definitely fulfilling their vision of providing professional tools for lens-to-viewer workflows, while continuing to reduce the cost of entry.

When you hear that editors in major reality TV production companies don’t balk at Resolve, despite being Media Composer traditionalists, I do worry that Avid may be challenged in its core market. Not that any big ProdCo has switched yet, but I wouldn’t be surprised to see significant uptake of Resolve as an editing tool in 2018.

My only disappointment with Resolve is that, as of 14.1, there is no way to bring timed metadata into Resolve. Not only does that mean we cannot provide Lumberjack support, but there’s no transcript (or AI-derived metadata) import either. It’s frustrating because version 14 included Smart Collections that could function like Keyword Collections.

In another direct attack on Avid’s core markets, both Resolve and Premiere Pro CC added support for bin locking and shared projects. Implemented slightly differently by each app, they both mimic the way Media Composer collaborates. Resolve adds a nice refinement: in app team messaging.

The technology that will have the greatest effect on the future of production has only just begun to appear. While generally referred to as Artificial Intelligence, what most people mean, and experience, is some variation on Machine Learning. These types of systems can learn (by example or challenge) to expertly do one or two tasks. They have been applied to a wide range of tasks, as I’ve written about previously.

The “low hanging fruit” for AI integration into production apps are Cognitive Services, which are programming interfaces that help interpret the world. Speech-to-text, facial recognition, image content recognition, emotion detection and the like are going to appear in more and more software.

In 2017 we saw several apps that use these speech-to-text technologies to get transcripts into Premiere Pro CC, Media Composer and Final Cut Pro X. Naturally that’s an area Greg and I are very interested in: after all, we were first to bring transcripts into FCP X (via Lumberjack Lumberyard). What our experience with that taught us is that getting transcripts into an NLE that doesn’t have Script Sync wasn’t a great experience. Useful, but not great.
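To make the “timed metadata” idea concrete, here is a minimal sketch of the kind of bridging involved: taking speech-to-text output (words with start times) and grouping keyword hits into frame ranges an NLE could use as keyword ranges or markers. The field names and data shapes here are hypothetical illustrations, not any vendor’s actual schema.

```python
# Minimal sketch: turn timed speech-to-text output into keyword ranges.
# The input/output shapes are hypothetical, not any NLE's real format.

def words_to_ranges(words, keyword, fps=25):
    """Group consecutive transcript words containing `keyword`
    into (start_frame, end_frame) ranges."""
    ranges = []
    current = None
    for w in words:
        frame = int(w["start_sec"] * fps)
        if keyword.lower() in w["text"].lower():
            if current is None:
                current = [frame, frame]   # open a new range
            else:
                current[1] = frame         # extend the current range
        elif current is not None:
            ranges.append(tuple(current))  # keyword run ended
            current = None
    if current is not None:
        ranges.append(tuple(current))
    return ranges

transcript = [
    {"text": "the", "start_sec": 0.0},
    {"text": "lighthouse", "start_sec": 0.4},
    {"text": "stood", "start_sec": 1.0},
    {"text": "near", "start_sec": 1.5},
    {"text": "the", "start_sec": 2.0},
    {"text": "lighthouse", "start_sec": 2.4},
]
print(words_to_ranges(transcript, "lighthouse"))  # → [(10, 10), (60, 60)]
```

With this kind of structure in hand, the hard part is the last step: an NLE has to offer an import path for the ranges, which is exactly what Resolve 14.1 lacked.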

Which is why we spent the year creating a better solution: Lumberjack Builder. Builder is still a work in progress, but it’s a new NLE. An NLE that edits video by editing text. While Builder is definitely an improvement on purely transcription apps, it won’t be the only application of Cognitive Services.

I expect we at Lumberjack System will have more to show later in the year, once Builder is complete. I also expect this is the year we’ll see visual search integrated into Premiere Pro CC. Imagine being able to search b-roll by having computer vision recognize the content. No keywording or indexing.

Beyond Cognitive Services we will see Machine Learning driving marketing – and even production – decisions. In 2018, the terms Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks will start appearing in the most unexpected places. (While they describe slightly different things, all those terms fall under the Artificial Intelligence umbrella.)

I’m excited about 2018, particularly as we do more with our new intelligent assistants.

When Apple released FCP X six years ago, there was an incredible backlash, which Apple deserved for the sudden, harsh removal of FCP 7 from the market. But I believe there was another reason behind the anger beyond the sudden cut off: there was no more hope that Apple would supplant Avid in the editing world.


The Wall Street Journal (paywalled, but details available here) reports that Apple is planning to join its competitors in original programming.

The shows Apple is considering would likely be comparable to critically acclaimed programs like “Westworld” on Time Warner Inc.’s HBO or “Stranger Things” on Netflix.

I would say the move was inevitable – I predicted it seven years ago, in December 2009. More from the WSJ:

Nonetheless, the entry of the world’s most valuable company into original television and films could be a transformative moment for Hollywood and mark a significant turn in strategy for Apple as it starts to become more of a media company, rather than just a distributor of other companies’ media.

Maybe I’m pushing this subject a bit hard, but I really believe we are on the cusp of a wide range of human activities being taken over by smart algorithms, also known as Machine Learning. As well as the examples I’ve already mentioned, I found an article on how an “AI” saved a woman’s life, and how it’s being used to create legal documents for homeless (or about to be homeless) people in the UK.


Aug 2, 2016

The end of the Studio System?

Stephen Galloway writes at The Hollywood Reporter:

The majors are still the first port of call for any significant project; they still have an unparalleled ability to get that project developed, cast, shot, marketed and into theaters; and despite extraordinary technological and economic change, they haven’t allowed any upstarts to challenge their hegemony.

Until now.

He then goes on to document the changes that Amazon and Netflix have wrought on the “old bastions of Hollywood Power.”

All of which is good news: more outlets lead to more production.

Avid Media Composer has always been great at tracking metadata. Without accurate timecode, Media Composer would have never become established. Over the years Avid have continued to add metadata support to the app.

Reading an Avid blog on Spanish Broadcaster RTVE’s technical deployment in Rio, I was struck by this:

All of the cataloging and indexing process will be carried out by means of an autometadata software developed by the Corporación itself, which will enable the use of metadata provided by OBS and those selected by TVE’s documentary makers.

I’d love to know more about what the “Corporation” has done. Maybe we can set up a meeting for October when we’ll be back in Barcelona!

Spoiler alert: they use a lot of Avid technology!

Google today launched a new API to help parse natural language. An API is an Application Programming Interface that developers can use to send data to, and get a response back from. Natural Language Parsing is used to understand language that is available in computer-readable form (text). Google’s API joins an increasingly long list of very smart APIs that will understand language, recognize images and much more.
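As a rough illustration of the “send data, get a response back” pattern, here is a sketch of building a request for entity analysis. The endpoint and payload shape follow Google’s published v1 REST documentation for the Natural Language API, but treat the details as illustrative and check the current docs before relying on them; the actual network call is shown only as a comment.

```python
import json

# Sketch of the request a developer might send to a natural-language API.
# Payload shape mirrors Google's v1 REST docs; verify against current docs.

ANALYZE_URL = "https://language.googleapis.com/v1/documents:analyzeEntities"

def build_request(text):
    """Build the JSON body for an entity-analysis request."""
    return json.dumps({
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    })

body = build_request("Blackmagic Design released Resolve 14 in 2017.")
print(body)
# A real call would POST `body` to ANALYZE_URL with an API key, e.g.:
#   requests.post(f"{ANALYZE_URL}?key={API_KEY}", data=body)
# and the response would list entities such as "Blackmagic Design".
```

The value for production tools is that the hard part (the trained model) lives behind the URL; the app only formats text in and parses JSON out.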

A lot has changed since I last wrote about Advances in Content Recognition late last year.


Jul 15, 2016

Two companies: $8 billion of production

It’s hard to imagine, but between them Amazon and Netflix plan on spending $8 billion on original content in 2016.

Amazon is expected to spend $2 billion, subsidized by Prime subscribers.

Netflix’s subscribers will finance $6 billion in original programming.

Netflix’s $6 billion alone would fund 60 movies with a $100 million budget each. (Typically 400-500 movies are released by studios every year.)

Or 3,000 ‘hour’ television episodes, assuming a $2 million per-episode budget (some would be higher, some lower). At 23 episodes a season, that’s one season each for 130 different shows.

In this episode I get to spend a lot of time talking about the background of the Lumberjack System, in the context of the very unsexy topic of workflow, particularly automating the workflow. I share many of the background decisions related to Lumberjack System – our logging and pre-editing system for Final Cut Pro X – including why it’s limited to FCP X.

Other topics include automation; Digital Heaven’s announcement of SpeedScriber; how Lumberjack has developed based on user, and use, feedback; the post-NAB development of noteLogger; Prelude LiveLogger and the Premiere Pro ecosystem and NLE market shares; and how development resources are allocated.
