The present and future of post production business and technology | Philip Hodgetts


Thanks to an introduction by a mutual friend, I had the opportunity to chat with Alex LoVerde of SyncOnSet, and it struck me that the best technology is driven by a direct, and often personal, need. It also struck me how different two ostensibly similar “metadata companies” can be.


Web APIs (Application Programming Interfaces) allow us to send data to a remote service and get a result back. Machine learning tools and Cognitive Services like speech-to-text and image recognition are mostly online APIs. Trained models can be integrated into apps, but in general these services operate through an API.

The big advantage is that they keep getting better, without the local developer getting involved.
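The pattern is simple: upload media plus a little metadata, and parse the JSON the service sends back. As a minimal sketch, here it is in Python; the endpoint URL, field names and response shape are all hypothetical, since every real service (Google, Microsoft, etc.) defines its own.

```python
# Sketch of the web-API pattern: send audio to a remote speech-to-text
# service, then parse the JSON result. Endpoint and field names are
# invented for illustration only.
import json

API_URL = "https://api.example.com/v1/transcribe"  # hypothetical endpoint

def build_request(audio_path: str, language: str = "en-US") -> dict:
    """Assemble the metadata that would accompany the audio upload."""
    return {
        "url": API_URL,
        "fields": {"language": language, "punctuate": True},
        "file": audio_path,
    }

def parse_response(body: str) -> list:
    """Turn the service's JSON reply into (start-time, text) pairs."""
    data = json.loads(body)
    return [(seg["start"], seg["text"]) for seg in data["segments"]]

# A fabricated reply illustrating the shape such a service might return.
reply = '{"segments": [{"start": 0.0, "text": "Hello"}, {"start": 1.4, "text": "world"}]}'
print(parse_response(reply))  # → [(0.0, 'Hello'), (1.4, 'world')]
```

The app-side code stays this thin precisely because all the heavy lifting happens on the service side.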

Nearly two years ago I wrote of my experience with SpeedScriber*, which was the first of the machine learning based transcription apps on the market. At the time I was impressed that I could get the results of a 16 minute interview back in less than 16 minutes, including prep and upload time. Usually the overall time was around the run time of the file.

Upload time is the downside of web-based APIs, and it is significantly holding back image recognition on video. That is why, for audio to be transcribed, high quality proxy files are created to reduce upload time.

My most recent example, sourced from a 36 minute WAV, took around one minute to convert to an archival quality m4a, reducing the file size from 419 MB to 71 MB. The roughly five times faster upload – now 2’15” compared with more than 12 minutes for the original – more than compensates for the small m4a prep time.

The result was emailed back to me in 2’30”. That’s 36 minutes of speech transcribed with about 98% accuracy, in 2.5 minutes – more than 14x real time. The entire time from instigating the upload to finished transcript was 5’45” for 36 minutes of interview.

These APIs keep getting faster and can run on much “heavier iron” than my local iMac which is no doubt part of the reason they are so fast, but that’s just another reason they’re good for developers. Plus, every time the speech-to-text algorithm gets improved, every app that calls on the API gets the improvement for free.

*I haven’t used SpeedScriber recently, but I would expect that it has similarly benefited from improvements on the service side of the API it works with.

On the night of the Supermeet 2011 Final Cut Pro X preview I was told that this was the “foundation for the next 10 years.” Well, as of last week, seven of the ten have elapsed. I do not, for one minute, think that Apple intended to convey a ten year limit to Final Cut Pro X’s ongoing development, but maybe it’s smart to plan obsolescence: to limit the time an app continues to be developed before its suitability for the task is re-evaluated.


I was honored to be invited – as one of many – to provide my thoughts on 2017: what technologies were important, what major changes happened.

Here is a link to the full show –  
http://www.digitalproductionbuzz.com/2017/12/digital-production-buzz-december-28-2017/
Here is a link to the Transcript –  
http://www.digitalproductionbuzz.com/2017/12/transcript-digital-production-buzz-december-28-2017/
Or if you want to go direct to my segment:
http://www.digitalproductionbuzz.com/interview/philip-hodgetts-2017-in-review/
MP3: 
http://www.digitalproductionbuzz.com/BuZZ_Audio/Buzz_171228_Hodgetts.mp3

If I were to summarize 2017 it would be: AI, HDR, VR, AR and Resolve. If you missed any of those trends, they are Artificial Intelligence (really Machine Learning); High Dynamic Range; Virtual Reality (i.e. 360 or 180 degree video); Augmented Reality; and Blackmagic Design’s Resolve 14.

As Augmented Reality is composited in at the viewer’s device, I doubt there will be any direct effect on production and post production.

Virtual Reality has had a good year with direct support appearing in Premiere Pro CC and Final Cut Pro X. In both cases the NLE’s parent purchased third party technology and integrated it. Combined with the ready availability of 360 cameras, there’s no barrier to VR production.

Except the lack of demand. I expect VR will become a valuable tool for a range of projects like installations, telepresence and travel, and particularly in gaming, although that’s outside my purview.

What I don’t expect is a large scale uptake for narrative or general entertainment functions. Nor in most of the vast range of video production. It’s not a fad, like 3D, but will likely remain a niche in the production world. I should point out it’s very possible to make good money in niches!

Conversely I would not buy a new screen without it being HDR compatible – at least with one or two of the major HDR formats. High Dynamic Range video is as big a step forward as color. I believe it provides a fundamentally better viewing experience than simply upping the pixel count.

High Dynamic Range is supported across the most important editing software but suffers from two challenges: the proliferation of competing standards and studio monitoring.

The industry needs to consolidate to one standard, or sets will have to be programmed for all standards. None currently are. Ultimately this will change because it has to, but some earlier set purchasers will probably be screwed over!

HDR studio monitors remain extremely expensive, and hard to find. There’s also the problem of grading for both regular and high dynamic range screens.

I have no doubt that HDR is fundamental to the future of the “television” screen. It will further erode attendance in movie theaters, as the home experience offers a better image than the theater, and you get to control who arrives in your media room!

In 2017 Resolve fulfilled its long-standing promise of integrating a fully featured NLE into an excellent grading and DIT tool, one with a decent Digital Audio Workstation also integrated. Blackmagic Design are definitely fulfilling their vision of providing professional tools for lens-to-viewer workflows, while continuing to reduce the cost of entry.

When you hear that editors in major reality TV production companies don’t balk at Resolve, despite being Media Composer traditionalists, I do worry that Avid may be challenged in its core market. No big ProdCo has switched yet, but I wouldn’t be surprised to see significant uptake of Resolve as an editing tool in 2018.

My only disappointment with Resolve is that, as of 14.1, there is no way to bring timed metadata into Resolve. Not only does that mean we cannot provide Lumberjack support, but there is no transcript (or AI-derived metadata) import either. It’s frustrating because version 14 included Smart Collections that could function like Keyword Collections.

In another direct attack on Avid’s core markets, both Resolve and Premiere Pro CC added support for bin locking and shared projects. Implemented slightly differently by each app, they both mimic the way Media Composer collaborates. Resolve adds a nice refinement: in-app team messaging.

The technology that will have the greatest effect on the future of production has only just begun to appear. While generally referred to as Artificial Intelligence, what most people mean, and experience, is some variation on Machine Learning. These systems can learn (by example or challenge) to do one or two tasks expertly. They have been applied to a wide range of tasks, as I’ve written about previously.

The “low hanging fruit” for AI integration into production apps are Cognitive Services, which are programming interfaces that help interpret the world. Speech-to-text, facial recognition, image content recognition, emotion detection, et al. are going to appear in more and more software.

In 2017 we saw several apps that use these speech-to-text technologies to get transcripts into Premiere Pro CC, Media Composer and Final Cut Pro X. Naturally that’s an area Greg and I are very interested in: after all, we were first to bring transcripts into FCP X (via Lumberjack Lumberyard). What that experience taught us is that getting transcripts into an NLE that doesn’t have Script Sync wasn’t a great experience. Useful, but not great.

Which is why we spent the year creating a better solution: Lumberjack Builder. Builder is still a work in progress, but it’s a new NLE. An NLE that edits video by editing text. While Builder is definitely an improvement on purely transcription apps, it won’t be the only application of Cognitive Services.

I expect we at Lumberjack System will have more to show later in the year, once Builder is complete. I also expect this is the year we’ll see visual search integrated into Premiere Pro CC. Imagine being able to search b-roll by having computer vision recognize the content. No keywording or indexing.

Beyond Cognitive Services we will see Machine Learning driving marketing – and even production – decisions. In 2018, the terms Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks will start appearing in the most unexpected places. (While they describe slightly different things, all those terms fall under the Artificial Intelligence umbrella.)

I’m excited about 2018, particularly as we do more with our new intelligent assistants.

If you’re not going to be at IBC then move on, but if you’re going you’ll probably want to be at the FCP X World Event, particularly on Saturday at 12:15 and Sunday at 2:15, when Lumberjack System will be previewing the latest addition to the Lumberjack family.

May 9, 2017

NAB 1998 in Retrospect

While researching my journey through my earlier writings on metadata and interactive storytelling, I came across my ‘review’ of NAB 1998, thanks to the Wayback Machine. This was the year everyone was coming to terms with ATSC – digital broadcast – and how it was to be implemented. From my review it seems my attention was on interactivity and QuickTime 3, neither of which is surprising.


A few years ago, we considered supporting transcripts in Lumberjack System. At the time our goal was to quickly prepare for an edit, and transcriptions took days and cost serious money.

Two years ago we supported the alignment of time-stamped transcripts to Final Cut Pro X clips, and a year ago introduced “magic” keywords, derived by a cognitive service. Since Lumberjack doesn’t (yet, I might emphasize) support a speech-to-text service internally, what are the options, and what do they tell us about the state of play for transcription in April 2017?


February 22, 2017

In Just 10 Years

While projecting the changes that Artificial Intelligence (AI) and Machine Learning (ML) might bring about in the future, it was interesting to look back and see just what didn’t exist 10 years ago. Keep in mind that the Internet itself is only just over 30 years old.


One of the powerful ways Artificial Intelligence ‘learns’ is by using neural networks. Neural networks are trained with a large number of examples where the result is known. The neural network adjusts until it gives the same result as the human ‘teacher’.

However, there’s a trap. If that source material contains biases – such as modeling Police ‘stop and frisk’ – then whatever biases are in the learning material will be carried into the subsequent AI model. This is the subject of an article in Nature, “There is a blind spot in AI research,” and also the premise of Cathy O’Neil’s book Weapons of Math Destruction, which not only brings up that issue, but the problem of “proxies”.

Proxies, in this context, are data sources used in AI programs that are not the actual data, but something that approximates it: like using zip code as a proxy for income or ethnicity.
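The mechanism is easy to demonstrate. In this toy sketch – with entirely fabricated data and a deliberately trivial “model” – ethnicity is never a feature, yet because zip code correlates with group membership in the training data, a model keyed only on zip code reproduces the historical bias anyway.

```python
# Toy illustration of the "proxy" problem: the protected attribute is
# never an input, but zip code stands in for it, so the model inherits
# the historical bias. All data here is fabricated.
from collections import Counter, defaultdict

# Fabricated records: (zip_code, group, historically_denied)
history = [
    ("90001", "A", True), ("90001", "A", True), ("90001", "B", True),
    ("90210", "B", False), ("90210", "B", False), ("90210", "A", False),
]

# "Train" a trivial model: predict the majority historical outcome per zip.
by_zip = defaultdict(list)
for zip_code, _group, denied in history:
    by_zip[zip_code].append(denied)
model = {z: Counter(v).most_common(1)[0][0] for z, v in by_zip.items()}

# Score each group under the model; the disparity mirrors the old bias,
# even though group membership was never a model input.
denial_rate = {}
for group in ("A", "B"):
    rows = [r for r in history if r[1] == group]
    denial_rate[group] = sum(model[r[0]] for r in rows) / len(rows)
print(denial_rate)  # group A denied twice as often as group B
```

Swap the majority-vote lookup for a regression or a deep network and the effect is the same: the proxy smuggles the bias in.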

Based on O’Neil’s book, I’d say the authors of the Nature article are too late. There are already institutionalized biases in very commonly used algorithms in finance, housing, policing and criminal policy.

