The present and future of post production business and technology | Philip Hodgetts


With the announcement of Resolve 14 today at NAB 2017, it seems that Blackmagic Design has its sights set clearly on Avid's Media Composer: intentionally or not.

I’ve long wondered what apps would be most threatened by Blackmagic’s rapid development of Resolve.

Adobe's suite of tools and Dynamic Link makes a powerful argument for that platform. Although Resolve has improved integration with Fusion, it's not yet at the level of Dynamic Link. Not that Dynamic Link is the most robust of Adobe's technologies. Despite Resolve being free, it's hard to see it directly threatening Premiere Pro, After Effects et al.

Apple's Final Cut Pro X/Motion combination offers a fresh take on the editing interface – probably the reason it's the most used professional NLE now – and those who use it love the Magnetic Timeline. The most common response to today's Resolve announcements was "but it's track based." Once you're a fan of the Magnetic Timeline it's hard to go back.

There are other players like Media 100, Edius and Vegas that will definitely be threatened by Resolve Free or the full version for just $299, but the one company that – mid-term – is most threatened is Avid.

Resolve has already replaced Avid's excellent (but left to die) Symphony for grading, and the major audio improvements – integrating their Fairlight purchase – and shared project upgrades directly threaten core strengths of Media Composer and Pro Tools.

Fortunately or not, while these are key parts of Avid's current software lineup, they are a small percentage of Avid's overall business.

It will be very interesting to see how the new features and pricing affect adoption, and who will be most threatened. If you're looking for a modern, track-based NLE with good audio, great color grading, excellent DIT tools and maturing collaborative workflows, Resolve deserves a look at version 14.

Alex Gollner (aka Alex4D) has seen the same issues: Blackmagic Design has sights set on Avid with DaVinci Resolve 14

18 Aug 2016

What we take for granted

I cut a short 30-minute competition entry for a friend today: a relatively simple single-take green screen over a Pond5 background she purchased.

Except we used a bunch of technologies that were all non-existent just a few years ago.

Starting with some Blackmagic Design ProRes files, we:

  • Sped up the talent about 20% with no visible or audible artifacting
  • Keyed out the green background by using the built-in keyer in FCP X at the default settings
  • Repositioned the talent to better fit the background shot
  • Slowed down the background to 66% with no visible artifacting
  • Applied a real-time blended mask and Gaussian blur on the background (over a duplicate, un-blurred copy) to simulate depth of field
  • Used the Color Board to reduce the exposure on her face, while using a mask so her eyes continued to sparkle

all in real time on a 2015 Retina 5K iMac and Final Cut Pro X.
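None of this is how FCP X's built-in keyer actually works, but the core idea of a chroma key can be sketched in a few lines of NumPy. The "greenness" measure and the threshold below are hypothetical stand-ins:

```python
import numpy as np

def green_key_composite(fg, bg, strength=1.0):
    """Naive chroma key: alpha goes to zero where green dominates red/blue.

    fg, bg: float arrays of shape (H, W, 3), values in [0, 1].
    strength: hypothetical knob controlling how aggressively green is keyed.
    """
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # "Greenness": how far the green channel exceeds the larger of red/blue.
    greenness = g - np.maximum(r, b)
    # Map greenness to an alpha matte; 0.3 is an arbitrary falloff width.
    alpha = np.clip(1.0 - strength * greenness / 0.3, 0.0, 1.0)
    alpha = alpha[..., None]  # broadcast over the RGB channels
    return alpha * fg + (1.0 - alpha) * bg
```

A pure green pixel keys fully to the background, a red (skin-tone-ish) pixel stays fully foreground; everything in between blends, which is what gives a keyer its soft edges.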

It wasn't that long ago that applying a soft edge to a mask, any Gaussian blur, or any chroma key meant a render before playback.

As in the machine learning/AI field, video technologies keep getting better all the time.

While Augmented Reality and Virtual Reality often get conflated, they are very different beasts.


Terence Curren and I recorded our thoughts on NAB 2016.  Topics covered include general impressions of NAB 2016, and why Terry did not attend this year; Blackmagic Design Resolve; Avid’s business; market fragmentation; HDR and expanded color gamut; Studio Daily’s Top 50 influencers (including Philip); Zcam; Lytro cam; VR; innovation; Apple watch and NDA’d Final Cut Pro X preview.

Episode 71: NAB 2016

5 Jan 2016

A new 4K TV for Christmas? Sorry, already obsolete!

CES finally brings High Dynamic Range TV to the consumer. Brighter (really brighter) white levels, cleaner blacks and a wider color gamut are more obvious to most people than a high pixel count. 10-bit sampling will allow for smoother gradients and contribute to the wider color gamut.
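The gradient point is simple arithmetic: each extra bit doubles the number of code values per channel, so 10-bit gives four times as many levels as 8-bit, and correspondingly finer steps in a gradient:

```python
def levels(bits: int) -> int:
    """Number of distinct code values per channel at a given bit depth."""
    return 2 ** bits

print(levels(8))   # 256 levels per channel in 8-bit video
print(levels(10))  # 1024 levels in 10-bit: 4x finer gradient steps
```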

Fortunately, the competing technology companies came to an agreement with UltraHD Premium.

Already at CES, TCL and LG have announced new models incorporating Dolby Vision, probably the most widely adopted of the HDR standards. But it doesn't really seem to matter, as UltraHD Premium is about standards met rather than how to meet the standard. This is a good approach, as it allows the technology to evolve as long as the same basic standards are met.


I'd started writing about the inevitability of vertical video, and how we should adapt to it, when what should turn up on the Frame.io blog but Say yes to vertical video.

I had come to the realization that fighting against vertical video is not a winnable battle, simply because most people really don't care. They shoot on a mobile device, and that's where they view it. Most mobile phones and tablets default to vertical video. Every non-industry person I interact with shoots vertical video: from my singing teacher to my niece!

UPDATE: On Twitter, Kenneth X (@Knesaren) pointed me to an article on How Norwegian Broadcasting made the first vertical video documentary. As always, start with a good story!

UPDATE 2: Clark Dunbar of Mammoth HD tells me that they’ve had large format (HD to 6K) vertical footage for well over a decade for signage, POS and museum installations! Their vertical stock footage gallery is at http://www.mammothhd.com/MHD_QG_VertPort.html.

UPDATE 3: Carl [email protected] on Twitter, had some thoughts on vertical video today:

Vertical video haters keep this in mind: for centuries artists have used the vertical format to represent human presence intimately.

Perhaps that explains why many people (not cinematographers) are naturally drawn to the vertical. It’s not laziness as some scoff.

Rather, think about a mother who films her child. Subconsciously she goes for the vertical to intimately capture her child filling the frame.

To the mother, that’s the most natural thing in the world. Try to overcome your prejudices as a creative and see things as others do.

UPDATE 4: There’s a vertical film festival in Katoomba, Australia. Makes sense, that’s a very mountainous region!

In the latest Terence and Philip Show, Terence and Philip talk about Lunch with Philip and Greg: what it is, and the 4K, small-production-kit approach that allows the show to be produced over lunch in regular restaurants. The discussion moves to other production topics, and why we got into the business in the first place, before discussing the future of motion graphics in the era of templatorization (MotionVFX, Stupid Raisins, Fiverr).

Terence and Philip answer some listener questions, including "Where do we compromise, and where can we not compromise?" and "When is too much media enough?"

Peter Wiggins is a freelance editor who has been using Final Cut Pro for broadcast since 2003. He runs the successful FCP plugin website iDustrial Revolution and he is the force behind FCP.co.

Peter joined us for lunch in San Jose during the recent FCP X Creative Summit.

18 Apr 2015

My Take on NAB 2015

A very subjective take on NAB 2015 because I spent very little time looking at tech! Instead my focus was on the FCPWORKS demo room and particularly my Lumberjack System presentation on Wednesday. But, of course, NAB is also about the socializing.


14 Nov 2014

How Useful is Automated Multicam Editing?

RedShark News reports on a Disney Research project:

Researchers working for the Mouse have developed a groundbreaking program that delivers automated edits from multi-camera footage based on cinematic criteria.

When you read how they’ve achieved it, I think it’s impressive, and very, very clever.

The system works by approximating the 3D space of the cameras in relation to each other. The algorithm determines the “3D joint attention,” or the likely center of activity, through an on-the-fly analysis of the multiple camera views. Based on this information, the algorithm additionally takes into account a set of cinematic preferences, such as adherence to the 180 degree rule, avoidance of jump cuts, varying shot size and zoom, maintaining minimum and maximum shot lengths, and cutting on action. The result is a very passable, almost human edit.
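As a toy illustration only – not Disney's actual algorithm – the rule set described (cut when attention shifts, respect minimum and maximum shot lengths) can be sketched as a frame-by-frame switcher. The per-angle "interest scores" here are a hypothetical stand-in for the paper's 3D joint attention estimate:

```python
def auto_switch(scores, min_shot=24, max_shot=120):
    """Pick one camera angle per frame from per-angle interest scores.

    scores: list of per-frame lists; scores[t][a] is the interest of
            angle a at frame t (stand-in for "3D joint attention").
    min_shot, max_shot: shot-length bounds in frames.
    Returns a list of chosen angle indices, one per frame.
    """
    cuts = []
    current = max(range(len(scores[0])), key=lambda a: scores[0][a])
    shot_len = 0
    for frame in scores:
        best = max(range(len(frame)), key=lambda a: frame[a])
        too_long = shot_len >= max_shot
        can_cut = shot_len >= min_shot
        if best != current and (too_long or
                                (can_cut and frame[best] > frame[current])):
            # Attention has shifted and the shot is long enough to cut.
            current, shot_len = best, 0
        elif too_long:
            # Forced cut: the max shot length rule overrides everything,
            # so switch to the best angle that isn't the current one.
            current = max((a for a in range(len(frame)) if a != current),
                          key=lambda a: frame[a])
            shot_len = 0
        cuts.append(current)
        shot_len += 1
    return cuts
```

Even this crude version shows why the minimum-shot-length rule matters: without it, rapidly alternating scores would produce a flurry of jump-cut-like switches instead of a watchable edit.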

Perhaps it’s the very nature of research, but I’m not sure of the practical application. Maybe that’s the point of pure research.

Assuming the technology delivers, it's rare that we want to take a multicam shoot and do a single, switched playback version. "Live switching" after the fact, if you will. At least in my experience, the edit not only needs to switch multicam angles, but also to remove dross, tighten the presentation, add in additional b-roll, and so on.

More often than not, my angle cuts are directed more by the edit I want than by a desire to just pick the best shot at the time.

That said, this type of research is indicative of what can be done (and therefore almost certainly will be done): combine a good multicam edit with content metadata and perhaps you'd have a decent first pass that could be built on, finished and polished by a skilled editor. The point, as Larry Jordan puts it, is:

How do you save time at every step of the production process, so that you've got the time that you need to make your films to your satisfaction?

Ultimately the commercial versions of these types of technologies should be seen as tools editors can use to make more time for their real job: finessing, polishing and finishing the project; bringing it the heart that makes the human connection in storytelling.

