While I’m yet to see a demo, there are a few announced features that I think are definitely headed in the right direction, particularly those driven by Resolve’s “Neural Engine.” Like Adobe Sensei and Apple’s CoreML, it appears to be an inference engine that puts Machine Learning models to work in practical tools.
Improved retiming, facial recognition, color matching, color balancing and upscaling are the first features to ship. These are in line with what I’ve been expecting from ML: smart features that make the process easier for editors. With the exception of facial recognition, all of Resolve’s current Neural Engine-driven features are simply faster ways to do things we’ve been doing for years.
Adobe have already implemented ML-driven features in their apps and marketing tools. I’d hope that Avid have taken likely future uses of ML into account in their NAB-announced revamp of Media Composer, although I doubt we’ll see any ML-driven features there for many years to come, for two reasons: the Media Composer market is largely not ready for it, and Avid will have enough on their plate bringing a newly rewritten version to maturity. I expect we’ll see ML in Media Central before any desktop app.
Apple have made very good use of ML across all their products. It’s why Apple Mail predicts mailboxes for you, among dozens of little features across their devices. They have an excellent inference engine in CoreML, and macOS and iOS developers have access to some of IBM Watson’s models. The Pro Apps team were also advertising a position for a Machine Learning specialist in late 2017.
Thanks to Content Auto Analysis (which no one uses), they even have a pipeline for bringing ML-derived keywords into the app.
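To make that pipeline concrete: FCPXML already carries keywords as simple elements on clips, so ML output slots in naturally. Here’s a rough sketch of what an ML keywording pass might write out — the clip name, ref id and labels are all invented for illustration, and a real FCPXML file needs the full document structure around this fragment:

```python
# Hypothetical sketch: labels an ML classifier might emit for a clip,
# expressed as FCPXML-style <keyword> elements on an <asset-clip>.
import xml.etree.ElementTree as ET

# (label, start_seconds, duration_seconds) — stand-ins for real ML output
detected = [("beach", 0, 12), ("crowd", 12, 8), ("sunset", 20, 15)]

clip = ET.Element("asset-clip", name="B-roll 01", ref="r2")
for value, start, duration in detected:
    # Each detected range becomes a keyword spanning that part of the clip
    ET.SubElement(clip, "keyword",
                  start=f"{start}s", duration=f"{duration}s", value=value)

print(ET.tostring(clip, encoding="unicode"))
```

The point isn’t the XML itself — it’s that the app already knows how to ingest keywords like these, so any future ML features have somewhere obvious to land.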
I’m sure we’ll see ML-driven tools in FCP X at some point; it largely depends on priorities within the Pro Apps team. I’d love to see a big focus on ML in a future FCP X, but there are those who would rather see collaboration, dupe detection or scrolling timelines.