While researching the anecdotal history of some local property, I did what I’ve done previously: ask Siri. In this case, I was asking about actors’ dates of birth and death. In the past, these types of questions would have pulled up the relevant IMDb or Wikipedia page, with Siri saying “I’ve found some links for you on the web” or similar.
It took several rounds before I realized that, while the pages were still being pulled up as before, Siri was now parsing out the answer to the question I’d asked and giving it to me directly. I never had to glance down or open my phone.
Similarly, Mail now makes predictive suggestions (usually accurate) about which mailbox I might want to move the selected email into.
In Calendar, I find addresses being suggested for my events, based on places I’ve been, address book entries, or other information.
It’s clear to me that these are all improvements related directly to Apple’s increased use of Machine Learning across its software products.
As I’m trying to figure out how and where we might use Machine Learning (ML) in our software businesses, I thought I’d review all the uses I can find beyond the more general cognitive services (like speech to text, image recognition, keyword extraction, etc) that I’ve already talked about and that – by themselves – are incredibly valuable and offer a near-immediate payoff.
I was a little shocked at the diversity of ways ML is being used. According to TechCrunch, more than $10 billion in venture capital has already gone to 1,500 AI/ML startups in 70 countries, a figure predicted to more than quadruple in 2017!
Since I was compiling this list, I thought I’d share it with you, though it’s just a sampling. Even so, there are more than 40 applications described here, in addition to the Cognitive Services as stand-alone ML tools.
For those in the LA area who want to understand what all the Lumberjack System hype is about, this is your chance to get an in-depth look at Lumberjack System and get your questions answered. Everyone leaves with a one-month free trial.
While projecting the changes that Artificial Intelligence (AI) and Machine Learning (ML) might bring about in the future, it was interesting to look back and see just what didn’t exist 10 years ago. Keep in mind that the Internet itself is only just over 30 years old.
It’s relatively easy to get an overview of the current state of Artificial Intelligence (AI). It’s probably easier still to understand the benefits of Machine Learning (ML), particularly ML that’s already applied to common tasks we can benefit from now, because we’re fitting those new technologies within existing frameworks.
What is much harder to determine is how machine learning will be directly applied to post production processes, and what role AI will take in our collective production future.
In September 2010 Apple purchased Swedish facial recognition company Polar Rose, and today we learn they’ve purchased Israeli startup RealFace: “a cybersecurity and machine learning firm specializing in facial recognition technology”.
The difference between the two purchases is that this latest one is based on machine learning.
…the startup had developed a unique facial recognition technology that integrates artificial intelligence and “brings back human perception to digital processes”. RealFace’s software is said to use proprietary IP in the field of “frictionless face recognition” that allows for rapid learning from facial features.
Another step towards our software identifying and labelling people in our media.
In the Overview I pointed out that most of what is being written up as Artificial Intelligence (AI) is really the work of Learning Machines/Machine Learning. We learnt that Learning Machines can improve your tax deduction, do the work of a paralegal, predict court results, analyze medical scans, and much more. It seems that every day I read of yet another application.
There are Learning Machines readily available to all comers, but there are ways to benefit from them without even using one.
Over the last couple of years I’ve become more and more interested in the ways that the research being done into Artificial Intelligence (AI) might be applied to production and post production. In this article I’ll be giving an overview of what AI is at this stage of development, and what technologies are being used. Later articles will cover immediate and future applications and implications.
Obviously the 10th anniversary blew past without me noticing, but the question came up the other night, and it turns out I posted my first article, “Why aren’t there workstation class graphics cards for Mac?”, 12 years ago. I concluded that article with:
Until we get support for these tools, there remain good reasons to go with Windows for true power graphics users.
Who would have thought we’d be having the same discussion 12 years later! Also that week I wondered whether HDV would “be something” (and for five years it was); discussed DV, HDV and ‘good enough’; the problems of transitioning 4:3 material into 16:9; and NAB 2005 rumors. Apparently I started being “officially jaded” about NAB in 2005!
I chose the theme of “The present and future of post production business and technology” and – apart from some dabbling into production itself – the theme seems as relevant now as it was then.
Final Cut Pro was at version 4 and Premiere Pro wasn’t available on macOS/OS X. We’ve come a very long way. I’ve discussed an enormous range of topics, from predictions about Final Cut Pro X to recent writings on artificial intelligence. Along the way I’ve talked business and marketing, and wondered who would buy YouTube!
That was one of my big predictive misses: I suggested copyright issues would kill any purchaser, and dismissed Google because of their existing (at the time) Google Video. But overall I have been remarkably accurate along the way.
In that first month I wrote about the rise of video as another form of literacy, a theme that had been constant for at least five years before that, and one we’re seeing come to fruition now. When there are over 2.5 million seats of the best-selling NLE it’s obvious that professional video production is now serving many more roles than just movies and television!
The evolving nature of the production industry has been another regular theme, with broadening platforms, evolving business models, and the ‘threat’ (or is it an opportunity, or a business model problem?) of unauthorized distribution.
This blog has been one of the few constants in the 12 years since I started. Back then, Intelligent Assistance was still in the training business with our series of Intelligent Assistants. They too had evolved, from the groundbreaking DV Companion for Final Cut Pro in 2000 to more creative topics, like one of the first (if not the first) color grading tutorial series for Final Cut Pro, called Practical Color Correction, and our Killer Titles series.
In February 2005 the DV Guys hadn’t yet finished their five-year run (NAB 2000–NAB 2005), nor had Greg and I created The Digital Production BuZZ to replace it. We sold the BuZZ to Larry Jordan and Associates in late 2007, and it is now part of the Thalo Artist Community.
We used the proceeds of the sale to Larry Jordan to preview our first piece of software: First Cuts for Final Cut Pro, which was a practical application of my strong interest in metadata for story building – now under my ‘Content Metadata’ umbrella – and an attempt at a knowledge system. First Cuts essentially embodied my style for creating edits, built around a very large number of interactive ‘rules of thumb’.
As software developers, we were hit very hard at first by the upheaval that was Final Cut Pro X. The preview demolished half our business for three months. Yet we saw the benefits of Final Cut Pro X very early – in fact, I published Understanding the Metadata Foundations of Final Cut Pro X five minutes after Final Cut Pro X was released.
Although I didn’t know it at the time, apparently my little book influenced Glenn Ficarra toward experimenting with Final Cut Pro X for Focus.
Final Cut Pro X – for all its disruption – has been good to us, and I think we have benefited the Final Cut Pro X community.
Because Final Cut Pro X was so controversial in 2012, we took the opportunity to be part of a reality TV show on a solar-powered boat. While that didn’t work out quite the way it was planned, it led directly to the development of Lumberjack System, which is increasingly unlocking the power of metadata for logging and pre-editing. Lumberjack became the second company in our small portfolio.
That project also led to my subsequent interest in small production kits: how small can we make a production kit while still maintaining quality? The confluence of audio synchronization for multicam clips, small cameras, alternate mounts, and small, affordable storage lets us create without the back-breaking load.
Metadata has been one of the most consistent themes throughout the blog, with 84 different posts on the subject. How we acquire and use metadata – particularly metadata about the content of shots – is my primary focus these days, which has led to an increasing interest in the evolving field of Artificial Intelligence, with 20 related articles so far. Overall, I’ve written 1377 articles in 12 years, or about one every three days on average.
Personally, Greg and I moved from Woodland Hills to Burbank in 2006, where we spent 10 happy years in Avalon Burbank. The easy walkability of Burbank and the great pool in the apartment complex helped my transition from middle-year sloth into a healthier version of myself.
In 2008 I was granted Permanent Residence as an Alien of Exceptional Ability, and many readers helped with submissions for that. In 2016 I became an American Citizen.
While in Burbank we married after only a 17.5 year ‘engagement’ and moved into our own little house a year ago in February 2016.
There’s a lot of interesting material in that 12 years. I sometimes return to older articles and get surprised by what I wrote at the time! Hopefully I will continue to be interesting over the next 12 years.
Steve Martin is the creative force behind Ripple Training and has been using and teaching Final Cut Pro since 1999. Since Final Cut Pro’s introduction, he has brought thousands of people to the application through his classes, workshops and training products. He has consulted and/or trained for Apple, Adobe, Disney, Canon, Walmart and other companies. He is also a writer, producer and avid photographer. We caught up with Steve for lunch at the 2016 Final Cut Pro X Creative Summit on October 31st.