Maybe I’m pushing this subject a bit hard, but I really believe we are on the cusp of a wide range of human activities being taken over by smart algorithms, also known as Machine Learning. As well as the examples I’ve already mentioned, I found an article on how an “AI” saved a woman’s life, and how it’s being used to create legal documents for people who are homeless (or about to be) in the UK.
I recently commented on the importance of metadata for rights management during distribution. While cleaning my email inbox I revisited a story from late last year, on how over-the-top content providers (generally niche) can use metadata from social media and other sources to help grow their audiences.
Apple has reportedly purchased machine learning company Turi. The expectation is that they will use it to improve other Apple products, most likely Siri. But what is machine learning, and how does it fit with the algorithms I’ve been talking about lately?
Stephen Galloway writes at The Hollywood Reporter:
The majors are still the first port of call for any significant project; they still have an unparalleled ability to get that project developed, cast, shot, marketed and into theaters; and despite extraordinary technological and economic change, they haven’t allowed any upstarts to challenge their hegemony.
He then goes on to document the changes that Amazon and Netflix have wrought on the “old bastions of Hollywood Power.”
All of which is good news: more outlets lead to more production.
Since starting work on the Lunch with Philip and Greg I’ve battled a little with the multicam, largely because I’m using it in an atypical way, although I suspect setups like mine will become more common in the future.
My solution was Automator actions, triggered by function keys, each activating an AppleScript that first switches the mode to Video Only (for angles 1 or 2) or Audio Only (for angles 3, 4 and 5) before switching to the angle. It reduces a lot of repetitive strain injury potential!
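My actual setup is Automator and AppleScript, but the dispatch logic behind it can be sketched in a few lines. This is a hypothetical model, not the scripts themselves: the angle-to-mode mapping follows what I described above, while the function names and command strings are invented for illustration.

```python
# Sketch of the angle-switching dispatch: each angle first forces the
# correct switching mode, then cuts to the angle. Angles 1-2 are
# video-only; angles 3-5 are audio-only, per the setup described above.

def switch_commands(angle):
    """Return the ordered steps for cutting to a multicam angle."""
    if angle in (1, 2):
        mode = "Video Only"
    elif angle in (3, 4, 5):
        mode = "Audio Only"
    else:
        raise ValueError(f"unexpected angle: {angle}")
    # The mode must be set *before* the angle switch, so a single
    # keypress never cuts both audio and video by mistake.
    return [f"set mode: {mode}", f"switch to angle {angle}"]

if __name__ == "__main__":
    for angle in range(1, 6):
        print(switch_commands(angle))
```

The point of modeling it this way is that one keypress does two things in a fixed order, which is exactly what saves the repetitive extra step.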
The tutorial is over at FCP.com, but here’s a little background.
While Augmented Reality and Virtual Reality often get conflated, they are very different beasts.
While Apple’s Eddy Cue says they’re not interested in original programming, that’s not 100% true: Cue said that Apple was only interested in the content space when the projects are tied to its existing products.
But that does not explain Planet of the Apps! I think Apple will be interested in original programming as soon as it’s seen to be strategically valuable. It’s a topic I’ve written on before, in a December 2009 article, What if Apple or Google simply bypassed Networks and Studios?, which runs some rough financial calculations showing that Apple could fund as much TV show and movie production as it wants!
I’m also not the only one who thinks there will be more Apple original programming in the future:
From Variety: Apple Eyes Move Into Original Programming
Fast Company (as part of a bigger piece) suggests Apple is covertly pursuing original television programming deals.
And in yesterday’s Q3 earnings call, Apple CEO Tim Cook said that we have to think about the Apple TV (the current generation with tvOS) in a different way.
Cook: The introduction of the Apple TV and tvOS last October and the subsequent OS releases, what’s coming out this fall… think of that as sort of building the foundation for what we believe can be a broader business over time. I don’t want to be more precise than that, but you shouldn’t look at what’s there today and think: “we’ve done what we want to do.” We’ve built a foundation that we can do something bigger off of.
Given that Apple is slowly repositioning itself to rely more on services revenue, and the Apple TV is a foundation… let the speculation begin.
Avid Media Composer has always been great at tracking metadata. Without accurate timecode, Media Composer would never have become established. Over the years Avid has continued to add metadata support to the app.
Reading an Avid blog on Spanish Broadcaster RTVE’s technical deployment in Rio, I was struck by this:
All of the cataloging and indexing process will be carried out by means of an autometadata software developed by the Corporación itself, which will enable the use of metadata provided by OBS and those selected by TVE’s documentary makers.
I’d love to know more about what the “Corporation” has done. Maybe we can set up a meeting for October when we’ll be back in Barcelona!
Spoiler alert: they use a lot of Avid technology!
A few days ago I wrote about metadata’s application to distribution. A recent panel discussion at the Rights and Metadata Madness conference outlined some of the challenges, with case studies from Rovi, MLB and Viacom describing their metadata needs and the practices they’ve developed to deal with them.
The article is worth a read, but I’ll highlight the challenge outlined by Michael Jeffrey, VP of market solutions at Rovi:
A feature-length movie with a sports theme and containing content that includes music from other properties can have assets from 20-50 separate entities.
And each of those entities can have restrictions on what the maker of that movie can show, he said, adding that it’s possible you can’t show any beer cans or can’t use an actor in any promotions.
Now let’s add the formatting, duration, and other issues from my earlier post!
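The clearance problem Jeffrey describes can be made concrete with a small sketch: one asset, many rights-holding entities, each attaching its own restrictions on promotional use, and any single restriction is enough to block a use. The entity names and restriction tags below are invented for illustration; this is not Rovi’s system, just the shape of the problem.

```python
# Hypothetical model of per-entity promotional restrictions on an asset.
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One of the (potentially 20-50) rights holders on an asset."""
    name: str
    banned_uses: set = field(default_factory=set)

def blockers_for(entities, use):
    """Return the names of entities whose restrictions rule out a use."""
    return [e.name for e in entities if use in e.banned_uses]

entities = [
    Entity("BeverageCo", {"show beer cans"}),
    Entity("ActorAgency", {"use actor in promotions"}),
    Entity("MusicPublisher", set()),
]

# A promotional cut is cleared only if *no* entity blocks it.
print(blockers_for(entities, "use actor in promotions"))
print(blockers_for(entities, "theatrical trailer"))
```

With dozens of entities per title, checking every proposed promo against every restriction by hand doesn’t scale, which is why structured rights metadata matters.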
Google today launched a new API to help parse natural language. An API (Application Programming Interface) lets developers send data to a service and get a response back. Natural language parsing is used to understand language that is available in computer-readable form (text). Google’s API joins an increasingly long list of very smart APIs that will understand language, recognize images and much more.
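To give a feel for the send-data-get-response pattern, here’s a rough sketch of what a request to a natural-language API like Google’s looks like. The endpoint URL and JSON field names reflect my reading of Google’s Cloud Natural Language documentation and may differ from the shipping API, so treat them as an assumption rather than a reference.

```python
# Sketch of building a request body for an entity-analysis call.
import json

# Assumed endpoint; an actual call also needs an API key or OAuth token.
API_URL = "https://language.googleapis.com/v1/documents:analyzeEntities"

def build_request(text):
    """Build the JSON body: the text to analyze, plus its encoding."""
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }

body = json.dumps(build_request("Apple bought machine learning company Turi."))
# POSTing `body` to API_URL would return the entities the service
# recognized in the sentence, with salience scores for each.
print(body)
```

The developer’s side really is that simple: structured text in, structured analysis out, with all the machine learning hidden behind the endpoint.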
A lot has changed since I last wrote about Advances in Content Recognition late last year.