CAT | Metadata
2016 was a year of consolidation and growth for Greg and me: citizenship, a green card, artificial intelligence, and a house and yard dominated the year. 2017 looks like being another interesting and exciting year.
I’d like to introduce you to our first new piece of software in about two years: FindrCat. FindrCat is an easy-to-use app that converts your Final Cut Pro X Keywords into Finder Tags, so you can then filter and search for your media via the Finder. In a world of Media Asset Management (MAM) and Digital Asset Management (DAM), this is a ‘no M’am’ asset organization tool.
The biggest advantage is that the FCP X keywords now travel with the media files, and will return to FCP X as keywords when the files are re-imported, on any system.
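For the curious, this is possible because macOS stores Finder Tags in an extended attribute on the file itself (`com.apple.metadata:_kMDItemUserTags`), encoded as a binary property list of strings. FindrCat’s internals aren’t public, so the sketch below only illustrates that underlying tag format, using Python’s standard `plistlib`:

```python
import plistlib

# macOS stores Finder tags in the extended attribute
# com.apple.metadata:_kMDItemUserTags as a binary plist array of strings.
# (FindrCat's internals aren't public; this just illustrates the format.)

def encode_finder_tags(keywords):
    """Encode a list of keyword strings as Finder-tag plist bytes."""
    return plistlib.dumps(list(keywords), fmt=plistlib.FMT_BINARY)

def decode_finder_tags(raw):
    """Decode Finder-tag plist bytes back into keyword strings."""
    return plistlib.loads(raw)

if __name__ == "__main__":
    tags = ["Interview", "B-Roll", "Day 1"]
    raw = encode_finder_tags(tags)
    print(decode_finder_tags(raw))
```

On a Mac, writing those bytes to a file’s `com.apple.metadata:_kMDItemUserTags` attribute (for example with the `xattr` command-line tool) makes the tags visible to the Finder and Spotlight, which is why the keywords travel with the file.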
I guess it won’t be any surprise that I have a lot of metadata entered in my Aperture photo library. In fact, the lack of metadata support in Photos is the reason I can’t migrate there.
The real value of metadata is in helping find photos, but sometimes the right piece of metadata is beyond value. For some paperwork related to my current husband’s ‘green card’ I needed the date of birth of my former wife. I could not remember it, but looking in my photo library, I found a photo taken on her 23rd birthday.
Of course I have the original date set on my photos, even those that I scanned from prints or slides. Because I had added the metadata when I knew it, I now had the all-important date I needed, and was able to file the paperwork.
Never underestimate the value of metadata!
As you probably all know, I have two day jobs heading Intelligent Assistance Software and Lumberjack System. We’re very proud of the work we’ve done through both companies. We make a decent income from them for sure, but what makes us particularly happy is when our tools get people’s work done faster. Editors get to go home to their families earlier, and production has less drudgery.
So it pleases us greatly when that gets recognized, as it did this trip.
In this latest episode of The Terence and Philip Show, Terry and I discuss metadata, my citizenship, smart APIs, Artificial Intelligence and more.
The extensive article by Steven Levy – The iBrain is Here – is a fascinating read on how Apple are using Machine Learning, neural networks and Artificial Intelligence across product lines. It’s well worth the time to read through, but this quote from Phil Schiller stood out:
“We use these techniques to do the things we have always wanted to do, better than we’ve been able to do,” says Schiller. “And on new things we haven’t been able to do. It’s a technique that will ultimately be a very Apple way of doing things as it evolves inside Apple and in the ways we make products.”
How could all this align with editing? Speech-to-text; keyword extraction (just like Magic Keywords in Lumberjack System); sentiment extraction; image recognition; facial detection and recognition; speech-controlled editing (if anyone really wants that); and the list goes on.
I’d like to believe the Pro Apps Team are working on this.
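To make keyword extraction concrete: the simplest form of the technique is ranking a transcript’s words by frequency after discarding common “stop words.” This is only a naive illustration of the general idea — it is emphatically not how Lumberjack’s Magic Keywords works, and the stopword list here is a made-up minimal one:

```python
import re
from collections import Counter

# A naive frequency-based keyword extractor: an illustration of the general
# technique only, not Lumberjack's Magic Keywords algorithm.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "we", "was", "for", "on", "so", "with", "this", "needs"}

def extract_keywords(transcript, top_n=5):
    """Return the top_n most frequent non-stopword words in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

if __name__ == "__main__":
    text = ("The edit suite needs metadata. Metadata travels with the media, "
            "and metadata makes the edit faster.")
    print(extract_keywords(text, top_n=3))
```

Real systems layer much smarter language models on top of this (part-of-speech tagging, named entities, phrase detection), but the input and output — transcript in, candidate keywords out — are the same.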
UPDATE: Ruslan Salakhutdinov is Apple’s first Director of AI.
I recently commented on the importance of metadata for rights management during distribution. While cleaning out my email inbox, I revisited a story from late last year on how over-the-top content providers (generally niche) can use metadata from social media and other sources to help grow their audiences.
Avid Media Composer has always been great at tracking metadata. Without accurate timecode, Media Composer would never have become established. Over the years Avid have continued to add metadata support to the app.
Reading an Avid blog on Spanish Broadcaster RTVE’s technical deployment in Rio, I was struck by this:
All of the cataloging and indexing process will be carried out by means of an autometadata software developed by the Corporación itself, which will enable the use of metadata provided by OBS and those selected by TVE’s documentary makers.
I’d love to know more about what the “Corporation” has done. Maybe we can set up a meeting for October when we’ll be back in Barcelona!
Spoiler alert: they use a lot of Avid technology!
A few days ago I wrote about metadata’s application to distribution. A recent panel discussion at the Rights and Metadata Madness conference featured case studies from Rovi, MLB and Viacom, outlining their metadata needs and the practices they’ve developed to deal with them.
The article is worth a read, but I’ll highlight the challenge outlined by Michael Jeffrey, VP of market solutions at Rovi:
A feature-length movie with a sports theme and containing content that includes music from other properties can have assets from 20-50 separate entities.
And each of those entities can have restrictions on what the maker of that movie can show, he said, adding that it’s possible you can’t show any beer cans or can’t use an actor in any promotions.
Now let’s add the formatting, duration, and other issues from my earlier post!
Google today launched a new API to help parse natural language. An API (Application Programming Interface) is a service that developers can send data to and get a response back from. Natural Language Parsing is used to understand language that is available in computer-readable form (text). Google’s API joins an increasingly long list of very smart APIs that can understand language, recognize images and much more.
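To give a feel for what “send data to, get a response back” means in practice, here is a sketch of the JSON request a developer would POST to an entity-analysis endpoint. The field names follow Google’s published Cloud Natural Language REST documentation, but treat the exact endpoint and version string as assumptions — the service has evolved since launch:

```python
import json

# Sketch of a request body for a cloud natural-language API, in the style
# of Google's Cloud Natural Language service. Field names follow its public
# REST docs; the exact endpoint/version is an assumption.
API_URL = "https://language.googleapis.com/v1/documents:analyzeEntities"

def build_entity_request(text):
    """Build the JSON body asking the API to find entities in plain text."""
    return json.dumps({
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    })

if __name__ == "__main__":
    body = build_entity_request("Apple named a new Director of AI.")
    print(body)
    # Actually sending it is an authenticated HTTP POST to API_URL,
    # e.g. via urllib.request or the google-cloud-language client library.
```

The response comes back as JSON too, listing each entity found (people, places, organizations) with a type and a salience score — exactly the kind of raw material an editing tool could turn into keywords.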
A lot has changed since I last wrote about Advances in Content Recognition late last year.