The present and future of post production business and technology | Philip Hodgetts

Engadget recently had an article on transferring facial movements from a person in one video to a different person in a different video. Unlike previous approaches, this latest development requires only a few minutes of the target person’s video and correctly handles shadows.

Combine that with other research that allows us to literally “put words in people’s mouths”: type the words, and they are created in the voice of a person who never said them, completely synthesized and indistinguishable from the real thing.

Put transferred facial movements together with synthesized words in that person’s voice, and it will take a forensic operation to determine whether the results are “genuine” or created.

This is the first time I’ve taken a deep look at a TV show and worked out what I think would be the perfect metadata workflow from shoot to edit bay. I chose to look at Pie Town’s House Hunters franchise because it is so clearly built on an (obviously winning) formula, and I thought that might make it easier to apply automation or Artificial Intelligence approaches.

But first a disclaimer. I am in no way associated with Pie Town Productions. I know for certain they are not a Lumberjack System customer and am also pretty sure they – like the rest of Hollywood – build their post on Avid Media Composer (and apparently Media Central as well). This is purely a thought exercise built around a readily available example and our Lumberjack System’s capabilities.


In some ways I guess this is another example of Artificial Intelligence (by which we mean Machine Learning) taking work away from skilled technicians: human recall was replaced with facial identification at the recent Royal Wedding in the UK, where Amazon’s facial recognition technology was used to identify guests arriving at the wedding.

Users of Sky News’ livestream were able to use a “Who’s Who Live” function:

As guests arrived at St. George’s Chapel at Windsor Castle, the function identified royals and other notable guests through on-screen captions, interesting information about each celebrity and how they are connected to Prince Harry and Meghan Markle.

The function was made possible by Amazon Rekognition, a cloud-based technology that uses AI to recognize and analyze faces, as well as objects, scenes and activities in images and video. And Sky News isn’t the first to use it: C-SPAN utilizes Rekognition to tag people speaking on camera.

Rekognition is also being used by law enforcement.

Facial recognition and identification would obviously be useful for logging in reality and documentary production.
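As a sketch of how that kind of logging might start, here is a minimal example calling Amazon Rekognition’s celebrity recognition API through boto3. The frame filename and region are assumptions for illustration; a live stream or dailies workflow would sample frames at intervals rather than process a single still.

```python
import boto3

# Rekognition client; the region is an assumption for this sketch
rekognition = boto3.client("rekognition", region_name="us-east-1")

def identify_faces(frame_path: str) -> None:
    """Identify well-known faces in a single video frame."""
    with open(frame_path, "rb") as f:
        response = rekognition.recognize_celebrities(Image={"Bytes": f.read()})
    for celeb in response["CelebrityFaces"]:
        # Each match carries a name and a confidence score (0-100)
        print(f'{celeb["Name"]}: {celeb["MatchConfidence"]:.1f}%')

# Hypothetical frame grabbed from incoming footage
identify_faces("frame_0001.jpg")
```

For production logging you would map those names back to timecode, which Rekognition’s asynchronous video operations also support.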

Or more importantly, why do introverts make good editors?

Episode 81: Are Editors Introverts?

In a new Terence and Philip Show we start with the question “Should Apple be present at trade shows like NAB?” and then extend the discussion to ask whether there is still a role for big trade shows like NAB and IBC.

http://www.theterenceandphilipshow.com/episode-80-shoul…o-to-trade-shows/

I was privileged to be invited to 2018 HPA TR-X: Everything You Thought You Knew About Artificial Intelligence in M&E and What You Didn’t Know You Didn’t, on a panel titled AI Vision Panel: What Will Happen and When?

It was an interesting topic, although our panel got quite pessimistic about the future of society if we’re not very careful with how AI/Machine Learning comes into our lives, but that’s a blog post for another time.

What has really set me thinking was a comment by John Motz, CTO of GrayMeta, that his 12-year-old and her friends spend more time creating media for each other than simply consuming it.


The latest show starts by discussing a recent survey that claims TV holds the top spot in tech devices, but we aren’t so sure.

This discussion, as usual, covers a wide range of topics including internet availability, changing business models and the opportunity YouTube presents.

Episode 79: TV holds the top spot in tech devices?

A simple indicator of the growing influence and impact of Artificial Intelligence and Machine Learning is the Hollywood Professional Association’s (part of SMPTE) inclusion of it in their annual retreat.

For the 2016 Retreat I proposed a presentation on AI & ML that wasn’t deemed relevant at that time.

For the 2017 Retreat I made pretty much the same proposal, which led to a panel discussion at the Retreat that I was pleased to contribute to.

For the 2018 Retreat, the half-day tech focus on the day before the main conference is:

A half day of targeted panels, speakers and interaction, this year focusing on one of the most important topics our industry faces, Artificial Intelligence and Machine Learning.

Two years ago it wasn’t relevant.

A year ago it was one panel in three days of conference.

This year it’s the tech focus day ahead of the conference!

I’ll be back on a panel this year – Novel AI Implementations: Real World AI Case Studies – and hope to see you there. The HPA Retreat is the closest thing to TED talks for our industry and discusses the topics that will be crucial to film and television production in two to three years. Get a heads-up before everyone else.


Overcoming Imposter Syndrome

A few weeks ago we published an episode of The Terence and Philip Show that dealt with Imposter Syndrome. Yesterday I came across an article, 21 Ways to Overcome Impostor Syndrome (https://startupbros.com/21-ways-overcome-impostor-syndrome/), that seemed like a logical follow-on!

I think I mentioned it on the show, but one of the most important things when getting introduced to a new group is to get a GREAT introduction. As long as you don’t ever disprove the introduction, that’s how people will think of you. With that in mind, #14 “Say what you can” resonated with me.

Well, this impression comes from just a couple of days of reading and email newsletters, but there is quite a focus on AI across Media and Entertainment.

MESA Alliance quotes Deluxe Entertainment Services Group chief product officer Andy Shenkler as saying:

“AI is obviously playing a fairly broad role, especially with the areas that we at Deluxe are working on,” he told the Media & Entertainment Services Alliance (MESA) in a phone interview. That includes “everything from the post-production creation process, localization” around advanced language detection and auto translation – “and then even down into the distribution side of things,” he said, noting the latter was “probably the least well-known and discussed” part of the equation.
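To make the localization piece concrete, here is a minimal sketch of automated language detection feeding auto-translation, using AWS Comprehend and Translate via boto3. The choice of services, region and sample caption are mine for illustration, not necessarily what Deluxe uses.

```python
import boto3

# Clients for language detection and machine translation
# (region is an assumption for this sketch)
comprehend = boto3.client("comprehend", region_name="us-east-1")
translate = boto3.client("translate", region_name="us-east-1")

def localize(text: str, target: str = "en") -> str:
    """Detect the dominant language of a caption line, then auto-translate it."""
    detected = comprehend.detect_dominant_language(Text=text)
    source = detected["Languages"][0]["LanguageCode"]  # highest-scoring language
    result = translate.translate_text(
        Text=text, SourceLanguageCode=source, TargetLanguageCode=target
    )
    return result["TranslatedText"]

# Hypothetical caption line from localized content
print(localize("Bienvenue dans la salle de montage"))
```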

That article goes on to talk about whose technologies they use and how they use them to assign metadata to incoming assets. Speaking of content metadata (in this case about finished content, not for use in production), Roz Ho, senior vice president and general manager, consumer and metadata at TiVo, writes in a guest blog at Multichannel News:

Not only does machine learning help companies keep up with the tsunami of content, it can better enrich metadata and enable distributors to get the right entertainment in front of the right viewers at the right time.

Machine learning takes metadata beyond cast, title and descriptions, and enables content to be enhanced with many new data descriptors such as keywords, dynamic popularity ratings, and moods, to name a few.

Roz finishes with a short dissertation on how these machines, and the people enhanced by them, will be the direction we take in the future.
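As a sketch of what “beyond cast, title and descriptions” might look like in practice, here is a minimal, hypothetical record structure showing ML-derived descriptors layered onto traditional metadata. All field and function names are my own, not TiVo’s.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TitleMetadata:
    # Traditional descriptors: cast, title, synopsis
    title: str
    cast: List[str]
    synopsis: str
    # ML-derived descriptors layered on top (hypothetical fields)
    keywords: List[str] = field(default_factory=list)
    moods: List[str] = field(default_factory=list)
    popularity: float = 0.0  # dynamic score, recomputed as viewing data arrives

def enrich(record: TitleMetadata, ml_tags: Dict[str, object]) -> TitleMetadata:
    """Merge machine-generated descriptors into a traditional record."""
    record.keywords = list(ml_tags.get("keywords", []))
    record.moods = list(ml_tags.get("moods", []))
    record.popularity = float(ml_tags.get("popularity", 0.0))
    return record
```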

And out of CES some headlines:

CES 2018: Consumer Technology Association Expects Major Growth for AI in 2018

CES 2018: AI Touted Heavily by LG, Samsung, Byton on Eve of CES

It seems like every day there is news of yet another application of Machine Learning (AI) in the Media and Entertainment space, either in production – where it is helping decide what goes into production as well as helping in production – through to helping people find more appropriate content.

