The present and future of post production business and technology | Philip Hodgetts

October 18, 2012

But How Do You Really Feel? Someday the Computer May Know

As I’m sure you’re all aware, my special interest is in the Pre-post production area, specifically how we can automate out the boring work and optimize workflow for editors to do their thing in turning the raw material into a polished gem.

Of course, metadata is going to play a big part in doing this. In fact, story-building algorithms are relatively easy – as we demonstrate with First Cuts Studio. What is hard is deriving the necessary metadata without a human taking the time to enter it.

If we ever want to be able to judge performance, or recognize emotion in a face in a shot, we need computers that can recognize emotion. And they are learning to. From the New York Times of October 15th comes But How Do You Really Feel? Someday the Computer May Know:

In a Cairo school basement, two dozen women analyze facial expressions on laptops, training the computers to recognize anger, sadness and frustration.

At Cambridge University, an eerily realistic robotic head named Charles sits in a driving simulator, furrowing its brows, looking interested or confused.

And in a handful of American middle school classrooms this fall, computers will monitor students’ emotions in an effort to track when they are losing interest and when they are getting excited about lessons.

All three are examples of an emerging approach to technology called affective computing, which aims to give computers the ability to read users’ emotions, or “affect.”

These researchers are not the only ones working in the field. Affectiva, for example, applies its emotion-recognition research to measuring the effectiveness of advertising.

I’m constantly reminded that all the technology we need to truly automate the basic annotation (metadata) and assembly of rough string-outs exists right now. Not necessarily “finished” and not necessarily applied to our industry, but facial recognition, speech recognition, keyword extraction, emotion detection and geolocation are all established technologies, just waiting for application in new areas.
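To make the idea concrete: once clips carry metadata like keywords and a detected emotion, a rough string-out is little more than a filter-and-sort over that metadata. Here is a minimal sketch, assuming each clip has been annotated already; the `Clip` class, `rough_string_out` function and their fields are hypothetical names for illustration, not part of any shipping product:

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str
    start_tc: float                  # source timecode, in seconds
    duration: float
    keywords: set = field(default_factory=set)
    emotion: str = "neutral"         # e.g. from an affective-computing pass

def rough_string_out(clips, wanted_keywords, emotion_filter=None):
    """Select clips whose metadata matches, ordered by source timecode."""
    selected = [
        c for c in clips
        if c.keywords & wanted_keywords
        and (emotion_filter is None or c.emotion == emotion_filter)
    ]
    return sorted(selected, key=lambda c: c.start_tc)

clips = [
    Clip("B-roll", 120.0, 5.0, {"exterior"}),
    Clip("Interview A", 30.0, 12.0, {"interview"}, "happy"),
    Clip("Interview B", 10.0, 8.0, {"interview"}, "sad"),
]
cut = rough_string_out(clips, {"interview"})
# → Interview B, then Interview A (source timecode order)
```

The hard part, as the post says, is not this assembly step but getting those keyword and emotion annotations filled in automatically in the first place.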

6 comments

  • Alex Elkins · October 18, 2012 at 9:44 am

Interesting article, Philip. I like your take on the fact that much of the technology we as editors are waiting for does already exist. It’ll be interesting to see how companies employ it going forward. The key, in my opinion, lies in a kind of semi-automation, or a way of speeding up the editor’s review/logging process, which simply can’t be outright replaced by computer analysis.

    • Author comment by Philip · October 18, 2012 at 9:46 am

      My personal goal is to generate a first assembly (string out) of both a narrative and doc/reality without human input. Something that the editor can then polish and do their voodoo on!

  • Shorty · October 18, 2012 at 9:55 am

    I’m not AWAY, but I am AWARE… ;-)

    • Author comment by Philip · October 18, 2012 at 9:56 am

      I can’t work out how that makes sense.

      • Shorty · October 19, 2012 at 10:22 am

        Simply check the first line of your post.

        • Author comment by Philip · October 19, 2012 at 10:28 am

          Got it thanks.
