My DV Expo topics
9-5 September 20 Basic Tech for Producers (and recent Film School Graduates)
In this session, technology expert and DV magazine contributor Philip Hodgetts will cover the technological choices in production and post in a non-geeky way to help producers — and others without a technical background — make good technology choices for their productions. From formats to software choices; selecting cameras to creating Web video; designing graphics that will work and much more. PRICE: $195 ($245 after Aug 31) Click here to register now.
9-5 September 21 Using Metadata For Production and Asset Management
Metadata is becoming increasingly important throughout the production cycle – from camera to asset management. In this session learn about the types of metadata in use; how each major NLE (Final Cut Pro 7, Final Cut Pro X, Premiere Pro CS 5.5 and Media Composer 5.5) handles metadata; and how we can use that metadata to speed postproduction and VFX. Once post is done, assets need to be managed through distribution and repurposing. What tools are available, how are they used, and how do they fit into the metadata structures promoted by SMPTE and other standards bodies? PRICE: $195 ($245 after Aug 31) Click here to register now.
9-5 September 22 Avoiding Postproduction Nightmares
Post expert and DV magazine contributor Philip Hodgetts details the most common (and costly) problems inadvertently created during production that will be “fixed in post.” From color correction to audio, and editing to the final QC pass on deliverables, he’ll not only reveal the tricks of the trade that he’d use to save your production, but also explain how you can avoid these costly issues in the first place. PRICE: $195 ($245 after Aug 31) | Click here to register now.
In the interest of full disclosure: I certainly expect to be paid, but I always deliver good value. There will be some overlap between the Basic Tech and Avoiding Postproduction Nightmares sessions, as both seek to make the technology understandable, but each day’s class has a different focus.
Prius Project bicycle can change gears with just a thought http://tinyurl.com/3rt66zc
This is pointed to more as an indicator of the distant future – one where direct brain control takes over from keyboard, touch pad or voice control. Still, what’s being done now is an interesting pointer to where things are heading. Will it be control of software by thought, or will it go beyond that to imagining the edit or look and having the software work off our brain waves to produce the result?
For the Prius Project bicycle, a team from Deep Local is working on a helmet with built-in neuron transmitters that allow the rider’s brain patterns to trigger the electronic shifters to move gears up or down. The system is said to take just 10 minutes to learn, after which the rider will be able to shift gears by just thinking about doing so.
I’ll once again point to Tan Le: A headset that reads your brainwaves – Tan Le (2010) – a TED video worth considering alongside this Prius Project. Direct brain control of software is much closer than I ever thought.
Google Acquires Facial Recognition Software Company PittPatt http://tinyurl.com/3dworyl
Facial Recognition – actually identifying the person – is more advanced than facial detection – simply determining how many faces are in a shot – and is going to become an important source of postproduction metadata.
Final Cut Pro X can optionally analyze shots for facial detection, as can Premiere Pro CS5 and later. Final Cut Pro X also attempts to derive the type of shot (Wide, M, MC, etc.) from the size of the face. Right now the technology is a little “hit and miss” – basically unreliable. For now. These technologies will get better. Apple purchased a Swedish company last September to boost its efforts in facial recognition.
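To make the idea concrete, here’s a minimal sketch – in Python, using OpenCV’s bundled Haar-cascade face detector – of how a tool might infer shot type from the size of the largest detected face in a frame. This is not how Final Cut Pro X or Premiere Pro actually does it; the detector choice and the size thresholds are illustrative assumptions only.

```python
# Rough shot-type classification from face size. Assumes the opencv-python
# package is installed; thresholds are guesses, not Apple's or Adobe's values.
import cv2

def classify_shot(image_path):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face detected"
    # Treat the largest detected face as the subject; its height relative to
    # the frame height is a crude proxy for how tight the framing is.
    _, _, _, face_h = max(faces, key=lambda f: f[2] * f[3])
    ratio = face_h / img.shape[0]
    if ratio > 0.5:
        return "close-up"
    elif ratio > 0.25:
        return "medium"
    return "wide"

print(classify_shot("frame.jpg"))
```

Even this toy version shows why the real tools are still hit and miss: detection fails on profiles and poor lighting, and a single face-size ratio can’t distinguish, say, a medium close-up from a tight two-shot.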
Meanwhile Google are also building up their portfolio of recognition technologies with the purchase of PittPatt.
When we get reliable facial recognition, we’ll be able to find shots of individuals across our source media wherever they appear in a shot, and we’ll only have to apply a name once. We’re not there yet – but it’s coming.