Prius Project bicycle can change gears with a Thought!


I point to this more as an indicator of the distant future – one where direct brain control takes over from keyboard, trackpad or voice control. Still, what's being done now is an interesting indicator of where things are headed. Will it be control of software by thought, or will it go beyond that – imagining the edit or look, and having the software work off our brain waves to produce the result?

For the Prius Project bicycle, a team from Deeplocal is working on a helmet with built-in neural sensors that allow the rider's brain patterns to trigger the electronic shifters to move up or down a gear. The system is said to take just 10 minutes to learn, after which the rider can shift gears just by thinking about doing so.
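To make the "10 minutes to learn" idea concrete, here's a minimal sketch of how a short calibration session could map brain-signal features to shift commands. Everything here – the feature values, the nearest-centroid approach, the function names – is invented for illustration; the actual Deeplocal system hasn't been published.

```python
# Hypothetical sketch: calibrate on a few labeled readings, then classify
# live readings by nearest calibration centroid. Not the real system.

from statistics import mean

def calibrate(samples):
    """Average the feature vectors recorded while the rider thinks
    'shift up' and 'shift down', giving one centroid per command."""
    centroids = {}
    for label, vectors in samples.items():
        centroids[label] = [mean(dim) for dim in zip(*vectors)]
    return centroids

def classify(centroids, vector):
    """Return the command whose calibration centroid is nearest
    (squared Euclidean distance) to the live feature vector."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vector))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy calibration data: two imaginary EEG band-power features per sample.
training = {
    "shift_up":   [[0.9, 0.2], [0.8, 0.3], [1.0, 0.1]],
    "shift_down": [[0.2, 0.9], [0.3, 0.8], [0.1, 1.0]],
}
centroids = calibrate(training)
print(classify(centroids, [0.85, 0.25]))  # an "up"-like reading -> shift_up
```

The point isn't the math – it's that a short personal calibration pass, rather than a universal model, is what makes a ten-minute learning curve plausible.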

In this context I'll once again point to Tan Le's 2010 TED talk, "A headset that reads your brainwaves," worth watching alongside this Prius Project. Direct brain control of software is much closer than I ever thought.

Google Acquires Facial Recognition Company PittPatt


Facial recognition – actually identifying the person – is more advanced than facial detection – simply determining how many faces are in a shot – and is going to become an important source of postproduction metadata.

Final Cut Pro X can optionally analyze shots for facial detection, as can Premiere Pro CS5 and later. Final Cut Pro X also attempts to derive the type of shot – Wide, Medium, Medium Close-up, etc. – from the size of the face. Right now the technology is a little "hit and miss", basically unreliable. For now. These technologies will get better. Apple purchased a Swedish company last September to boost its efforts in facial recognition.
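The "shot type from face size" idea is simpler than it sounds. Here's a minimal sketch of the underlying heuristic: compare the detected face's height to the frame height. The thresholds below are invented for illustration – Apple hasn't published how Final Cut Pro X actually does it.

```python
# Sketch of deriving shot type from relative face size. Thresholds are
# guesses for illustration, not Final Cut Pro X's real values.

def shot_type(face_height, frame_height):
    """Classify a shot by how much of the frame the detected face fills."""
    ratio = face_height / frame_height
    if ratio > 0.5:
        return "Close Up"
    if ratio > 0.25:
        return "Medium Close Up"
    if ratio > 0.1:
        return "Medium"
    return "Wide"

print(shot_type(80, 1080))   # small face in frame -> Wide
print(shot_type(600, 1080))  # face fills most of the frame -> Close Up
```

You can see why it's "hit and miss": the heuristic depends entirely on the detection step getting the face bounding box right, and on there being a face in the shot at all.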

Meanwhile, Google is also building up its portfolio of recognition technologies with the purchase of PittPatt.

When we get reliable facial recognition, we'll be able to find shots of individuals across our source media wherever they appear in a shot, and we'll only have to apply a name once. We're not there yet – but we will be.
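The "apply a name once" workflow is worth sketching. Assume a recognition engine has already reduced each detected face to a feature vector (a "signature"); then naming one face lets you find every clip with a matching signature. The vectors, threshold and function names below are invented for illustration – no shipping NLE exposes it this way yet.

```python
# Sketch of name-once propagation over hypothetical face signatures.

def find_person(named_signature, clips, threshold=0.1):
    """Return the clips whose face signature is within `threshold`
    (squared Euclidean distance) of the one the editor named."""
    matches = []
    for clip, signature in clips.items():
        d = sum((a - b) ** 2 for a, b in zip(named_signature, signature))
        if d < threshold:
            matches.append(clip)
    return matches

# The editor tags one face as "Alice"; matching clips inherit the name.
alice = [0.1, 0.8, 0.3]
library = {
    "clip_01": [0.12, 0.79, 0.31],  # Alice again
    "clip_02": [0.9, 0.1, 0.5],     # someone else
    "clip_03": [0.09, 0.82, 0.28],  # Alice again
}
print(find_person(alice, library))  # -> ['clip_01', 'clip_03']
```

That one manual tag becomes searchable metadata across the whole library – which is exactly why reliable recognition matters more to postproduction than detection alone.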