Apple's iOS facial recognition could lead to Kinect-like interaction http://tinyurl.com/448qlac
Back in 2010 Apple purchased Polar Rose, and now it seems we're seeing the first uses of that technology in a Framework for iOS developers. Frameworks are code building blocks that allow developers to use advanced technology features without needing to understand how the technology works: the Framework largely abstracts the technology away for the developer.
Now, this article tends to confuse facial detection ("there are one or more faces in this picture") with facial recognition ("I see Steven in this picture"), but it is a good primer on what can be done with facial detection and recognition in iOS applications:
- apps that track eye movement to work out where someone is looking (and then place interface or other elements there)
- detecting the emotional state of the user (this isn't part of the iOS Framework and needs additional work by the developer, but the underlying technology works)
- user identification – no password to log in; just show your face to automatically log into your user space.
My interest is, not surprisingly, in how this might be adapted to postproduction. Certainly the ability to quickly identify all shots with "Steve" in them would be valuable metadata; shots where "Steve" was "happy" would be more valuable still.
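As a sketch of how such metadata might be queried in an editing workflow (all names and data here are hypothetical; nothing in this block is an Apple API), finding the shots that contain a happy "Steve" could be a simple filter over per-shot face metadata:

```python
# Hypothetical shot metadata: each shot lists the people detected in it
# and, per person, an (also hypothetical) detected emotional state.
shots = [
    {"id": "A001", "faces": {"Steve": "happy", "Anna": "neutral"}},
    {"id": "A002", "faces": {"Anna": "happy"}},
    {"id": "A003", "faces": {"Steve": "sad"}},
]

def shots_with(person, shots, emotion=None):
    """Return ids of shots containing `person`, optionally filtered
    to shots where that person shows the given emotion."""
    return [
        s["id"] for s in shots
        if person in s["faces"]
        and (emotion is None or s["faces"][person] == emotion)
    ]

print(shots_with("Steve", shots))           # ['A001', 'A003']
print(shots_with("Steve", shots, "happy"))  # ['A001']
```

Once recognition (and, with extra work, emotion detection) produces metadata like this, the query side is trivial; the hard part is generating the tags reliably in the first place.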
Since Apple has indicated that (eventually) iOS and OS X will be feature compatible, how long before we can start integrating this metadata into editing workflows?