A Glimmer of Hope in Detecting Deepfakes

A deepfake allows us to animate any face and have it say whatever we want it to say. That is useful for resurrecting long-dead actors (or recently deceased ones, to finish a movie), but incredibly dangerous when it inevitably moves into the political arena. Exonet describes the problem in great detail in this video: in 2017 you needed a lot of money and a lot of source material to base a deepfake on; by 2022, only 20 seconds of video were required to make a believable one.

A good deepfake is very hard to identify with currently available tools, but there is a glimmer of hope. Researchers are using neural networks to observe blink rates, because deepfakes blink less often than real people! That worked well, until deepfake software added more blinking. Likewise, looking for image artifacts from the generation process worked for a while, until deepfakes got better.
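
To make the blink-rate idea concrete, here is a minimal sketch in Python of how such a check might work. It assumes an eye aspect ratio (EAR) has already been extracted for each video frame (in practice, from facial landmarks); the EAR threshold, frame rate, and the rough human baseline of 15-20 blinks per minute are illustrative assumptions, not values from the research.

```python
def count_blinks(ear_per_frame, fps, ear_threshold=0.2):
    """Count blinks in a sequence of per-frame eye aspect ratios (EAR).

    A blink is one contiguous run of frames where the EAR drops
    below ear_threshold (eyes closed).
    """
    blinks = 0
    eyes_closed = False
    for ear in ear_per_frame:
        if ear < ear_threshold and not eyes_closed:
            blinks += 1          # start of a new blink
            eyes_closed = True
        elif ear >= ear_threshold:
            eyes_closed = False  # eyes open again
    minutes = len(ear_per_frame) / fps / 60
    return blinks, blinks / minutes if minutes else 0.0


# Hypothetical usage: a real pipeline would compute EAR values from
# facial landmarks; here we just pass in a made-up sequence.
ears = [0.3] * 100 + [0.1] * 5 + [0.3] * 100  # one blink in ~7 seconds
blinks, rate = count_blinks(ears, fps=30)
if rate < 10:  # humans typically blink ~15-20 times per minute
    print(f"Only {rate:.1f} blinks/min - possible deepfake")
```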

Adobe is at the forefront of tackling the problem by focusing on the source, or authenticity, of the content. In this blog post from 2021, Adobe not only outlines a process for more accurately tracking content from its creator to the viewer, but also points out that it will help creators be better credited as their work travels the Internet.

At the heart of that post is the question of how we know something is authentic. Adobe’s solution is the Content Authenticity Initiative (CAI), a collaborative community of technology companies, creators, technologists, and journalists focused on creating open standards for proving authenticity.
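
As an illustration of the provenance idea, here is a minimal sketch. The real CAI work (now the C2PA standard) embeds cryptographically signed manifests in the file itself; the hypothetical manifest below only captures the core notion that content can be checked, byte for byte, against what its creator originally published.

```python
import hashlib

def make_manifest(content: bytes, creator: str) -> dict:
    """Record who created the content and a hash of its exact bytes.
    (The real CAI/C2PA standard uses signed, embedded manifests;
    this plain dict is a simplified stand-in.)"""
    return {"creator": creator, "sha256": hashlib.sha256(content).hexdigest()}

def verify(content: bytes, manifest: dict) -> bool:
    """True if the content still matches what the creator published."""
    return hashlib.sha256(content).hexdigest() == manifest["sha256"]

original = b"...video bytes..."
manifest = make_manifest(original, creator="Example Newsroom")

print(verify(original, manifest))                 # True - untouched
print(verify(b"...tampered bytes...", manifest))  # False - modified in transit
```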

While Adobe’s approach will have to do until we have more direct ways to detect deepfakes, when it comes to synthesized voices there is a “tell.” Researchers at the University of Florida have discovered they can identify synthesized voices by analyzing the shape of the vocal tract required to create each sound; synthesized voices often imply shapes no human anatomy could produce!
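
The researchers’ actual analysis reconstructs vocal tract shapes from the audio; as a simplified illustration of the underlying acoustics, the sketch below uses the classic uniform-tube model, in which the n-th resonance of a tract of length L is Fn = (2n-1)·c/(4L). The formant values and the 13-20 cm plausibility range are assumptions for the example, not figures from the paper.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def tract_length_cm(formant_hz: float, n: int = 1) -> float:
    """Estimate vocal tract length from the n-th formant using the
    uniform-tube (quarter-wavelength) model: Fn = (2n-1) * c / (4L)."""
    return 100 * (2 * n - 1) * SPEED_OF_SOUND / (4 * formant_hz)

# Hypothetical formants measured from an audio sample (Hz),
# close to a neutral adult vowel.
formants = [500.0, 1500.0, 2500.0]

for n, f in enumerate(formants, start=1):
    length = tract_length_cm(f, n)
    # Adult human vocal tracts are roughly 13-20 cm; values far outside
    # that range suggest the sound was not made by a human vocal tract.
    verdict = "ok" if 13 <= length <= 20 else "suspicious"
    print(f"F{n} = {f:.0f} Hz -> {length:.1f} cm ({verdict})")
```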

The problem of identifying deepfakes, visual or auditory, is not going to go away, so it will be important to educate ourselves and others to check sources before accepting what we see or hear as genuine.