These days, it’s hard to tell the real from the fake. You never know whether a quote, a photo, or a Facebook meme is genuine or was manufactured as part of some scheme or for some deep political purpose. Video footage seems more reliable, but we’ve all seen examples of how careful editing can change the context and the perception.
Now, it’s going to get even harder to distinguish the real from the fake. Advances in artificial intelligence and facial recognition software are enabling increasingly realistic, seemingly authentic video footage that is in fact totally fictional. The new word for the result is “deepfake,” which refers to the use of AI technology to produce or alter video so that it presents something that never occurred in reality. This rapidly improving technology is erasing the boundaries that used to let humans spot video frauds by focusing on gestures, subtle facial movements, and other “real” human behavior that computers just couldn’t effectively simulate. The avatars in even the most advanced video games still look like, well, avatars.
But that is all changing. A team of engineers from the Samsung AI Center and the Skolkovo Institute of Science and Technology in Moscow has developed new algorithms that are far more successful at replicating realistic human faces. The software is the product of studies of thousands of videos of celebrities and ordinary people talking to cameras. It focuses on “landmark” facial features and uses a neural network to convert those landmark features into convincing moving video. The new software also self-edits: it critically scans the individual video frames it produces, culls those that seem unnatural, and substitutes improved frames.
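The pipeline described above — extract landmark features, generate a frame conditioned on them, then score and cull unnatural frames — can be sketched roughly in Python. To be clear, every function below is a hypothetical stand-in (there is no real landmark detector, neural generator, or trained discriminator here); the sketch only illustrates the shape of the loop, not Samsung’s actual system.

```python
import numpy as np

def extract_landmarks(frame: np.ndarray, n_points: int = 68) -> np.ndarray:
    """Stand-in for a facial-landmark detector (real systems use a
    trained model). Returns n_points (x, y) coordinates derived
    deterministically from pixel intensities."""
    h, w = frame.shape[:2]
    idx = np.argsort(frame.reshape(-1))[:n_points]
    return np.stack([idx % w, idx // w], axis=1).astype(float)

def generate_frame(source_image: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """Stand-in for the neural generator: a real system would warp and
    repaint the source face to match the landmark layout. Here, a
    trivial brightness shift keyed to the landmark centroid stands in
    for that learned mapping."""
    shift = landmarks.mean() / 100.0
    return np.clip(source_image + shift, 0.0, 1.0)

def realism_score(frame: np.ndarray) -> float:
    """Stand-in discriminator: frames with extreme average pixel
    values are treated as less 'natural'."""
    return 1.0 - abs(float(frame.mean()) - 0.5)

def synthesize(source_image, driving_frames, threshold=0.6):
    """Produce one output frame per driving frame from a single source
    image, then cull frames whose realism score falls below threshold
    (the 'self-editing' pass described in the article)."""
    kept = []
    for drv in driving_frames:
        lm = extract_landmarks(drv)
        frame = generate_frame(source_image, lm)
        if realism_score(frame) >= threshold:
            kept.append(frame)
    return kept
```

The key point the sketch captures is that a single static source image is reused for every output frame; only the landmark layout changes from frame to frame, which is what lets the technique animate one portrait.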
As a result, the new software can produce realistic video from a single, static image. Take a look at the video of a chatty Mona Lisa embedded in this article, created by applying the new software to the single image in Leonardo da Vinci’s famous portrait, and then tell yourself that it doesn’t look astonishingly, and disturbingly, realistic. If the Mona Lisa can talk, it sure seems like we’ve crossed a new boundary in the ongoing battle of real versus fake.
Like any new technology, the AI that allows for the creation of realistic video footage from a single image could have both positive and negative applications. It’s just hard not to focus on the negative possibilities in the current era of fakery and fraud, and to wonder how this new technology might be used for political dirty tricks or other chicanery. We’re all just going to have to be increasingly skeptical about what is real and what is false, and realize that passing the “eye test” might not be much of a test anymore.