Into The World Of Deepfakes

A photo of the Pope in a puffy coat, shown above, “went viral” on social media recently, with lots of people offering comments about the Pope’s apparent choice of cold-weather gear. There was only one problem: the photo wasn’t real. Instead, it was a “deepfaked” image, created by a new and improved generation of AI image software, and it fooled millions of people.

The images of the Pontiff followed the wide circulation of deepfaked images that supposedly showed former President Trump being arrested by New York City police officers. Those photos were featured on many websites. People knew that an indictment and arrest hadn’t actually happened yet, but the images were so remarkably realistic-looking that they became a hot topic on the internet and social media apps.

It’s time to recognize that we now live in a deepfake world, folks.

In recognition of that reality, the news media is starting to run stories about what you can do to try to spot deepfaked images, like those purporting to be of the Pope in a puffy white jacket. Basically, the advice comes down to thoroughly scrutinizing images and looking at every element and feature to see whether something looks weird, incomplete, or distorted. If you carefully examine the deepfaked image of the Pope, for example, you might notice clues of deepfakery from the hands, the glasses, and the crucifix.

The problem, of course, is that people won’t do that kind of detailed analysis unless they suspect there is a reason to do so. As one person quoted in the article linked above said, she accepted the Pope deepfakes as real without a second thought. The Pope wearing a puffy coat isn’t major news. The deepfaked Trump arrest images, on the other hand, depicted what would have been a huge development, and one that could easily be checked against news websites for confirmation.

This suggests that the issue of deepfaked images is going to be problematic at the plausible margins of our world, with purported photos of celebrities, politicians, and world leaders wearing something, eating or drinking something, or otherwise doing something the social media world might be interested in. I hadn’t seen the deepfaked photos of the Pope because I don’t really do social media. But if you do dip your toe in the social media waters, you might want to pause before reposting an image that might not be real.

If the great leaps forward in AI image-generation capabilities cause people to think for a minute before making a snarky comment about a purported photo they have seen, that would be a good thing. I’m not holding my breath that this will happen, but wouldn’t it be ironic if AI deepfakery caused the social media world to be a bit more cautious?