Our Only Defense Against “Deepfake” Videos

For those of you who believe everything you see, be ready to second-guess your eyes, because any video you watch could now be fake. Through the power of editing and computer science, convincing fake videos can now be generated, and honestly, you should be scared of the possibilities.

The videos are called “deepfakes”: videos created to make it appear as if someone is saying or doing something they never actually did. Are you scared yet? Everyone should be on alert for blackmail.

Obviously, video manipulation is nothing new, but deepfakes are much harder to spot than older forms of editing.

Deepfakes work by training a neural network to model the shape of the target’s mouth; the synthetic mouth is then composited onto real footage of them, controlled by someone else’s speech. It is more or less how Paul Walker’s brothers stood in for him, with his face digitally added, for the ending of Furious 7.
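The structure behind this can be sketched in a few lines. This is a hypothetical toy, not FakeApp’s actual code: simple linear maps stand in for the convolutional networks a real system would use, but the arrangement (one shared encoder plus one decoder per identity, with the swap done by routing one person’s face through the other person’s decoder) is the classic deepfake recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
D, LATENT = 64 * 64 * 3, 256  # flattened 64x64 RGB face crop, latent size

# Shared encoder weights plus one decoder per identity (hypothetical
# linear stand-ins for the networks a real deepfake model would train).
W_enc = rng.normal(scale=0.01, size=(LATENT, D))
W_dec_a = rng.normal(scale=0.01, size=(D, LATENT))
W_dec_b = rng.normal(scale=0.01, size=(D, LATENT))

def encode(face):
    # The same encoder is used for both people, so it learns
    # features common to any face (pose, expression, lighting).
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    # Each identity gets its own decoder, which learns to paint
    # that person's face over whatever expression is encoded.
    return W_dec @ latent

# Training would fit (encode -> decode with W_dec_a) on A's frames and
# (encode -> decode with W_dec_b) on B's. Swapping routes A's frame
# through B's decoder, producing B's face with A's expression:
frame_of_a = rng.random(D)
swapped = decode(encode(frame_of_a), W_dec_b)
print(swapped.shape)  # (12288,)
```

The key design point is the shared encoder: because both identities pass through it, the latent code captures expression rather than identity, which is what makes the swap work.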

I first noticed this practice when Snapchat created its face-swap filter, which swaps the faces of two people looking into the phone. That was not too concerning, so I kept my worries to myself. However, when the gender-swap filter arrived, letting a man appear as a woman and vice versa, I knew technology had advanced to a dangerous place.

What I find most troubling about the deepfake process is that it does not take much to pull off. You know the joke, “there’s an app for that”? Well, there is one for creating deepfake videos as well, called FakeApp.

FakeApp is a free tool that anyone can access, as its anonymous developers have left the code open to the public; it is essentially open source.

“We are outgunned,” said Hany Farid, a computer science professor at the University of California, Berkeley. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”

Creating deepfake videos is not too difficult; it just takes time and source material. That is why a chief concern is presidential candidates, who are constantly giving speeches on camera, handing bad actors hours of footage to study. With this software, political sabotage and propaganda become very real threats.

For example, University of Washington researchers demonstrated the technique on President Barack Obama, collecting 14 hours of footage of him and then making him appear to say that his employees can survive on $8 an hour in New York City.

This was just a demonstration, but it shows that the possibilities of what someone can be made to appear to say are endless. The enemies of political candidates will not be so gentle about the words they put in someone’s mouth.

Although this technology seems very damning, some remain hopeful, such as University of Washington professor Ira Kemelmacher-Shlizerman.

“We’re developing technology; every technology can be used in some negative way, and so we all should work toward making sure it’s not going to happen,” Kemelmacher-Shlizerman told BBC News. “And even one of the interesting directions is once you know how to create something, you know how to reverse engineer it, and so one could create methods for identifying edited videos versus real videos.”

Farid is doing just that: he is developing software to spot deepfake videos. He hopes to launch a website where traditional news organizations can run videos through his software before reporting them as fact.

In an effort to protect the candidates of the 2020 presidential election, Farid has begun what he calls “fingerprinting”: downloading a significant amount of footage of each individual to learn how they move when they talk.
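The intuition behind fingerprinting can be sketched as follows. This is an illustrative toy, not Farid’s actual method or features: random numbers stand in for real facial-motion tracks, but the idea is the same. A person’s genuine facial motions are correlated in a characteristic way (the mouth, jaw, and cheeks move together), so a clip whose motion statistics drift far from the stored pattern is suspect.

```python
import numpy as np

rng = np.random.default_rng(1)

def talking_signals(n=500):
    # Toy facial-motion measurements: mouth opening drives jaw and
    # cheek movement, giving this "person" a consistent motion style.
    mouth = rng.random(n)
    jaw = 0.8 * mouth + 0.2 * rng.random(n)
    cheek = 0.6 * mouth + 0.4 * rng.random(n)
    return np.vstack([mouth, jaw, cheek])

def fingerprint(tracks):
    # The "fingerprint": how the motion signals correlate with one
    # another while the person speaks.
    return np.corrcoef(tracks)

def distance(fp_a, fp_b):
    # Mean absolute difference between two correlation patterns.
    return np.abs(fp_a - fp_b).mean()

stored = fingerprint(talking_signals())        # built from real footage
same_person = fingerprint(talking_signals())   # a new genuine clip
impostor = fingerprint(rng.random((3, 500)))   # uncorrelated fake motion

print(distance(stored, same_person) < distance(stored, impostor))  # True
```

A deepfake whose mouth is driven by a different performer breaks these correlations, which is why collecting hours of a candidate’s footage ahead of time gives defenders a baseline to compare against.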

“By the end of this calendar year,” Farid said, “the goal is that we will have most, if not all, of the candidates fingerprinted.”
