Demystifying Deepfakes: 3 Truths About AI-Generated Videos

by Diana Drake

Recently, amidst all the Twitter memes and hashtags, something big was trending.

Del Harvey, vice president of trust and safety at Twitter, announced a draft of how the social-media platform plans to handle “synthetic and manipulated” media that purposely tries to mislead or confuse people. In her blog post, Harvey said, “Twitter may: place a notice next to Tweets that share synthetic or manipulated media; warn people before they share or like Tweets with synthetic or manipulated media; or add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.”

One concept that has been getting lots of attention lately helped to inspire Twitter’s announcement: deepfakes.

A deepfake is an artificial intelligence-generated video that shows someone saying or doing something that he or she never actually did. These altered videos use neural networks – computing systems that learn to perform tasks by considering examples and recognizing patterns – to overlay someone’s face onto existing footage. For example, Facebook’s Mark Zuckerberg was deepfaked this year when a video of him showed up online preaching about Facebook’s powers – but it wasn’t actually Zuckerberg. The video had been altered to look like him.

As Zuckerberg and other deepfaked celebs made news in recent months, the deepfake discussion also escalated at the University of Pennsylvania in Philadelphia, Pa., where Knowledge@Wharton High School is based. During PennApps XX in September – one of the world’s largest college hackathons for computer programmers and others to collaborate intensively on software projects – a team of four local students took home the grand prize, beating out some 250 teams from more than 750 high schools and colleges. Their winning project was DeFake, described as “a browser extension that uses machine learning to assess the likelihood that a given video has been subtly manipulated.” It’s an app designed to detect deepfakes.

KWHS caught up with Sofiya Lysenko, a senior at Abington Senior High School in Pennsylvania and a leader of the DeFake team, to learn more about deepfake technology. “I think what is so fascinating about deepfakes is how difficult they are to detect with known computational methods,” noted Lysenko, who has competed in PennApps for several years and has also won attention for other machine learning projects. When she was 14, for instance, she created a program that could predict the next mutation in the Zika virus. “Our team was stuck in the very beginning about how to resolve [deepfake detection]. We investigated several methods, which ultimately we found to be unsatisfactory, until we tried and were successful with the final methods of the project,” added Lysenko, who created DeFake’s machine-learning algorithm that helps determine if a video is fake or real. “Machine learning and computer vision [a field of computer science that enables computers to see, identify and process images in the same way that human vision does] are becoming interesting topics to learn because of all the applications that stem from them, such as deepfakes.”
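
The article doesn’t detail DeFake’s actual algorithm, so the following is only a rough, hypothetical sketch of one common baseline for this kind of tool: a small convolutional network scores individual sampled frames as real or fake, and the per-frame probabilities are averaged into a manipulation likelihood for the whole clip. All class names, layer sizes, and the random “frames” below are invented for the example.

```python
# Illustrative sketch only -- not DeFake's actual algorithm, which isn't
# described in the article. A common baseline: score each sampled frame
# with a small CNN, then average the per-frame "fake" probabilities.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN that scores one 64x64 RGB frame as real (0) vs. fake (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x))  # one logit per frame

def video_fake_score(model, frames):
    """Average fake probability over the sampled frames of one video."""
    model.eval()
    with torch.no_grad():
        return torch.sigmoid(model(frames)).mean().item()

# Demo with random tensors standing in for 8 sampled video frames.
model = FrameClassifier()
frames = torch.rand(8, 3, 64, 64)
print(f"estimated manipulation likelihood: {video_fake_score(model, frames):.2f}")
```

Untrained, the score above is meaningless; a working detector would be trained on a labeled corpus of authentic and manipulated footage, and DeFake presumably goes well beyond this baseline.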

Inspired by Lysenko’s deep research into deepfakes, here are a few additional truths to help demystify this infamous technology:

  1. Still not quite sure how this works? Michael Kearns, a computer and information science professor at the University of Pennsylvania, recently suggested that the process to create a deepfake is like a “personal trainer” for software. Speaking with The Christian Science Monitor in October, Kearns explained how a deep-learning application compares one image with another to identify distinguishing characteristics and uses that information to then create a synthetic image. Each time the program successfully identifies the differences between a fake image and a real one, “the next fake it produces becomes more seemingly authentic” – or in better shape, as the personal trainer analogy suggests (a simplified code sketch of this trainer-and-trainee loop appears just after this list). As deepfakes become more and more indistinguishable from the real thing, Kearns added this warning: “Be ever vigilant.”
  2. Growing concerns about deepfakes – and even DeFake’s recent hackathon grand-prize victory – have lots to do with the race for president in the U.S. “Deepfakes are a threat that needs to be detected due to the possibility that this could be used as a quick and deceptive form of misinformation as we approach the 2020 U.S. Presidential elections,” said Lysenko. In fact, DeFake describes the motivation for its machine-learning project like this: “The upcoming election season is predicted to be drowned out by a mass influx of fake news. Deepfakes are a new method to impersonate famous figures saying fictional things, and could be particularly influential in the outcome of this and future elections. With international misinformation becoming more common, we wanted to develop a level of protection and reality for users.” Alex Wang, a University of Pennsylvania freshman studying computer and information science in the School of Engineering and Applied Science, provided this context in an opinion piece in the Penn Political Review: “Much of the concern surrounding deepfakes centers around the 2020 election due to the existence of both large datasets and motivations to target political figures. What would the public reaction be if a doctored video of Elizabeth Warren disparaging Mexican immigrants were to be released?…Would it be legal for Joe Biden’s campaign to create negative deepfakes of opposition candidates?”
  3. In the universe of Internet interaction, deepfakes have much broader implications for how we communicate and how we make decisions based on what we believe to be true. Wharton management professor Saikat Chaudhuri, who is also the executive director of Wharton’s Mack Institute for Innovation Management, recently interviewed Sherif Hanna, vice president of ecosystem development at Truepic, a photo and video verification platform, on his SiriusXM radio show, Mastering Innovation. Hanna, whose company has developed a solution to verify the source of images, has given a great deal of thought to this issue of misrepresentation, noting that the website “thispersondoesnotexist.com” presents a completely AI-generated image every time you hit the refresh button. “We as a society and as a world at large depend on photos and videos in almost every aspect of daily life…at the same time there’s a rising tide of threats against those photos and videos that we’ve come to rely on and there’s a decline in trust of what you see,” said Hanna. “The danger for society is losing consensus around the shared sense of reality. That’s kind of a big deal if we can’t all agree on what it is that happened because we can’t agree that we trust the photos and videos that document events. It becomes very difficult to make joint decisions as a society if everyone’s perception of what happened differs substantially.”
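
The process Kearns describes in the first truth is, in effect, the adversarial training behind many deepfake generators – often implemented as a generative adversarial network (GAN), where a discriminator plays the critiquing “trainer” and a generator plays the improving “trainee.” The toy sketch below substitutes 16-number vectors for images to keep it tiny; the networks, sizes, and data are all invented for illustration.

```python
# Toy GAN loop mirroring Kearns's "personal trainer" analogy: the
# discriminator D critiques, the generator G improves. Real "images" here
# are just 16-number vectors drawn near 3.0 -- invented data for illustration.
import torch
import torch.nn as nn

dim = 16
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))  # the "trainee"
D = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))  # the "trainer"
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n):
    return torch.randn(n, dim) + 3.0  # stand-in for a dataset of real images

for step in range(2000):
    real = real_batch(64)
    fake = G(torch.randn(64, 8))

    # Trainer's critique: learn to label real samples 1 and fakes 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Trainee's workout: adjust G so its fakes are scored as "real".
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, the fakes' statistics drift toward the real distribution.
print("real mean:", real_batch(256).mean().item())          # ~3.0
print("fake mean:", G(torch.randn(256, 8)).mean().item())   # approaches ~3.0
```

This two-network tug-of-war is why each successful critique means “the next fake it produces becomes more seemingly authentic”: every round of feedback is another training session.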

Truepic and the PennApps project DeFake are working to restore and preserve truth – at least in what we see. Lysenko, who plans to pursue a career in research by leading her own research group and teaching at a university, as well as developing technologies within startups and companies, calls machine learning “truly a super power when it comes to solving some of the hardest challenges today.”

Skeptics, like Wang, point out that this superpower can also serve to make the challenge even greater, calling deepfake detection a losing battle. “The ongoing battle to detect deepfakes is a perfect mirror for the technology itself: as algorithms to detect deepfakes improve, deepfake creators adapt to changes by generating even more realistic ones.”

Is what you’re viewing real? “Be ever vigilant.”

Conversation Starters

What are deepfakes and why are they making headlines? Will deepfakes change your behavior on the Internet?

How did humor help give rise to deepfakes? Have they crossed an important line?

Sherif Hanna says, “The danger for society is losing consensus around the shared sense of reality.” Discuss and debate this topic in small groups. Is AI threatening our very existence as seekers of truth? Use the articles in the Related KWHS Stories tab to help inform your discussion. Is it possible to restore trust on the Internet?

Many say that deepfakes haven’t played a role in the U.S. election after all. What happened? Check out the Related Links with this article for a Mashable article about deepfakes during the election.

5 comments on “Demystifying Deepfakes: 3 Truths About AI-Generated Videos”

  1. I believe that there is a direct correlation between the dangers of deepfakes and the way people are swayed by fake news in the media today. If we take a look at the Coronavirus pandemic that is happening now, we can see a prime example of this; mass hysteria has broken out due to the influence of the media, which in turn has led to panic buying and vast economic consequences. So, the more people blindly listen to whatever the news tells them, the more dangerous deepfakes become. Essentially, these people will begin feeding fuel to the fire. From a recent Model UN conference I took part in, I learned that due to Moore’s Law, the rise of AI in the coming years is essentially undeniable. Thus, I believe it’s important that we turn to solutions such as project DeFake that attempt to reduce the number of people who fall “victim” to deepfakes. Furthermore, rather than directly addressing how we can minimize the number of deepfakes being shared and circulated, I think we should be asking ourselves how we can change the general public’s perception of the media so people can learn how to recognize and properly respond to this type of fake news.

  2. Not only are deepfakes very realistic and hard to distinguish from real videos, they have a tremendous influence on public opinion. As Sofiya Lysenko said, deepfakes are dangerous and could cause voters to be misled with false information, ultimately influencing the election results. The ways technology can be used to manipulate and sway the opinions of others reminded me of a presentation I did for Future Business Leaders of America on the ethics of photo manipulation in journalistic practices and its effect on public opinion. While researching for my speech, I found an interesting theory proposed by Dr. Allan Paivio. His theory is called the “Dual-Coding Theory,” and it states that humans have two cognitive processes that function separately. One process is verbal and the other is non-verbal, like an image or video, and the reaction of one can trigger a response from the other. So something like the word “dog” would cause an image of a fluffy four-legged friend to pop into most people’s heads.

    The Dual-Coding Theory can be applied to deepfakes in political campaigns. People may release deepfake videos of presidential candidates appearing to say things they never said, leading viewers to believe the candidate says and does things the viewers disagree with. Then the next time they hear that candidate’s name mentioned, the image of the deepfake video they saw previously will come to mind, and they will associate the negative things in the deepfake video with the candidate. This could leave a permanent impression on voters and cause them not to vote for that candidate in the upcoming election because of the false information they have about the candidate from the deepfake video. In order for the integrity of the United States’ elections to be maintained, this manipulation needs to be stopped. For my business project, we also reached out to the president of the National Press Photographers Association (NPPA), Akili Ramsess, to ask her how to stop photo manipulation. She referred us to the code of ethics outlined by the NPPA, in which I found many things that could also be applicable to deepfakes, such as principle five: “While photographing subjects do not intentionally contribute to, alter, or seek to alter or influence events.” I believe that there should be more organizations like Lysenko’s “DeFake” that find deepfakes as well as regulate companies and people that work with deepfakes. Technology is omnipresent in our society today, and it needs to be regulated in order to preserve our rights.

    • Replying to Andrew O

      Hey Andrew, I find your insight and unique experiences quite intriguing, and it got me thinking: What is the most effective way of stopping and preventing deepfakes? You proposed more companies and organizations similar to Lysenko’s “DeFake.” However, although that might seem to work at first glance, I don’t believe it is sustainable or practical to have private organizations try to stop the mass spread and production of deepfakes. What I propose instead is the passing of legislation that makes it a felony to produce deepfakes or aid in their production. This way, the effort to stop deepfakes is not only nationwide, but also supported and funded by the U.S. government. The government can enforce this piece of legislation by creating its own agency that takes oversight and control in preventing the use and spread of deepfakes.

      Having the government step in isn’t a new idea. In 2018, Senator Ben Sasse submitted a bill called the Malicious Deep Fake Prohibition Act of 2018. The bill was read twice in the Senate but failed to pass; it was deemed to have too many holes and to be insufficient.

      Reason.com provides a good example of why the bill Senator Ben Sasse proposed doesn’t work. Part of the bill prohibits the distribution of an audiovisual record with the intent that the distribution would facilitate tortious conduct. Let’s say, for example, Bob throws an old phone with a deepfake on it at an angry neighbor. This action might count as an act of facilitating and distributing deepfakes, and Bob might even end up with a ten-year felony because it “facilitates violence” – a punishment that is too extreme. This scenario demonstrates how the proposed bill may produce undesired consequences and needs to be refined.

      Now, there are some very important points that you bring up. For example, you talk about how deepfakes can easily sway the mind, especially in politics. Using the Dual-Coding Theory, which you explained, we know that even one image or video of a deepfake of a political candidate can forever brand a negative image into a person’s brain whenever they hear of or think of that candidate. That is the exact reason why some states are contemplating and are in the process of passing their own deepfake prohibition acts. However, not all states are, and the process is extremely slow. What we can and should do is have the United States Congress take another look at the 2018 bill proposed by Senator Sasse, refine it, and add in parts that are missing or vital to the success of this legislation. For example, it could narrow down and specify clearly what counts as a deepfake, as well as outline specific punishments for specific deepfake crimes. Only then can our country work together to prevent the making and distribution of deepfakes. With our government stepping in, there will be no need for a variety of messy privately funded organizations to work separately to try to combat the problem of deepfakes. As you mentioned in your final sentence, technology is everywhere nowadays, and we must put a leash on it in order to preserve our rights. The best way to do that is to have Congress pass a piece of legislation doing so.

  3. Deepfakes will change people’s behavior on social media because they are a reminder of how much prejudice exists. Deepfaking, for example, spreads false information about a subject. Trump also mentioned it in a speech, noting there is a lot of fake news circulating and nobody knows what to believe – a troubling situation for U.S. citizens.

    • Responding to Andrew O., I want to agree with his point about the “Dual-Coding Theory,” but add on and stress the importance of a single story and how it can affect the premise of that theory. I want to emphasize this detail and push back on his point after a recent speech I gave to my school about “The Power of Fake News in Our Society.” Although Andrew’s point about human behavior and the “Dual-Coding Theory” is generally correct, we can’t ignore the factor of a single story that can change one’s response. This was brought up as a valuable point by Chimamanda Ngozi Adichie in her TED Talk, where Adichie mentions that, in her own personal experience, she encountered many false narratives about her when she first moved to college. She was generally misinterpreted by everyone because of false information about her native country, Nigeria. This should be noted in Dr. Allan Paivio’s theory, as the “Dual-Coding Theory” neglects the factor of a single-story perspective: one can’t demystify deepfakes that easily if one has been heavily influenced by only one story throughout one’s lifetime. This might explain why people may not think logically and instead irrationally fight for their beliefs. This is a complicated yet powerful topic to discuss, because neglecting the factor of different viewpoints can cause consequential dilemmas.

      An example of these intricacies is a recent news story about a father who read an article alleging sex trafficking at a local coffee shop he lived close to. He took action based on the false information he read by storming into the cafe and shooting multiple rounds into the air. Luckily, no one was hurt, but this clearly demonstrates how the “Dual-Coding Theory” isn’t necessarily complete – if the father didn’t have kids, or if an average person had read the article, they wouldn’t have taken the drastic measure of going to the cafe and firing rounds into the air. This presents a flaw in Paivio’s theory, as many people in our society don’t strictly follow the “Dual-Coding Theory”; they are usually motivated to act based on their own experience with the topic. Therefore, I don’t fully believe that technology should be used to change one’s perspective forcefully. Instead, it should be used to educate people to make more rational decisions. The “Dual-Coding Theory” fails to mention how perspectives vary: we all grew up with different experiences, leading us to make decisions based on those separate experiences. Therefore, I believe the most effective way to demystify and stop the spread of false news is to educate people more. We can’t simply judge others based on how they generally respond under the “Dual-Coding Theory,” as people will be influenced by their own experiences.

      A relatable example of how different experiences shape different views – and why education, rather than silencing voices, is the better remedy – can be seen in the views held by people in different socioeconomic classes. People in the middle-to-upper classes might view dogs as adorable companions, while others, particularly in impoverished countries, might view them as beaten-up, stray, malnourished animals that roam the streets. Given this example, we should not be so hasty in trying to change society’s views, as we all see things differently. We should be rational and reconsider the situation before dismissing others based on our general opinions. Although deepfakes are ravaging our society, much of their influence results from the distinctive views held by different people.
      If we are all exposed to the diversity of views surrounding a topic, we may be more inclined to come to an agreement. In conclusion, I disagree with Andrew O.’s point about using technology to correct others based on how society in general responds: doing so is unreasonable, as minorities among us still hold different views, because we are all unique.
