Deepfakes

Req 5 — Deepfakes

5a. Explain what a deepfake is and how it can affect an individual.
5b. Describe what actions to take if you or someone you know is impacted by a deepfake.

Of all the topics in this merit badge, deepfakes may be the most important one for your everyday life right now. A deepfake is AI-generated or AI-manipulated media — video, audio, or images — that makes it look or sound like a real person is saying or doing something they never actually said or did. The technology has advanced so quickly that even experts sometimes struggle to tell the difference between real and fake.


How Deepfakes Work

Deepfakes are created using a type of machine learning called deep learning (that is where the “deep” in the name comes from). Here is the basic process:

  1. Data collection — The AI system is fed hundreds or thousands of images, video clips, or audio recordings of a target person.
  2. Training — The AI studies this data and learns the patterns of the person’s face, voice, and mannerisms — how their mouth moves when they speak, how their eyebrows shift when they express emotion, the unique qualities of their voice.
  3. Generation — The AI creates new media that mimics the person convincingly. It can swap someone’s face onto another person’s body, generate a completely fake video of someone speaking, or clone a voice to say anything.

The technology that makes this possible is called a Generative Adversarial Network (GAN). A GAN uses two AI models that work against each other: one generates fake content, and the other tries to detect whether it is fake. They go back and forth, and each round the fake gets more convincing. Think of it like a forger and a detective — the forger keeps getting better because the detective keeps catching mistakes.
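The forger-and-detective loop can be sketched in a few lines of toy Python. This is purely illustrative: a real GAN trains two neural networks on images or audio, not two single numbers, and all the names and values below are invented for the sketch.

```python
# Toy sketch of the adversarial loop behind a GAN (illustrative only).
# "Real data" is just the number 5.0; the forger (generator) tries to
# produce it, and the detective (discriminator) learns what real looks like.
REAL_MEAN = 5.0

def train_gan_toy(rounds: int = 50, lr: float = 0.3):
    g = 0.0        # the forger's current fake
    d = 0.0        # the detective's current idea of what real data looks like
    errors = []    # how far the fake is from real, round by round
    for _ in range(rounds):
        # Detective step: move the estimate closer to the real data
        d += lr * (REAL_MEAN - d)
        # Forger step: move the fake closer to whatever fools the detective
        g += lr * (d - g)
        errors.append(abs(REAL_MEAN - g))
    return g, errors

final_fake, errors = train_gan_toy()
# Each round the fake gets closer to the real data, just as each round
# of a GAN makes the generated media more convincing.
```

Notice that the forger only ever sees the detective's feedback, never the real data directly; that back-and-forth is the "adversarial" part of the name.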


How Deepfakes Affect People

Deepfakes are not just a technological curiosity — they cause real harm to real people. Here are the major ways:

Reputation Damage

A deepfake video or image can make it look like someone said something hateful, did something embarrassing, or was in a situation they were never in. Once shared online, this content can spread far faster than any correction. Even after a deepfake is debunked, the damage to a person’s reputation can last for years.

Emotional and Psychological Harm

Imagine discovering a fake video of yourself circulating at school — one that shows you saying or doing something you would never do. The emotional toll can be severe: anxiety, depression, social isolation, and a feeling of helplessness. For young people especially, deepfakes can be a devastating form of cyberbullying.

Financial Fraud

Criminals have used AI-cloned voices to impersonate company executives and trick employees into transferring money. In one well-known case, a deepfake voice call convinced a bank manager to authorize a $35 million transfer. On a personal level, scammers can clone the voice of a family member to make convincing phone calls asking for money.

Misinformation and Manipulation

Deepfakes of political figures, news anchors, or public officials can spread false information that looks completely real. During elections, fake videos of candidates saying outrageous things could influence how people vote — before anyone can verify the content is fake.


How to Spot a Deepfake

While deepfakes are getting harder to detect, there are still telltale signs to watch for:

Deepfake Detection Checklist

Look for these red flags when evaluating suspicious media:
  • Unnatural eye movement — Eyes that do not blink normally, look in odd directions, or seem “dead”
  • Facial boundary issues — Blurriness or distortion around the edges of the face, hairline, or jawline
  • Lighting mismatches — Shadows or lighting on the face that do not match the rest of the scene
  • Audio sync problems — Lip movements that are slightly out of sync with the words being spoken
  • Skin texture oddities — Patches of skin that look too smooth, too shiny, or inconsistent
  • Unnatural body movement — Stiff posture, jerky head movements, or a body that does not match the head
  • Source check — Ask yourself: Where did this video come from? Is it from a verified, credible source?

[Photo: A Scout leans in toward a laptop screen with a skeptical expression, analyzing whether a video is real or fake.]
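The checklist above can be turned into a simple tally. The sketch below is a hypothetical scoring function, not a real detector — the flag names and thresholds are invented for illustration, and no software checklist replaces careful human judgment.

```python
# Hypothetical red-flag tally based on the checklist above.
# Flag names and cutoffs are illustrative assumptions, not a real detector.
RED_FLAGS = [
    "unnatural_eye_movement",
    "facial_boundary_issues",
    "lighting_mismatch",
    "audio_sync_problems",
    "skin_texture_oddities",
    "unnatural_body_movement",
    "unverified_source",
]

def suspicion_verdict(observed: set) -> str:
    """Count how many checklist items were observed and give a rough verdict."""
    hits = sum(1 for flag in RED_FLAGS if flag in observed)
    if hits == 0:
        return "no obvious red flags"
    if hits <= 2:
        return "be cautious; verify the source"
    return "likely manipulated; do not share"

# Example: a video with mismatched lighting, off-sync audio, and no
# credible source trips three flags at once.
verdict = suspicion_verdict({"lighting_mismatch", "audio_sync_problems", "unverified_source"})
```

Even one red flag is a reason to slow down and check the source before sharing.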

What to Do If You Are Impacted

If you or someone you know becomes the target of a deepfake, here are the steps to take:

[Photo: A Scout shows their phone screen to a trusted adult; both look serious but calm.]

Step 1: Do Not Engage or Share

Do not respond to the person who created or shared the deepfake. Do not share it yourself — even to show others “look what someone made.” Every share increases the reach and the harm.

Step 2: Document Everything

Before anything gets taken down, save evidence. Take screenshots that include the URL, the poster’s username, the date and time, and any comments. This documentation will be important if you need to file a report later.
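The fields worth capturing can be summarized as a small record. This is illustrative only — the field names below are assumptions for the sketch, not any official reporting format; a notebook or a folder of labeled screenshots works just as well.

```python
# Illustrative record of the evidence worth saving about a deepfake post.
# Field names are invented for this sketch, not an official format.
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    url: str               # where the content was posted
    poster_username: str   # the account that shared it
    captured_at: str       # date and time you saw it, e.g. "2025-01-15 14:30"
    screenshot_file: str   # the screenshot you saved
    notes: str = ""        # comments, share counts, anything else relevant

record = EvidenceRecord(
    url="https://example.com/post/123",
    poster_username="unknown_account",
    captured_at="2025-01-15 14:30",
    screenshot_file="screenshot_001.png",
)
```

Whatever form it takes, the point is the same: capture the URL, the account, the timestamp, and a screenshot before the content disappears.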

Step 3: Report the Content

Report the deepfake to the platform where it was posted. Most social media platforms have specific policies against manipulated media and will remove it if reported. Look for the report option on the post itself or in the platform's help center.

Step 4: Tell a Trusted Adult

This is not something to handle alone. Tell a parent, guardian, school counselor, or another trusted adult. They can help you navigate the situation, contact the platform, and decide whether legal action is appropriate.

Step 5: Contact Authorities if Needed

If the deepfake is threatening, harassing, or involves a minor, it may be a criminal matter. Your trusted adult can help you contact local law enforcement, file a report with the FBI's Internet Crime Complaint Center (IC3), or, when a minor is involved, report it to the NCMEC CyberTipline.

Step 6: Seek Support

Being targeted by a deepfake can be emotionally overwhelming. It is completely normal to feel angry, scared, or embarrassed. Talk to someone you trust about how you are feeling. If you need additional support, the Crisis Text Line is available 24/7 — text HOME to 741741.


The Bigger Picture

Deepfakes are a powerful reminder that technology is not inherently good or bad — it depends on how people choose to use it. The same AI that can create harmful deepfakes can also be used to restore old family photographs, dub movies into other languages, or create educational simulations. The ethics you developed in Requirement 4 apply directly here.

As AI continues to improve, detecting deepfakes will become harder. That means the skills you are building right now — critical thinking, media literacy, and knowing when to ask for help — will only become more important over time.

Additional Resources

  • UNESCO — Deepfakes and the Crisis of Knowing: UNESCO's analysis of how deepfakes threaten trust in information and what can be done about it.
  • Cyberbullying Research Center: Research-based resources for preventing and responding to cyberbullying, including deepfake-related harassment.
  • Common Sense Media — How to Spot a Deepfake: A practical, youth-friendly guide to identifying AI-manipulated media.