Req 4a — Bias, Privacy & Decisions
Up to this point, we have explored what AI is, where it shows up, and how it works. Now comes arguably the most important topic of the entire merit badge: ethics. Technology is a tool, and like any tool, it can be used well or used poorly. Understanding the ethical challenges of AI is what separates someone who just uses technology from someone who uses it responsibly.

What Are AI Ethics?
AI ethics is the study of how to build and use AI systems in ways that are fair, safe, transparent, and respectful of human rights. It asks questions like:
- Who benefits from this technology — and who might be harmed?
- Is the AI making decisions that are fair to everyone?
- Who is responsible when AI makes a mistake?
- How much of our personal information should AI systems be allowed to use?
These are not easy questions, and there are rarely simple answers. That is what makes ethics so important — it requires you to think critically and consider multiple perspectives.
Bias in AI
What Is AI Bias?
AI bias occurs when an AI system produces results that are systematically unfair to certain groups of people. This usually happens because the training data — the information the AI learned from — reflects existing human biases.
How Does Bias Get Into AI?
AI does not have opinions or prejudices of its own. But it learns from data created by humans — and human data is messy. Here are some of the most common ways bias creeps in:
- Historical bias: If an AI is trained on historical hiring data from a company that favored one group of people, the AI will learn to favor that same group.
- Representation bias: If a facial recognition system is trained mostly on photos of people with lighter skin, it will be less accurate at recognizing people with darker skin.
- Measurement bias: If an AI uses zip codes as a factor in loan decisions, it may inadvertently discriminate based on race because of historical patterns of segregation.
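To see how historical bias gets "baked in," here is a minimal sketch using made-up hiring records. The data and the simple frequency-based "model" are hypothetical, invented just for illustration: every applicant is equally qualified, but because group B was hired less often in the past, the model learns to give group B worse odds.

```python
# Hypothetical historical records: (group, qualified, hired).
# Every applicant is qualified, but past decisions favored group A.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", True, False),
]

def train(records):
    """Learn each group's historical hire rate -- a stand-in for a real model."""
    rates = {}
    for group in sorted({g for g, _, _ in records}):
        outcomes = [hired for g, _, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)
for group in sorted(model):
    # Equally qualified applicants end up with very different predicted odds
    print(f"Group {group}: predicted hire chance {model[group]:.0%}")
```

A real machine learning model is far more complex, but the underlying problem is the same: if the training data reflects unfair past decisions, the model reproduces them.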
Real-World Example
In 2018, researchers discovered that some commercial facial recognition systems had error rates of up to 35% for darker-skinned women, compared to less than 1% for lighter-skinned men. The AI was not intentionally biased — it simply had far more training examples of lighter-skinned faces.
What Can Be Done?
- Diverse training data: Make sure the data represents all groups fairly.
- Regular auditing: Test AI systems to check for biased outcomes.
- Diverse teams: Include people from different backgrounds in the development process.
- Transparency: Require companies to explain how their AI makes decisions.
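The "regular auditing" idea above can be made concrete with a short sketch. The test results here are hypothetical, chosen to mirror the facial recognition disparity described below: the audit simply measures the error rate separately for each group, so a large gap between groups stands out.

```python
from collections import defaultdict

def error_rates(results):
    """results: iterable of (group, correct) pairs -> error rate per group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: 100 test cases per group
results = ([("lighter", True)] * 99 + [("lighter", False)] * 1
         + [("darker", True)] * 65 + [("darker", False)] * 35)

rates = error_rates(results)
for group, rate in sorted(rates.items()):
    # A large gap between groups is a red flag that the system needs rework
    print(f"{group}: {rate:.0%} error rate")
```

Overall accuracy alone can hide this kind of problem: the system above is 82% accurate overall, yet performs far worse for one group, which is exactly what per-group auditing is designed to catch.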
Privacy in AI
The Data Dilemma
AI systems need enormous amounts of data to learn. But that data often comes from people — their online activity, location history, photos, voice recordings, and purchasing habits. This creates a fundamental tension: AI gets better with more data, but collecting more data means less privacy for people.
What’s Being Collected?
Think about a single day in your life:
- Your phone tracks your location
- Your search engine records what you look for
- Your streaming service logs what you watch and when you pause
- Your voice assistant listens for its wake word (and sometimes records more)
- Social media tracks what you look at, what you like, and how long you linger on each post
All of this data can be used to train AI models or to target you with advertising.
Key Privacy Questions
- Consent: Did you agree to your data being collected? Did you understand what you agreed to?
- Purpose: Is your data being used only for the stated purpose, or is it being sold or shared?
- Security: Is your data stored safely, or could it be stolen in a breach?
- Deletion: Can you ask a company to delete your data? How?
AI Decision-Making
When AI Makes Decisions That Matter
AI is increasingly being used to make decisions that significantly affect people’s lives:
- Healthcare: AI helps decide which patients to prioritize or which treatments to recommend.
- Criminal justice: Some courts use AI “risk scores” to help decide bail and sentencing.
- Education: AI can determine which students get placed in advanced classes.
- Employment: AI screens resumes and decides who gets an interview.
The “Black Box” Problem
Many advanced AI systems, especially deep neural networks, are so complex that even their creators cannot fully explain why they made a specific decision. This is called the “black box” problem. If an AI denies someone a loan, and nobody can explain why, is that fair? Most people — and most ethicists — say no.
Accountability
When a human makes a bad decision, we can hold them accountable. But who is responsible when an AI makes a harmful decision?
- The company that built the AI?
- The company that deployed it?
- The person who trained it with data?
- The user who relied on its recommendation?
This is one of the most actively debated questions in AI ethics, and governments around the world are working to establish clear rules.
Additional Resources
- AI4ALL — Ethics Resources: educational resources about AI ethics designed for students, including lesson plans and discussion activities.
- Code.org — How AI Works: Ethics: free video lessons covering AI bias, fairness, and responsible use — built for middle and high school students.