Ethics in AI

Req 4a — Bias, Privacy & Decisions

4a. Research ethical concerns and responsible use in AI, including bias, privacy, and AI decision-making.

Up to this point, we have explored what AI is, where it shows up, and how it works. Now comes arguably the most important topic of the entire merit badge: ethics. Technology is a tool, and like any tool, it can be used well or used poorly. Understanding the ethical challenges of AI is what separates someone who just uses technology from someone who uses it responsibly.

[Image: A thoughtful Scout at a desk with a laptop, reading a news article about AI under warm lamp light.]

What Are AI Ethics?

AI ethics is the study of how to build and use AI systems in ways that are fair, safe, transparent, and respectful of human rights. It asks questions like:

- Is this system fair to everyone it affects?
- Who is responsible when an AI system causes harm?
- Should people be told when an AI is making a decision about them?
- How should the personal data used to train AI be collected and protected?

These are not easy questions, and there are rarely simple answers. That is what makes ethics so important — it requires you to think critically and consider multiple perspectives.


Bias in AI

What Is AI Bias?

AI bias occurs when an AI system produces results that are systematically unfair to certain groups of people. This usually happens because the training data — the information the AI learned from — reflects existing human biases.

How Does Bias Get Into AI?

AI does not have opinions or prejudices. But it learns from data created by humans — and human data is messy. Here are ways bias creeps in:

- Unrepresentative training data: if some groups appear far less often in the data, the AI performs worse for them.
- Historical bias: data that records past human decisions carries along the prejudices behind those decisions.
- Biased labels: the humans who label training examples can pass their own assumptions to the AI.
- Feedback loops: a biased system's outputs can become future training data, reinforcing the original bias.

Real-World Example

In 2018, researchers discovered that some commercial facial recognition systems had error rates of up to 35% for darker-skinned women, compared to less than 1% for lighter-skinned men. The AI was not intentionally biased — it simply had far more training examples of lighter-skinned faces.
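The gap described above can be made concrete with a simple bias audit: compute the error rate separately for each group and compare. The sketch below uses made-up numbers purely for illustration; a real audit would use the system's actual outputs on real test sets.

```python
# A minimal sketch of a bias audit: compare an AI system's error
# rates across two demographic groups. All numbers are invented.

def error_rate(predictions, labels):
    """Fraction of predictions that were wrong."""
    wrong = sum(1 for p, t in zip(predictions, labels) if p != t)
    return wrong / len(labels)

# Hypothetical results (1 = correct identity match, 0 = miss).
group_a_preds  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]   # mostly right
group_a_labels = [1] * 10
group_b_preds  = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]   # often wrong
group_b_labels = [1] * 10

rate_a = error_rate(group_a_preds, group_a_labels)
rate_b = error_rate(group_b_preds, group_b_labels)
print(f"Group A error rate: {rate_a:.0%}")   # 10%
print(f"Group B error rate: {rate_b:.0%}")   # 40%
# A large gap like this signals the system is biased against Group B.
```

The key idea is that overall accuracy can look fine while one group quietly bears most of the errors, which is why audits report results per group.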

What Can Be Done?

Researchers and companies are working on several approaches:

- Collecting more diverse and representative training data.
- Testing (auditing) systems for biased results before and after deployment.
- Building diverse development teams, so more perspectives catch problems early.
- Being transparent about how a system was trained and where it falls short.


Privacy in AI

The Data Dilemma

AI systems need enormous amounts of data to learn. But that data often comes from people — their online activity, location history, photos, voice recordings, and purchasing habits. This creates a fundamental tension: AI gets better with more data, but collecting more data means less privacy for people.

What’s Being Collected?

Think about a single day in your life:

- Your phone logs your location as you travel to school.
- Search engines record what you look up.
- Voice assistants may store recordings of what you say.
- Apps track which photos you take, view, and share.
- Stores and websites record what you buy and browse.

All of this data can be used to train AI models or to target you with advertising.
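One idea engineers use to ease this tension is to report only noisy, aggregated statistics instead of raw personal data. That is the intuition behind a technique called differential privacy. The sketch below is a toy version with invented numbers; real systems use carefully calibrated noise, not this simplified approach.

```python
import random

# A minimal sketch of a privacy-protecting idea: instead of
# reporting an exact count built from people's data, add a little
# random noise so no single person's contribution can be pinned down.

def noisy_count(true_count, noise_scale=2.0):
    # The reported answer is close to the truth, but any one
    # individual's data barely changes what gets published.
    noise = random.uniform(-noise_scale, noise_scale)
    return true_count + noise

visitors_who_searched_topic = 128   # computed from private data
print(round(noisy_count(visitors_who_searched_topic)))  # a number near 128
```

The trade-off is deliberate: the published statistic is slightly less precise, but individuals gain privacy because their exact data never leaves the system.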

Key Privacy Questions

- Who owns the data you generate: you, or the company that collects it?
- Did you truly consent, or just click "agree" without reading?
- Can you see what has been collected about you, and ask for it to be deleted?
- How securely is your data stored, and who is allowed to access it?


AI Decision-Making

When AI Makes Decisions That Matter

AI is increasingly being used to make decisions that significantly affect people’s lives:

- Whether a loan or credit card application is approved.
- Which job applicants get called for an interview.
- How medical images are screened for signs of disease.
- Which college applications get a closer look.
- Risk scores used in bail, parole, and sentencing recommendations.

The “Black Box” Problem

Many advanced AI systems, especially deep neural networks, are so complex that even their creators cannot fully explain why they made a specific decision. This is called the “black box” problem. If an AI denies someone a loan, and nobody can explain why, is that fair? Most people — and most ethicists — say no.
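To see what an explainable decision looks like, here is a sketch of a hypothetical rule-based loan checker that can always report its reasons. The rules and thresholds are invented for illustration; real lending models are far more complex, which is exactly why their decisions can be hard to explain.

```python
# A minimal sketch contrasting an explainable decision with a black
# box: this simple rule-based checker (invented for illustration)
# can always say WHY it decided -- something many deep neural
# networks cannot do.

def loan_decision(income, debt, credit_years):
    reasons = []
    if income < 30000:
        reasons.append("income below $30,000")
    if debt > income * 0.5:
        reasons.append("debt is more than half of income")
    if credit_years < 2:
        reasons.append("less than 2 years of credit history")
    approved = len(reasons) == 0
    return approved, reasons

approved, reasons = loan_decision(income=25000, debt=15000, credit_years=5)
print(approved)   # False
print(reasons)    # ['income below $30,000', 'debt is more than half of income']
```

With a system like this, a denied applicant can be told exactly which rules they failed. With a black-box model, the honest answer may be "the network's millions of weights produced a low score," which helps no one.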

Accountability

When a human makes a bad decision, we can hold them accountable. But who is responsible when an AI makes a harmful decision?

- The developers who built the system?
- The company that deployed it?
- The people who supplied the training data?
- The user who relied on its output?

This is one of the most actively debated questions in AI ethics, and governments around the world are working to establish clear rules.

AI4ALL — Ethics Resources
Educational resources about AI ethics designed for students, including lesson plans and discussion activities.

Code.org — How AI Works: Ethics
Free video lessons covering AI bias, fairness, and responsible use — built for middle and high school students.