Req 4b — What Would You Do?
This requirement puts you in the hot seat. Your counselor will present you with five scenarios where AI creates an ethical dilemma, and you need to decide: What is the right thing to do? There is often no single “correct” answer — what matters is how you reason through the problem.

How to Approach Ethical Scenarios
When your counselor gives you a scenario, use this framework to organize your thinking:
Ethical Thinking Framework
Steps to work through any AI ethics scenario
- Identify the stakeholders: Who is affected? (The user, the company, the public, a specific group of people?)
- Identify the values in tension: What competing values are at stake? (Privacy vs. safety? Fairness vs. efficiency? Convenience vs. accuracy?)
- Consider consequences: What happens if you choose Option A? What about Option B? Who benefits and who is harmed?
- Apply the Scout Law: Is this choice trustworthy? Helpful? Fair? Kind? Ask which points of the Scout Law speak most directly to the situation.
- State your decision and reasoning: Be clear about what you would do and why. It is okay to acknowledge uncertainty.
Practice Scenarios
Here are sample scenarios to think about before you meet with your counselor. These may not be the exact ones your counselor uses, but practicing will sharpen your ethical reasoning.
Scenario 1: The AI Homework Helper
A classmate tells you they used an AI chatbot to write their entire history essay and plan to turn it in as their own work. They say everyone is doing it and the teacher will never know.
Think about:
- Is using AI to write an essay the same as cheating?
- Where is the line between using AI as a learning tool and using it to avoid learning?
- What would happen if everyone did this? Would anyone actually learn the material?
- What would you say to your classmate?
Scenario 2: The Biased Hiring System
A company uses an AI to screen job applications. It was trained on data from the past 10 years of successful employees. Someone discovers that the AI consistently ranks women lower than men for engineering positions — because most engineers in the training data were men.
Think about:
- Is the AI being “unfair” or is it reflecting real-world unfairness?
- Should the company keep using the AI? Fix it? Scrap it?
- Who is responsible — the AI developers, the company, or both?
Scenario 3: The Social Media Algorithm
A social media platform’s AI keeps recommending increasingly extreme content to a teenager because the algorithm discovered that extreme content keeps people watching longer. The teenager starts believing conspiracy theories.
Think about:
- Should the AI be designed to maximize watch time, even if the content is harmful?
- Is the platform responsible for what its algorithm recommends?
- What safeguards should exist for younger users?
Scenario 4: The Self-Driving Car Dilemma
A self-driving car’s AI must make a split-second decision: swerve left and hit a fence (risking injury to the passenger) or continue straight and hit a pedestrian who stepped into the road unexpectedly.
Think about:
- Who should the AI prioritize — the passenger or the pedestrian?
- Who decides the rules the AI follows in these situations?
- Should self-driving cars be allowed if they cannot handle every situation perfectly?
Scenario 5: The Predictive Policing Tool
A police department uses AI that analyzes crime data to predict where crimes are most likely to occur. The AI recommends sending more officers to certain neighborhoods — neighborhoods that happen to be predominantly low-income and minority communities.
Think about:
- Is the AI helping prevent crime, or is it reinforcing existing patterns of over-policing?
- How does historical bias in policing affect the data the AI was trained on?
- What would be a fairer approach?