AI bias: When your chatbot plays favorites (and how we’re fixing it)

AI is everywhere—helping us write emails, choose Netflix movies, and even land jobs. But here’s the kicker: AI isn’t always fair. Sometimes it plays favorites, makes questionable decisions, and even picks up some pretty sketchy habits from its training data. Welcome to the world of AI bias. So, how does this happen, and more importantly, what are we doing to fix it?

What is AI bias?

AI bias happens when a machine learning model starts making unfair or skewed decisions because of the data it was trained on. Since AI learns from past data, it can accidentally pick up human prejudices and run with them.

Cringeworthy real-world AI bias fails

🤦‍♀️ The AI That Didn’t Like Women in Tech
Amazon scrapped an AI recruiting tool after discovering it was automatically downgrading resumes that included the word "women’s" (as in "women's soccer team" or "women’s college"). It learned this bias from historical hiring patterns that favored men. Oof.

🤦‍♀️ The Facial Recognition Flop
Studies from MIT and tests by the ACLU found that commercial facial recognition systems, including those from IBM, Microsoft, and Amazon, were significantly less accurate at identifying people with darker skin tones. Misidentifications like these have been linked to wrongful arrests and raised major concerns about racial bias in law enforcement.

🤦‍♀️ Doctor AI Missed the Memo
A 2019 study revealed that an AI healthcare algorithm used in U.S. hospitals systematically underestimated the health needs of black patients compared to equally sick white patients. Why? It used historical medical spending as a proxy for health needs, ignoring the fact that systemic disparities in healthcare access mean less money gets spent on black patients in the first place.

🤦‍♀️ Cops and Robo-Judges
The controversial COMPAS risk-assessment tool, used in the U.S. criminal justice system, was found to falsely flag black defendants as future reoffenders at nearly twice the rate of white defendants, even when controlling for similar criminal histories.

🤦‍♀️ Search Engine Shenanigans
Ever searched for something and felt like the suggestions were... questionable? AI-driven search results have surfaced racist, sexist, and misleading auto-completions, reflecting the biases baked into the internet itself. Google faced a backlash after users discovered that image searches for "professional hairstyles" mostly showed white women, while "unprofessional hairstyles" returned images of black women. Yikes.

Why does AI bias happen?

AI doesn’t have personal opinions (thankfully), but it soaks up everything from its training data—good, bad, and really, really questionable. Bias creeps in due to:

👉🏼 Garbage in, garbage out: If AI is trained mostly on one demographic (e.g., dudes in finance), it might not understand or fairly assess others.

👉🏼 History repeats itself: AI loves patterns, and if historical data has bias (spoiler: it does), AI will reinforce those patterns.

👉🏼 Oops, bad code: Sometimes AI models are designed in ways that unintentionally favor certain groups.

👉🏼 Lack of perspective: If the team building AI lacks diversity, they might not spot biases lurking in the code.

How we’re teaching AI to be less judgmental

Good news! AI bias isn’t a forever problem. Researchers and developers are coming up with ways to make AI play nice with everyone.

Give AI a better diet (AKA better training data)

AI is only as good as what it learns. Feeding it diverse, well-balanced data helps reduce bias from the start.
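One simple way to "balance the diet" is reweighting: if one demographic group dominates the training data, give each example a weight inversely proportional to how common its group is, so rare groups aren't drowned out. Here's a toy sketch (the `balance_weights` helper and the group labels are made up for illustration, not from any real pipeline):

```python
from collections import Counter

def balance_weights(groups):
    """Assign each training example a weight inversely proportional to
    how often its demographic group appears, so that every group
    contributes equally to training overall."""
    counts = Counter(groups)
    total = len(groups)
    return [total / (len(counts) * counts[g]) for g in groups]

# Toy dataset: four examples from group "A", one from group "B"
weights = balance_weights(["A", "A", "A", "A", "B"])
# The lone "B" example now carries as much total weight as all four "A"s
```

Most real training frameworks accept per-example weights like these, so the fix slots in without changing the model itself.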

AI report cards (bias audits & transparency)

AI companies are running tests to catch biases before the AI goes rogue. More transparency = more accountability.
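A basic bias audit can be surprisingly simple: compare the rate of positive decisions across groups. A common rule of thumb (the "four-fifths rule" from U.S. employment law) says the lowest group's rate should be at least 80% of the highest group's. This toy sketch (function name and data invented for illustration) shows the idea:

```python
def audit_selection_rates(decisions, groups):
    """Compute the fraction of positive decisions per group, plus the
    disparate-impact ratio (lowest rate / highest rate). A ratio below
    0.8 fails the common 'four-fifths' rule of thumb."""
    rates = {}
    for g in set(groups):
        picks = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(picks) / len(picks)
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Toy audit of a hiring model: 1 = recommended for interview, 0 = rejected
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["men", "men", "men", "men", "women", "women", "women", "women"]
rates, ratio = audit_selection_rates(decisions, groups)
# Here men get picked 75% of the time, women 25%: ratio ~0.33, well
# below 0.8, so this model would get flagged for a closer look
```

Real audits go further (error rates per group, intersectional slices), but even this one-liner-level check catches the Amazon-recruiter style of failure.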

Keeping humans in the loop

Instead of letting AI make all the calls, human oversight ensures questionable decisions get flagged before they cause problems.
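In practice, "human in the loop" often means routing by confidence: the model handles the clear-cut cases, and anything in the gray zone gets kicked to a person. A minimal sketch, with made-up thresholds:

```python
def route_decision(score, low=0.3, high=0.7):
    """Route a model's confidence score: confident calls go through
    automatically, borderline ones get flagged for a human reviewer.
    The 0.3 / 0.7 thresholds here are arbitrary examples."""
    if score >= high:
        return "auto-approve"
    if score <= low:
        return "auto-reject"
    return "human review"
```

Tightening the thresholds sends more cases to humans (safer, slower); loosening them automates more (faster, riskier). That trade-off is a policy decision, not a modeling one.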

Making AI play fair (fairness algorithms)

Developers are tweaking AI models to detect and fix biases before they affect real people.
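One family of fairness fixes is post-processing: leave the model alone, but adjust its decision thresholds per group so each group ends up with a similar selection rate (a rough version of "demographic parity"). A toy sketch, with an invented helper name; real implementations handle ties and trade-offs more carefully:

```python
def equalize_rates(scores, groups, target_rate=0.5):
    """Pick a separate score threshold per group so that each group
    gets (roughly) the same fraction of positive decisions."""
    thresholds = {}
    for g in set(groups):
        g_scores = sorted(
            (s for s, gg in zip(scores, groups) if gg == g), reverse=True
        )
        k = max(1, round(target_rate * len(g_scores)))
        thresholds[g] = g_scores[k - 1]  # k-th highest score in the group
    return [s >= thresholds[g] for s, g in zip(scores, groups)]

# Group B scores lower overall (maybe the training data shortchanged it),
# but per-group thresholds still select the top half of EACH group
scores = [0.9, 0.8, 0.4, 0.3, 0.6, 0.5, 0.2, 0.1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
selected = equalize_rates(scores, groups)
```

Note the catch: equalizing outcomes this way is one definition of fairness among several, and different definitions can mathematically conflict, which is why this stays an active research area.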

More diversity in AI development

When different perspectives are in the room, AI gets built in a way that works for everyone.

Final thoughts

AI bias is like that rowdy kid in class—you can’t ignore it, but you CAN correct it. By improving training data, keeping AI accountable, and making sure humans stay in charge, we can make AI work for all of us.

So, next time your chatbot gives a weirdly biased response, just know: AI isn’t perfect yet, but we’re working on it.

Lisa Kilker

I explore the ever-evolving world of AI with a mix of curiosity, creativity, and a touch of caffeine. Whether it’s breaking down complex AI concepts, diving into chatbot tech, or just geeking out over the latest advancements, I’m here to help make AI fun, approachable, and actually useful.

https://www.linkedin.com/in/lisakilker/