Hallucinations: Yes, ChatGPT can just make sh*t up!

Ever asked ChatGPT a question and gotten an answer so confidently wrong that you actually had to laugh out loud? Yup, same! It’s called an AI hallucination. Unlike the human kind (which might involve sleep deprivation and hard drugs), ChatGPT’s hallucinations happen for a few key reasons:

1. Trained by the internet—a collection of geniuses and morons

ChatGPT learned from the internet, and we all know how questionable that can be. For every well-researched article, there’s an overenthusiastic Reddit thread swearing that the earth is flat. AI doesn’t have an internal “hmm, that sounds sketchy” filter—so if misinformation was part of its training, it might confidently present it like a guest lecturer at a conspiracy theory convention.

Example:

User prompt: “Who won the 2012 Nobel Prize in Physics?”

ChatGPT hallucination: “The 2012 Nobel Prize in Physics was awarded to Albert Einstein for his contributions to quantum mechanics.”

Fact check: Einstein died in 1955. The actual 2012 winners were Serge Haroche and David Wineland for work in quantum optics.

💡Tip: If something sounds wild, double-check with a reliable source. ChatGPT is great, but it’s not your fact-checking bestie (yet).

2. Built-in randomness—fun, but risky

To keep conversations fresh and engaging, AI has a bit of randomness baked in: instead of always picking the single most likely next word, it sometimes samples one of the runners-up. This is great when you’re writing poetry or brainstorming fun vacation spots, but not so much when you need precise, factual answers. Think of it like a friend who always adds a little extra to their stories—sometimes it’s fun, sometimes you end up down a never-ending rabbit hole. (There’s a quick code sketch at the end of this section if you want to see the dial that controls this.)

Example:

User prompt: “What are some fun facts about Napoleon?”

ChatGPT hallucination: “Napoleon was so short that he had to stand on a stool during meetings to look taller!”

Fact check: This myth has been debunked—Napoleon was actually of average height for his time, about 5’6” in modern measurements.

💡Tip: If you’re asking for hard facts, try rewording your question to be super specific. AI loves clarity!
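For the curious: if you talk to the model through OpenAI’s API instead of the chat window, that built-in randomness is an actual dial called temperature. Here’s a minimal sketch, assuming the official openai Python package (version 1 or later), an API key in your environment, and whichever chat model you have access to (gpt-4o-mini is just a placeholder). It asks the same question twice: once with the dial turned all the way down, once at the default.

```python
# pip install openai
# Assumes the v1+ openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

question = "What are some fun facts about Napoleon?"

# temperature=0 keeps answers as predictable as possible; the default (1.0)
# lets the model take more creative liberties: fun for poems, risky for facts.
for temperature in (0.0, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in any chat model you have access to
        messages=[{"role": "user", "content": question}],
        temperature=temperature,
    )
    print(f"--- temperature = {temperature} ---")
    print(response.choices[0].message.content)
```

Run it a few times and the low-temperature answers will barely budge between runs, while the default-temperature ones wander a bit more. The regular ChatGPT website doesn’t expose this dial, so there your best lever is the wording of the question itself.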

3. Talkative people pleasers—ChatGPT just wants to make you happy

ChatGPT isn’t just an AI—it’s a people pleaser in text form. It wants to be helpful, engaging, and entertaining. But sometimes, in trying too hard to be helpful, it fills in gaps with whatever sounds most reasonable. Imagine a super-eager intern who answers every question, even if they only kind of know what they’re talking about. That’s ChatGPT on a bad day.

Example:

User prompt: “Can humans breathe on Mars?”

ChatGPT hallucination: “Yes, but only for a few minutes before oxygen runs out!”

Fact check: NOPE. Mars’s atmosphere is mostly carbon dioxide at roughly 1% of Earth’s surface pressure: there’s no breathable oxygen, and a human without a spacesuit would lose consciousness within seconds.

💡Tip: If something seems off, challenge it! Ask, “Are you sure?” or “Can you provide sources?” It might backtrack faster than someone caught exaggerating their resume.

Final thoughts: Keep it cute, keep it curious

AI is an incredible tool, but like that friend who tells elaborate stories at brunch, it sometimes needs a little fact-checking. Stay curious, verify when needed, and remember—hallucinations happen, but a little skepticism goes a long way.
