What is AI Hallucination?
An AI hallucination happens when an LLM confidently states something that isn't true: fabricating statistics, citing nonexistent papers, describing events that never occurred, or inventing URLs.
A real example
User: “Cite three studies on melatonin’s effects.”
ChatGPT: “1. Smith et al. (2019), Journal of Sleep Research, vol 28, demonstrated… 2. Park & Lee (2020), Nature Medicine…”
It sounds professional — but the studies don’t exist. Author names, volume numbers, conclusions — all invented.
Why LLMs hallucinate
LLMs have no built-in fact-checking mechanism. At their core, they predict the most likely next token given the text so far. When the answer isn't well represented in their training data, they pattern-match toward plausible-sounding text instead of saying “I don't know.”
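A toy sketch of why this happens: sampling always returns some continuation, and nothing in the loop checks it against reality. The probabilities below are invented purely for illustration.

```python
import random

# Toy illustration, not a real model: a next-token predictor always emits
# *some* continuation. There is no separate fact-checking step, and
# "I don't know" only wins if it happens to be the likely continuation.
# The candidate strings and probabilities are invented for demonstration.
next_token_probs = {
    "Smith et al. (2019), J. Sleep Res. ...": 0.45,  # plausible-sounding
    "Park & Lee (2020), Nature Medicine ...": 0.35,  # also plausible-sounding
    "I don't know.": 0.20,                           # rarely the likeliest
}

def sample_next(probs: dict[str, float]) -> str:
    """Weighted random choice over candidate continuations (like temperature-1 sampling)."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next(next_token_probs))  # usually a confident-sounding citation
```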
Things that increase hallucination:
- Asking about events after the training cutoff
- Niche topics with little training data
- Requests for very specific citations (numbers, URLs, names)
- Vague or context-poor prompts
How to reduce hallucination
1. Use RAG
Let the model look up real documents before answering — the most effective fix. See RAG.
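A minimal sketch of the idea. Keyword overlap stands in for a real embedding index, and the function names are placeholders rather than any particular library's API; the returned prompt would then be sent to whatever LLM you use:

```python
# Minimal RAG sketch: retrieve real documents, then force the model to
# answer only from them. Keyword overlap is a toy stand-in for a real
# embedding / vector index.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many query words they share (toy retriever)."""
    q_words = set(query.lower().split())
    by_overlap = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return by_overlap[:k]

def grounded_prompt(query: str, documents: list[str]) -> str:
    """Build a prompt that restricts the model to the retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer ONLY from the context below. If the context does not "
        "contain the answer, say \"I don't know.\"\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Excerpt from a real melatonin study would go here.",
    "An unrelated document about cafe opening hours.",
]
print(grounded_prompt("What does melatonin do to sleep onset?", docs))
```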
2. Enable web search
Claude, ChatGPT, and Perplexity can search in real time. Turn it on for current events.
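In chat apps this is a toggle; via API it's a tool. For example, the Anthropic API exposes web search as a server-side tool. Treat the tool-type string and model name below as assumptions to check against the current docs, since both are versioned and change over time:

```python
# Hedged sketch of server-side web search with the Anthropic SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",   # assumed model name; check current docs
    max_tokens=1024,
    tools=[{
        "type": "web_search_20250305",  # assumed versioned tool type
        "name": "web_search",
        "max_uses": 3,                  # cap the number of searches per request
    }],
    messages=[{"role": "user", "content": "What happened in AI news this week?"}],
)
print(response.content)
```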
3. Demand sources
“Cite specific sources (URL or paper). If unsure, say ‘source unknown’.”
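One way to bake this into every request is a system prompt. A sketch using the OpenAI SDK (the model name is just an example; any chat API works the same way):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[
        # The system prompt applies the source-demand to every answer.
        {"role": "system", "content": (
            "Cite a specific source (URL or paper) for every factual claim. "
            "If you are not sure of the source, write 'source unknown' "
            "instead of guessing."
        )},
        {"role": "user", "content": "Summarize melatonin's effects on sleep onset."},
    ],
)
print(response.choices[0].message.content)
```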
4. Lower the temperature
Set temperature: 0 when accuracy matters (e.g., document summarization).
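A sketch of the same kind of call with sampling disabled. Note that temperature 0 removes run-to-run randomness; it doesn't give the model facts it lacks:

```python
from openai import OpenAI

client = OpenAI()
# temperature=0 is (near-)greedy decoding: the model takes its single
# likeliest token at each step instead of sampling from the distribution.
response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    temperature=0,
    messages=[{"role": "user", "content": "Summarize the report pasted below."}],
)
print(response.choices[0].message.content)
```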
5. Verify
Always check important figures and citations before using them. Don’t trust LLMs blindly.
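For paper citations, one quick automated check is to look the DOI up in Crossref's public REST API; a 404 is a strong hint the citation was invented. (Fabricated citations often omit DOIs entirely, in which case search the title by hand.) A sketch:

```python
# Check whether a cited DOI actually resolves in the Crossref registry.
import requests

def doi_exists(doi: str) -> bool:
    """True if Crossref knows this DOI; 404 suggests a fabricated citation."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_exists("10.1038/nature14539"))        # real paper -> True
print(doi_exists("10.9999/this.does.not.exist"))  # invented -> likely False
```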
Hallucination isn’t always bad
For creative work — fiction, brainstorming, poetry — “hallucination” is just creativity. The problem only arises when factual accuracy matters.
Related
- RAG — the #1 mitigation
- Prompt Engineering