
AI Hallucination

AI hallucination occurs when an AI model generates output that sounds confident and plausible but is factually incorrect or entirely fabricated. The term reflects that the model is not lying: it has no awareness that its output is wrong. Hallucinations arise because large language models (LLMs) produce text by predicting statistically likely word sequences learned from their training data, not by looking facts up in a verified knowledge base.
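To make that mechanism concrete, here is a toy sketch (not how production LLMs work internally, but the same patterns-over-facts principle): a bigram model that learns only which word tends to follow which in a tiny corpus. Because it recombines local patterns with no factual grounding, it can fluently assert things the corpus never said.

```python
import random
from collections import defaultdict

# Tiny corpus of true statements.
corpus = (
    "the eiffel tower is in paris . "
    "the colosseum is in rome . "
    "the eiffel tower was built in 1889 ."
).split()

# Learn bigram transitions: which words can follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(rng, start="the", max_words=12):
    """Sample a sentence by following learned transitions only."""
    words = [start]
    while len(words) < max_words and words[-1] != ".":
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

# Sampling many times surfaces fluent recombinations the corpus never
# contained, e.g. "the eiffel tower is in rome ." -- every transition
# is locally plausible, but the claim as a whole is false.
rng = random.Random(0)
for sentence in sorted({generate(rng) for _ in range(100)}):
    print(sentence)
```

The false outputs are indistinguishable in fluency from the true ones, which is exactly what makes hallucinations hard to spot by tone alone.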

Hallucinations are most common when a model is asked about specific facts, recent events, or niche topics where its training data is sparse. The practical implication: always verify AI-generated factual claims before relying on them in important contexts.
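Part of that verification can be automated. Below is a minimal sketch of one common heuristic, self-consistency sampling: ask the model the same factual question several times and treat disagreement across the samples as a hallucination warning. The `ask_model` function is a hypothetical placeholder for whatever LLM API you use, and the threshold is an assumed example value.

```python
from collections import Counter

def ask_model(question: str) -> str:
    # Hypothetical placeholder: wire this to your actual LLM API,
    # ideally with nonzero sampling temperature so runs can differ.
    raise NotImplementedError

def self_consistency_check(question: str, n_samples: int = 5,
                           threshold: float = 0.8) -> tuple[str, bool]:
    """Sample the same question repeatedly and measure agreement.

    Returns the most common answer and whether its share of samples
    meets the threshold. Low agreement is a hallucination signal;
    high agreement shows consistency, not proof of truth.
    """
    answers = [ask_model(question).strip().lower() for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, (count / n_samples) >= threshold
```

Note the limitation stated in the docstring: a model can be consistently wrong, so claims that matter should still be checked against an authoritative source.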