In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting,[1][2] confabulation,[3] or delusion[4]) is a response generated by AI that contains false or misleading information presented as fact.[5][6][7] This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneous responses rather than perceptual experiences.[7]
For example, a chatbot powered by large language models (LLMs), such as ChatGPT, may embed plausible-sounding random falsehoods within its generated content. Researchers have recognized this issue; by 2023, analysts estimated that chatbots hallucinated as much as 27% of the time,[8] with factual errors present in 46% of generated texts.[9] Detecting and mitigating these hallucinations pose significant challenges for the practical deployment and reliability of LLMs in real-world scenarios.[8][9][10] Some researchers believe the specific term "AI hallucination" unreasonably anthropomorphizes computers.[3]