Hallucination: Meaning, Applications & Example
When AI models generate false or unsupported information.
What is Hallucination?
Hallucination in AI refers to a situation where a model generates outputs or predictions that are incorrect, nonsensical, or ungrounded in the input data. This often occurs in natural language processing or image generation models, where the model creates responses that may sound plausible but are not factually accurate.
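As a rough illustration of what "ungrounded in the input data" can mean in practice, the sketch below flags a generated sentence when few of its words appear in the source text the model was given. The word-overlap heuristic, threshold, and example strings are assumptions chosen purely for demonstration; real hallucination detection relies on much more sophisticated methods.

```python
# Minimal sketch of a naive groundedness check (illustrative only).
# A generated sentence whose words barely overlap with the source text
# is flagged as a possible hallucination. The 0.5 threshold and the
# whitespace tokenization are arbitrary choices for this example.

def is_grounded(sentence: str, source_text: str, threshold: float = 0.5) -> bool:
    """Return True if enough of the sentence's words occur in the source text."""
    sentence_words = set(sentence.lower().split())
    source_words = set(source_text.lower().split())
    if not sentence_words:
        return True
    overlap = len(sentence_words & source_words) / len(sentence_words)
    return overlap >= threshold


source = "The report covers quarterly revenue and staffing changes."
generated = "The company also announced a merger with a rival firm."

if not is_grounded(generated, source):
    print("Possible hallucination: claim is not supported by the source text.")
```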
Types of Hallucination
- Textual Hallucination: When language models generate text that is factually incorrect or irrelevant to the query.
- Visual Hallucination: When generative models create images that are distorted or feature nonexistent elements.
- Factual Hallucination: When a model invents information, such as producing a made-up name or event as part of a response.
Where Hallucination Occurs
- Chatbots: Hallucination in chatbots can result in misleading or fabricated responses, reducing user trust.
- Text Generation: In AI-written articles, hallucination may result in invented facts or inaccurate details.
- Image Generation: Hallucination in generative adversarial networks (GANs) can lead to unrealistic or inconsistent images.
Example of Hallucination
In a text generation task, a model may incorrectly state that “Albert Einstein invented the telephone,” even though Alexander Graham Bell is credited with that invention. This incorrect output is an example of hallucination in AI.
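The sketch below makes this example concrete in a toy way: a generated “X invented Y” claim is compared against a small, hand-written knowledge base, and the mismatch is reported as a hallucination. The knowledge base, function name, and matching rule are all hypothetical and exist only to illustrate the idea.

```python
# Toy illustration (not a production fact-checker): check the claim from the
# example above against a tiny, hand-written knowledge base mapping an
# invention to the person credited with it.

knowledge_base = {
    "telephone": "Alexander Graham Bell",
    "theory of relativity": "Albert Einstein",
}

def check_claim(subject: str, invention: str) -> str:
    """Compare a '<subject> invented <invention>' claim to the knowledge base."""
    credited = knowledge_base.get(invention)
    if credited is None:
        return "unverifiable: invention not in knowledge base"
    if credited == subject:
        return "supported"
    return f"hallucination: {invention} is credited to {credited}"

print(check_claim("Albert Einstein", "telephone"))
# -> hallucination: telephone is credited to Alexander Graham Bell
```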