What is Hallucination? Meaning, Applications & Example

When AI models generate false or unsupported information.

What is Hallucination?

Hallucination in AI refers to a situation where a model generates outputs or predictions that are incorrect, nonsensical, or ungrounded in the input data. This often occurs in natural language processing or image generation models, where the model creates responses that may sound plausible but are not factually accurate.

Types of Hallucination

  1. Textual Hallucination: When language models generate text that is factually incorrect or irrelevant to the query.
  2. Visual Hallucination: When generative models create images that are distorted or feature nonexistent elements.
  3. Factual Hallucination: Occurs when a model invents information, such as producing a made-up name or event as part of a response.

Applications of Hallucination

Hallucination is a failure mode rather than a feature, but understanding it has practical uses: it informs how generative models are evaluated, how fact-checking and grounding pipelines are designed, and how guardrails are built for conversational systems.

Example of Hallucination

In a text generation task, a model may incorrectly state that “Albert Einstein invented the telephone,” even though Alexander Graham Bell is credited with that invention. This confident-sounding but ungrounded output is an example of hallucination in AI.
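To make the example concrete, below is a minimal, illustrative sketch of how such a claim might be flagged by comparing it against a small reference fact base. The fact base and the check_claim helper are hypothetical, not part of any real library; real fact-checking systems typically retrieve evidence from large knowledge sources rather than a hard-coded dictionary.

```python
# Minimal sketch: flag a hallucinated claim by checking it against a small,
# hard-coded reference fact base. Everything here is illustrative.

# Hypothetical reference facts: invention -> person credited with it.
REFERENCE_FACTS = {
    "telephone": "Alexander Graham Bell",
    "theory of relativity": "Albert Einstein",
}

def check_claim(subject: str, claimed_person: str) -> str:
    """Compare a model's claim against the reference facts.

    Returns 'supported', 'hallucination', or 'unverifiable' when the
    subject is not covered by the reference data.
    """
    expected = REFERENCE_FACTS.get(subject.lower())
    if expected is None:
        return "unverifiable"
    return "supported" if expected == claimed_person else "hallucination"

# The hallucinated output from the example: the model credits Einstein
# with inventing the telephone.
print(check_claim("telephone", "Albert Einstein"))        # -> hallucination
print(check_claim("telephone", "Alexander Graham Bell"))  # -> supported
```

The same pattern scales up in practice: extract factual claims from a model's output, retrieve supporting evidence, and mark claims that contradict or lack evidence as potential hallucinations.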

