DEV Community

Ank

Posted on
What are AI hallucinations?

AI hallucination is a phenomenon wherein a large language model (LLM), such as a generative AI chatbot, or a computer vision tool perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

Generally, when a user makes a request of a generative AI tool, they expect an output that appropriately addresses the prompt (that is, a correct answer to a question). Sometimes, however, AI algorithms produce outputs that are not grounded in the training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. In other words, the model "hallucinates" the response.

The term may seem paradoxical, given that hallucinations are typically associated with human or animal brains, not machines. But from a metaphorical standpoint, hallucination accurately describes these outputs, especially in the case of image and pattern recognition (where outputs can be truly surreal in appearance).

AI hallucinations are similar to how humans sometimes see figures in the clouds or faces on the moon. In the case of AI, these misinterpretations occur due to various factors, including overfitting, training data bias/inaccuracy and high model complexity.
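Overfitting, one of the causes mentioned above, can be demonstrated in miniature. The sketch below (an illustrative example using NumPy, not tied to any particular AI system) fits polynomials to data that is pure noise: the high-degree model "discovers" a pattern that does not exist, driving its training error toward zero while its error on fresh data stays high, which is loosely analogous to a model hallucinating structure it was never given.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: y is pure noise around zero, so there is
# no genuine pattern for any model to learn.
x_train = np.linspace(0, 1, 10)
y_train = rng.normal(0.0, 0.3, size=10)
x_test = np.linspace(0, 1, 100)
y_test = rng.normal(0.0, 0.3, size=100)

def fit_poly(degree):
    """Least-squares polynomial fit; returns (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# A degree-9 polynomial can pass through all 10 training points,
# "seeing" a pattern in noise; a degree-1 line cannot.
for degree in (1, 9):
    tr, te = fit_poly(degree)
    print(f"degree {degree}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

The degree-9 fit memorizes the noise (near-zero training error) yet generalizes worse than the simple line, which is the same failure mode, writ small, that contributes to hallucinated outputs in far larger models.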

Preventing issues with generative, open-source technologies can prove challenging. Some notable examples of AI hallucination include:

Google’s Bard chatbot incorrectly claiming that the James Webb Space Telescope had captured the world’s first images of a planet outside our solar system.
Microsoft’s chat AI, Sydney, admitting to falling in love with users and spying on Bing employees.

Meta pulling its Galactica LLM demo in 2022, after it provided users inaccurate information, sometimes rooted in prejudice.

While many of these issues have since been addressed and resolved, it’s easy to see how, even in the best of circumstances, the use of AI tools can have unforeseen and undesirable consequences.
