What are AI hallucinations?

AI hallucinations occur when a large language model (LLM), such as Google Bard, ChatGPT, or Bing AI Chat, presents false information as fact.

Another term for an AI hallucination is confabulation.

AI hallucination example: an image generated by DALL-E 2 from the text prompt “1960’s art of cow getting abducted by UFO in midwest.” Source: Wikipedia

Why do AI hallucinations occur?

LLMs are designed to generate unique, fluent, and coherent text, but they lack the ability to apply logic to their content. On its own, an AI model cannot determine whether the text it generates is plausible or factually accurate.

However, AI hallucinations are not always plausible. Sometimes they are nonsensical. There is no obvious way to determine what causes hallucinations on a case-by-case basis.

What causes AI hallucinations?

Data quality

AI hallucinations can happen because of inaccurate information in the source content. LLMs rely on large data sets, but that data can contain errors, biases, or inconsistencies.

For example, the training datasets for ChatGPT and Google Bard consisted of a wide range of internet sources, including Wikipedia articles, books, and other publications.

Generation method

Hallucinations can also stem from the way a model is trained and generates text. For example, a model might be biased toward generic or overly specific words, which influences the information it generates.

Input context

AI hallucinations can happen if the input prompt is unclear, inconsistent, or contradictory. While the previous two reasons for hallucinations are out of the user’s hands, precise user input can help avoid AI hallucinations.

What are the types of AI hallucinations?

AI hallucinations can range from minor inconsistencies to completely fabricated or contradictory information.

There are several types, including:

1. Sentence contradiction is when an LLM generates a sentence that contradicts the previous one.

For example:

Prompt: Write me a birthday card for mom.

Expected AI answer: Happy birthday, mom. I love you.

Hallucinated AI response: I am so happy we are celebrating our first anniversary! To many more. Love, Lucy.

2. Prompt contradiction is when an LLM generates an answer that contradicts your input.

For example:

Prompt: Tell me about the benefits of meditation.

Expected AI answer: Meditation has numerous benefits, including stress reduction, improved focus, and emotional well-being.

Hallucinated AI response: Meditation transcends earthly concerns, unlocking portals to a parallel dimension where unicorns offer wisdom, and your thoughts are glittering butterflies guiding you through the cosmic realm of inner tranquility.

3. Factual contradiction is when an LLM generates an incorrect answer and presents it as fact.

For example:

Prompt: What is the capital of France?

Expected AI answer: The capital of France is Paris.

Hallucinated AI response: The capital of France is Zagreb.

AI hallucination example: example of an AI hallucination. Source: The New York Times

4. Irrelevant or random hallucination is when an LLM generates random information with no connection to the input or the rest of the output.

For example:

Prompt: Can you recommend a good chocolate chip cookie recipe?

Expected AI answer: Certainly! Here is a classic chocolate chip cookie recipe with step-by-step instructions.

Hallucinated AI response: Here is a classic chocolate chip cookie recipe. Today’s temperature in Toronto is –2 degrees.

How can you prevent AI hallucinations?

There are several ways users can minimize the occurrence of AI hallucinations:

  • Use clear and specific prompts – give the model a role to play and add context to guide the LLM toward the intended output (see the sketch after this list) 
  • Give examples – provide several examples of the desired output format to help the LLM recognize the pattern 
  • Tune the LLM’s parameters – many LLMs expose parameters that users can adjust. The temperature parameter, for example, controls output randomness: the higher the temperature, the more random the LLM’s output
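All three techniques can be combined in a single request. The sketch below uses the OpenAI Python SDK purely as an illustration; the model name and prompt contents are placeholders, and other providers expose similar role, example, and temperature controls.

```python
# Sketch: combining a clear role, few-shot examples, and a low temperature
# to reduce the chance of hallucinated output. Assumes the OpenAI Python SDK
# (pip install openai) and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # 1. Clear, specific prompt: give the model a role and extra context.
        {
            "role": "system",
            "content": (
                "You are a fact-checking assistant. Answer only from "
                "well-established facts, and say 'I don't know' if you are unsure."
            ),
        },
        # 2. Examples: show the desired output format so the model can follow the pattern.
        {"role": "user", "content": "What is the capital of Germany?"},
        {"role": "assistant", "content": "The capital of Germany is Berlin."},
        # The actual question.
        {"role": "user", "content": "What is the capital of France?"},
    ],
    # 3. Tuned parameter: a low temperature makes the output less random.
    temperature=0.2,
)

print(response.choices[0].message.content)
```

A lower temperature narrows the model’s sampling to its most likely tokens, which tends to keep answers closer to the training data at the cost of less varied wording.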
