What is AI hallucination and why does it happen?
AI chatbots sometimes produce confident but false statements that sound plausible; this error mode is called 'hallucination'. For example, a chatbot may invent a historical date rather than admit it does not know the answer. According to OpenAI researchers, hallucinations persist because the standard methods used to evaluate language models reward guessing over acknowledging uncertainty.
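To make that incentive concrete, here is a minimal sketch of the arithmetic, assuming a hypothetical accuracy-only scoring scheme (1 point for a correct answer, 0 for anything else, including "I don't know"); it is an illustration of the argument, not code from the OpenAI paper:

```python
def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score under an accuracy-only grading scheme.

    Assumes 1 point for a correct answer and 0 otherwise;
    abstaining ("I don't know") also earns 0.
    """
    if abstain:
        return 0.0       # admitting uncertainty never earns credit
    return p_correct     # a guess earns credit with probability p_correct

# Even a long-shot guess beats abstaining under this scheme:
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0
```

Under such a scheme, a model tuned to maximize its score should always guess, however unsure it is, which is the incentive the researchers describe. Only a scheme that penalizes wrong answers or gives partial credit for abstaining would make admitting uncertainty the better strategy.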