When an AI model generates plausible-sounding but factually incorrect or fabricated information, much like a confident student giving a wrong answer that sounds right.
In a widely reported incident, an AI chatbot confidently cited a court case that didn't exist, a classic hallucination that caused serious problems when the fabricated citation was submitted in a legal brief.
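One common mitigation is to verify anything the model cites against a trusted source before relying on it. The sketch below is a minimal illustration under stated assumptions: `VERIFIED_CASES`, `CITATION_RE`, and `flag_unverified` are hypothetical stand-ins for a real legal database and a real citation parser, and the sample citations are invented for the example.

```python
import re

# Hypothetical stand-in for a lookup against a real, trusted legal database.
VERIFIED_CASES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

# Naive pattern for citations shaped like "X v. Y, ... (year)";
# a production parser would need to be far more robust.
CITATION_RE = re.compile(r"[A-Z][\w'.-]* v\. [^()\n]+\(\d{4}\)")

def flag_unverified(model_output: str) -> list[str]:
    """Return cited cases that do not appear in the trusted index."""
    citations = CITATION_RE.findall(model_output)
    return [c for c in citations if c not in VERIFIED_CASES]

# Example: one verified case plus one invented case a model might hallucinate.
output = (
    "As held in Brown v. Board of Education, 347 U.S. 483 (1954), and in "
    "Smith v. Jones, 123 F.3d 456 (1998), the claim fails."
)

for citation in flag_unverified(output):
    print(f"UNVERIFIED (possible hallucination): {citation}")
```

A check like this only catches citations absent from the index; it cannot confirm that a real case actually says what the model claims, so human review remains essential.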