Hallucination is when an AI model generates plausible-sounding but factually incorrect or fabricated information, like a confident student giving a wrong answer that sounds right.
An AI chatbot confidently cited a court case that didn't exist, a classic hallucination that caused real problems when the fabricated citation was included in a legal brief.
Hallucination is an AI/ML behavior (a model producing incorrect but plausible output), not a cloud service. All major clouds offer tools to reduce hallucinations (e.g., grounding with retrieval, evaluation, guardrails, and monitoring), but there is no direct one-to-one service named “Hallucination.”
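A minimal sketch of the grounding-with-retrieval idea mentioned above: retrieve relevant documents first, then instruct the model to answer only from that context or admit it doesn't know. The corpus, `retrieve()`, and the final model call are illustrative stand-ins, not a specific cloud service or SDK.

```python
# Sketch of "grounding with retrieval" to reduce hallucinations.
# CORPUS, retrieve(), and the model call are hypothetical placeholders.

CORPUS = {
    "doc1": "Roe v. Wade, 410 U.S. 113 (1973), addressed abortion rights.",
    "doc2": "Marbury v. Madison, 5 U.S. 137 (1803), established judicial review.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Toy keyword overlap retrieval; a real system would use a vector
    or keyword index."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to the retrieved context so it says
    "I don't know" instead of inventing an answer."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using ONLY the context below. "
        'If the context does not contain the answer, reply "I don\'t know."\n\n'
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The grounded prompt would be sent to whichever model client you use.
    print(build_grounded_prompt("Which case established judicial review?"))
```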