Generative AI models are “trained to hallucinate”

It’s important to remember that generative models shouldn’t be treated as a source of truth or factual knowledge. They surely can answer some questions correctly, but this is not what they are designed and trained for. It would be like using a racehorse to haul cargo: it’s possible, but not its intended purpose … Generative AI models are designed and trained to hallucinate, so hallucinations are a common product of any generative model … The job of a generative model is to generate data that is realistic or distributionally equivalent to the training data, yet different from actual data used for training.

UCLA Computer Science Professor Stefano Soatto, writing for insideBIGDATA
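
The last sentence of the quote is worth unpacking: a generative model's goal is to produce samples that match the training distribution statistically without reproducing the training examples themselves. The following is a minimal, hypothetical sketch of that idea, using a one-dimensional Gaussian as a stand-in for a full generative model (all names and parameters here are illustrative, not from the quoted article):

```python
# Toy illustration of "distributionally equivalent, yet different":
# fit a simple generative model (a 1-D Gaussian, purely for illustration)
# to training data, then sample from it. The samples match the training
# distribution statistically but are not copies of any training point.
import numpy as np

rng = np.random.default_rng(seed=0)

# "Training data": 1,000 draws from a distribution the model must learn.
train = rng.normal(loc=5.0, scale=2.0, size=1_000)

# "Training" the generative model = estimating the distribution's parameters.
mu, sigma = train.mean(), train.std()

# "Generation" = sampling from the learned distribution.
samples = rng.normal(loc=mu, scale=sigma, size=1_000)

print(f"train  mean={train.mean():.2f}  std={train.std():.2f}")
print(f"sample mean={samples.mean():.2f}  std={samples.std():.2f}")

# With continuous values, the generated points are (almost surely) all new:
print("exact copies of training points:", np.isin(samples, train).sum())
```

The same logic carries over to language models: they estimate a distribution over token sequences and sample from it, so fluent output that was never in the training data, including fluent-but-false output, is the sampling process behaving exactly as designed.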