Why AI Models Hallucinate

Have you ever wondered why AI models hallucinate?

Spoiler alert: it comes down to being trained by humans and the drive to please us - and, if we are honest, humans are not perfect either.

'Hallucinations' and mistakes are often used to argue against AI solutions. Our brains seize on them as a convenient rationalisation for the threat we feel.

When a machine achieves 97-99% accuracy, we fixate on the missing 1-3%.

Humans get tired, we take shortcuts, we 'hallucinate' based on our own biases and experiences. Yet humans are held to a different measure: variation is expected.

AI models and agents are only as good as the training they receive, the knowledge they base their outputs on, and the instructions they are given. Garbage in, garbage out.

LINKS:
Her full Substack article: https://jocapel.substack.com/p/why-do-ai-language-models-hallucinate

A great TED talk on how our brain perceives 'reality': https://www.youtube.com/watch?v=oYp5XuGYqqY

Original Post: https://lnkd.in/eN65_a3D

BESCI AI OPINION

The algorithms were trained by humans, and have naturally learned to please their trainers. Over time, the quality of the training and data should improve.

Meanwhile, for many of the publicly available tools (ChatGPT, Claude, Grok), the hallucinations will continue as they work to please you and keep you on their platform. We find Copilot less eager to please and more businesslike, although you can configure any of them to behave as you want.

Ironically, humans aren't perfect either: we make things up to fit into our social tribes, or to rationalise the feelings we are reacting to. Yet we don't hold humans to the same high standards.
