Does AI make things up?
A recent experiment by researchers at the University of Gothenburg shows how quickly false data can enter the narrative.
The researchers invented a fake disease (bixonimania), complete with obviously bogus papers published as preprints.
Within weeks AI systems confidently described the condition as real, offering explanations and guidance.
The fabricated condition even made it into a peer‑reviewed article before being retracted.
Large language models don’t verify facts. They reproduce patterns.
When misinformation is wrapped in credible-looking language, it flows smoothly past AI, scientific, and even human checks and balances.
It passes the smell test.
To be clear: AI didn't make up the condition, but it confidently amplified the false information, lending it credibility.
When used for good, AI can democratise and share information widely, with context and in formats easy to understand.
When used for bad, false narratives can easily enter the pipeline and gain credibility. It becomes hard to tell what is real and what is not.
Just like the news. Look for credible sources. Stay alert to the possibility you are being scammed. Especially if it looks too good to be true.
Even better: curate your own trusted data.
SOURCE
https://www.nature.com/articles/d41586-026-01100-y
BESCI AI OPINION
Garbage in, garbage out. The training data used by many AI platforms is drawn from the internet, or from assumptions. Treat the advice you are given as directional, and confirm with an expert before making life-changing decisions based on it.