Your LLM is biased too.

Researchers in Munich set out to see whether 20 LLMs displayed the well-established cognitive biases that humans have. They selected 30 common biases and ran 30,000 tests to see how the models fared.

Their presumption was that human biases creep into LLMs through training procedures, training data and linguistic structures, and the results proved them right.

Their tests, run in Autumn 2024, gave insight into how biased each model is.

We focused on the results for the most commonly used LLMs in their trials: GPT-4o, Llama 3.1 405B, Claude 3 Haiku and Gemini 1.5 Pro. The figure in brackets is each bias's average score across these models.

Top 5 Biases (negative effect)
Survivorship Bias – focusing on winners, ignoring failures (0.68)
Information Bias – seeking more information even when it won't change the decision (0.64)
Anchoring – relying too heavily on first information (0.52)
Framing Effect – reacting differently depending on presentation (0.46)
Bandwagon Effect – doing what others are doing (0.46)

Your LLM is likely to be swayed by glossy information that signals success, to seek more information well past the point of diminishing returns, to anchor on whatever it finds first, to be led by the framing a user or dataset supplies, and to be influenced by social proof.

The result you get may look confident, data‑informed, aligned and fast ...

But it’s actually driven by:
Selective evidence (survivorship bias)
Activity over rigour (information bias)
First impressions (anchoring)
Emotional framing (framing effect)
Social safety (bandwagon effect)

Your LLM falls for the same traps that we humans do.

It doesn't fall for all of them, though.

Bottom 3 biases (positive effect)
Planning Fallacy – underestimating time, cost, and complexity (0.10)
Status‑Quo Bias – preferring things to stay the same (0.55)
Disposition Effect – selling winners too early, holding losers too long (0.80)

Your LLM is more likely to use data and historic patterns than a finger in the air.

It isn't wedded to the past: it is open to doing things differently, and it will look for evidence and take a pragmatic approach before scaling a change.

What can we do about these biases?

AWARENESS: Awareness is the first step. Recognise the weaknesses and mitigate them in the way you interact, prompt or build agents.
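As a minimal sketch of what per-prompt mitigation could look like, here is a hypothetical helper that prepends a debiasing checklist to a prompt. The checklist wording is my own illustration of counter-instructions for the five biases above, not something prescribed by the paper.

```python
# Illustrative counter-instructions for the top five biases; the
# wording is an assumption, not taken from the study.
DEBIAS_CHECKLIST = (
    "Before answering:\n"
    "1. Consider failures as well as successes (survivorship bias).\n"
    "2. Only seek information that could change the answer (information bias).\n"
    "3. Do not anchor on the first figure or claim you encounter (anchoring).\n"
    "4. Restate the question in neutral terms before reasoning (framing effect).\n"
    "5. Ignore how popular an option is; judge it on evidence (bandwagon effect).\n"
)

def debias(prompt: str) -> str:
    """Prepend the bias checklist to a user prompt."""
    return f"{DEBIAS_CHECKLIST}\nQuestion: {prompt}"

print(debias("Should we copy the growth strategy of last year's breakout startups?"))
```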

BLIND SPOTS: Once your LLM has done its reasoning, ask it to tell you its blind spots.
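A sketch of that follow-up turn, assuming the openai Python SDK (pip install openai) and an OPENAI_API_KEY in your environment; the model name and the critique wording are illustrative.

```python
from openai import OpenAI

client = OpenAI()
model = "gpt-4o"  # assumed model name; use whichever model you are auditing

# First turn: get the model's answer to the task.
messages = [{"role": "user", "content": "Recommend a launch strategy for our new app."}]
answer = client.chat.completions.create(model=model, messages=messages)
messages.append({"role": "assistant", "content": answer.choices[0].message.content})

# Second turn: ask the model to audit its own reasoning.
messages.append({"role": "user", "content": (
    "List the blind spots in your answer: evidence you ignored, "
    "framings you accepted uncritically, and failure cases that "
    "would contradict your recommendation."
)})
critique = client.chat.completions.create(model=model, messages=messages)
print(critique.choices[0].message.content)
```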

GUARDRAILS: At an organisational (or account) level, insert default instructions to reduce the impact of the biases.
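A sketch of such a guardrail under the same assumptions: a standing system message, written here to target the five biases above, attached to every request. The instruction text is an assumption, not a prescription from the paper.

```python
from openai import OpenAI

# House rules attached to every request; wording is illustrative.
GUARDRAIL = (
    "House rules for all answers: cite failures alongside successes; "
    "stop gathering information once it cannot change the conclusion; "
    "flag any number you may be anchoring on; restate loaded questions "
    "neutrally; never justify a choice by its popularity alone."
)

client = OpenAI()

def ask(prompt: str, model: str = "gpt-4o") -> str:
    """Send a prompt with the default guardrail as the system message."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Which vendor should we pick? Everyone seems to be choosing Vendor A."))
```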

SOURCE
https://arxiv.org/pdf/2410.15413

BESCI AI OPINION
I found this paper fascinating and, if I am honest, a bit disturbing, although not surprising.

Our LLMs have been taught by humans and the language we use. They have picked up our bad habits as well as our good ones.

If you love cognitive biases, then you will love this interactive codex:
https://bias.wiki/. It is based on work originally done by Buster Benson.

This was the big aha moment in the autumn that led to BeSci AI: the realisation that the skills of applying behavioural insights apply as much to an LLM and how it works as to an organisation.

Understanding these biases and dynamics helps you prompt better and get to the outcomes you are looking for.

If you would like to explore this more, then join us at
www.besciai.com, where we use BeSci to inform our AI work. It is fascinating.
