Are we cognitively surrendering to AI?

A fascinating paper by the team at the Wharton Behavioural Lab, on how our brains are being shaped by the accessibility of AI.

More than cognitive outsourcing, like using a calculator to help you reach a decision, they talk about cognitively surrendering, where you 'abdicate critical evaluation, relinquish cognitive control and adopt the AI's judgment as your own'.

If you rely on your GPS to get you to a destination, you abdicate the directional decisions to your SatNav.

The more AI 'helps' us, the more likely we are to cognitively surrender, adopting AI outputs with minimal scrutiny, overriding our gut feel (System 1) and deliberation (System 2).

The authors argue that AI creates a System 3, where the thinking happens outside our brain. In their studies (n=1,372; 9,593 trials), they found that participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3.

This has significant implications for human decision making: as AI reshapes the way humans reason, we risk losing critical thinking and ceding autonomy.

The Wharton team position System 3 as an active participant in cognitive processes, used to supply fast answers (pre‑empting or suppressing System 1) and to circumvent effortful thinking (short‑circuiting System 2) as well as building 'scaffolding' to support your reasoning.

They define System 3 through four lenses: external, automated, data-driven, and dynamic.

EXTERNAL
It resides externally, in artificial infrastructures (models, embedded algorithms, LLMs), and is not constrained by human capacity.

AUTOMATED
Using statistical, rule-based, or generative algorithms, it engages in pattern recognition, prediction, summarization, and synthesis performed at scale and speed beyond human capacity.

DATA DRIVEN
Founded on large-scale training data and feedback. Its performance and accuracy reflect the underlying data distribution, including its gaps and biases.

DYNAMIC
It does not function in isolation. It is interactive, responding to human and environmental inputs in real time.

They recognise that in some domains, such as fact retrieval, language translation, or complex pattern recognition, System 3 can outperform System 1 and System 2 in both speed and accuracy.

However, these come at a cost: System 3 may lack affect, situational judgment, and normative reasoning grounded in human experience.

*** Note: this is likely to be a product of the data the model has been trained on, not the potential.

When we are cognitively overloaded, we are wired to go for easy, and AI often provides easy.

Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

BESCI AI OPINION

SO WHAT:

Do we just give in and accept that cognitive surrender is going to happen, or do we become diligent about designing flows, processes, and checkpoints to make sure that it doesn't?

The two eyes on the data, or the two fingers on the red button?

We have to remember that humans aren't perfect either; we have our own political agendas based on our own needs.

One of my favourite prompts after getting any feedback from AI is 'what are my blind spots?'. Many won't ask, and don't want to know.

It takes cognitive capacity to engage full reasoning and critical thinking.

I would love to say that AI will give us greater cognitive capacity, like the heady days of automation that sold us a life on the beach while the machines did our jobs.

In reality, our feed will continue to be as full as the capacity we have, no respite.

It is time to get smart, and for those that specialise in organisational behaviours to create the interrupts, the disruption patterns, that mean you don't sleepwalk into average.


Cognitive surrender, the loss of critical thinking and of wisdom, is part of the ongoing rhetoric about AI, and this paper gives us something more than an uneasy feeling in the pit of our stomachs.

Under pressure, AI became the dominant shortcut. Like a cognitive bias, the participants surrendered their reasoning.

Which is a worry when you look at the quality of the training data used in most LLMs. The speed of surrender, without critical evaluation, because it is easy.

AI boosted confidence too. Interestingly, the confidence stayed high even when the AI was wrong, like a gambler doubling down on a losing betting strategy. Confidence is often read as a signal of reliability and competence, and that trust can be misplaced.

Those that trusted the AI were more likely to use it, even when it was wrong, and were blinded to its faults. If you have ever met someone whose views are entrenched, it is a similar pattern. The effort to change their mental model is simply too great. Cognitive overload drives a defensive response.

If you think of the impact of a System 3 running in your organisation: it will reshape decision making, suppress critical thinking, and boost confidence, possibly wrongly.

Will you give in to AI?

One of the concepts that comes out strongly in the paper is the risk of outsourcing your reasoning, your cognitive thinking.

It feels so easy, so attractive, and much of the time it might be right.

Then you have to step back and remember the training data - the internet.

Not peer reviewed papers, or well established practices, but generally the internet, and not the good parts.

Garbage in, garbage out - unless you have trained your own LLM.

We have. It took us seven months last year. At the time it was just an idea, a foundational layer, a big investment, but one we believed would be worth making.

It was an easy choice. Many of the project, change, and leadership questions we put to the AI models came back with out-of-date suggestions that were unlikely to work; in our field, they were dangerous.

How many organisations (a) have the data and (b) are spending their time capturing it? Not just the outcomes, but how they got there, and the context the outcomes need to make sense.

We did. Our clients use it for their change and leadership advice. We sleep well at night, knowing that they are getting good advice.


If you like the data, then the headline would be: People cognitively surrender to AI. A lot.


Across 1,372 participants and 9,593 reasoning trials:
→ People chose to consult AI more than 50% of the time—even though they didn’t have to.
→ When AI was right, accuracy jumped +25 points.
→ When AI was wrong, accuracy collapsed –15 points, below baseline human performance.
→ Even when AI output was confidently wrong, people followed it nearly 80% of the time once they opened the chat.

Confidence increased simply by using the AI, even when the AI was wrong. It becomes a comfort blanket.

This is the risk of cognitive surrender: your mind offloads not just the task, but your agency and scrutiny.
