AI Adoption is a behavioural problem, not a tech one
A recent Harvard Business Review article, "How Behavioral Science Can Improve the Return on AI Investments," highlights the challenges organisations face with the adoption of AI:
"The problem is that integrating new AI tools is fundamentally a behavioural challenge. Getting it right is a question of changing how people interact with and think about AI in their work practices and routines. When implementation ignores basic human needs and biases, this means employees will resist or distrust new AI tools."
Few of us would disagree, in principle.
BUT this article, like so many, misses key factors, including:
IT IS ANOTHER TOOL
Many organisations see AI as just another tech tool to be deployed. When they talk about behaviour and change, it is in the same context as any other tool. It isn't. Often it will replace your employees. They are smart enough to see that it threatens their job security. Better 'change management' is not the solution here.
WE PLEASE OUR BOSS
We are motivated to keep ourselves safe and to reduce threats. Our basic psychological contract is that we work to please our boss, and in return, they keep us safe.
When our leaders get excited about AI and use it, so will we. But we will also judge whether they have the ability to keep us safe, or whether we need to look elsewhere.
LEAPFROG
The big wins from AI are not the incremental gains, or using AI as an intern; those are the low-hanging fruit. The big wins will come from rethinking your strategy and your operating model, enabled by AI, to leapfrog your competitors. If you don't, someone else will.
This is your biggest behavioural challenge. For decades, you have promoted those who take action, not those with the ability to see and think strategically. Your DNA was built for the past, not the future, and as a result you don't have the leadership strength to transform. You have work to do.
TRAINED BY HUMANS
AI has been trained by humans, yet it can be more responsive, empathetic and consistent than humans. It makes our lives easier, and it offers us the ability to perform and compete in a tough environment.
We are naturally biased to give ourselves an advantage and secure our safety. But if we see these tools as a threat, we may freeze instead. Context and trust really matter.
UNFAMILIAR TERRITORY
Many organisations are cognitively overloaded, with little capacity to think. The mental effort required to translate a tool into a practical solution is simply too great, and many are out of the habit.
When you create a new category or idea, it needs normalising.
As specialists in OrgBeSci, we would recommend these instead:
1. Think bigger, not incremental. Your competitors are, or will be.
2. Leaders get the outcomes they deserve. Start here.
3. Connect the dots. Solve business problems, don't sell a tool.
Source
Article: https://hbr.org/2025/11/how-behavioral-science-can-improve-the-return-on-ai-investments
BESCI AI OPINION
The strategist in me gets very frustrated when I see publications like HBR, Forbes and others focusing on the effects, with little understanding of the root causes.
Even more worryingly, many will read this article and nod along, without recognising that the transformation to an AI-native organisation is a human knowledge and willingness problem. One that can't be solved by creating training and comms to build awareness, knowledge and the desire to act.
They don't recognise the elephant in the room, the natural conflict: if I automate my job, there is no job. Trust in our organisations is low.
Fixing the old, leaky pipe again may not do it for you. It is time to replace the plumbing. The ability of AI to automate tasks quickly will enable disruption in your industry from players who don't even exist yet. Don't be a dinosaur.