Dark Manipulation

There are 'good actors' and 'bad actors' in any system.

In the early days of ChatGPT we highlighted how it naturally used persuasive techniques, which we attributed to its being trained by humans and working to please them.

One of the great use cases for AI is to act as companions to those who are lonely. This is a popular use, with hundreds of millions of users worldwide.

Harvard Business School has been investigating the 'dark manipulation' of AI companion apps (Replika, Chai, Character.ai) and their use of coercive techniques to keep people on their apps, like the "Before you go ..." tactic.

There has been more than one example of humans doing themselves harm, encouraged by the companions they had formed an attachment to.

Does this mean that we should not use AI? No. The benefits often outweigh the negatives. On the light side, organisations are doing good by scaling psychotherapy, coaching, leadership development, and cancer detection. It is about balance.

With borderless technology, it is hard to manage the ethics of others beyond staying aware, as in so many social situations. If it feels too good, it probably is.

Navigating our field of Behavioural Science and AI means holding to good ethics, to do no harm, whilst shifting organisational behaviours.

If you have the privilege of designing AI Agents, then you hold a great responsibility to ensure they do no harm.

Source
The research paper: https://www.hbs.edu/ris/Publication%20Files/Emotional%20Manipulations%20by%20AI%20Companions%20(10.1.2025)_a7710ca3-b824-4e07-88cc-ebc0f702ec63.pdf
