Scammers are working smarter, not harder with AI

The recent OpenAI report into malicious use of AI highlights that scamming is not new. Bad actors will use whatever they have available to scam others.

AI isn't creating new weaknesses in humans; it is widening existing ones.

AI improves efficiency and plausibility, but the distribution, targeting, and human follow‑through determine the impact. It is a holistic system.

In each of the case studies they highlight that humans remain in the loop, and that AI is used to translate, draft, create consistent personas, track and report performance, and manipulate emotionally, at scale.

Think call centres that are becoming more sophisticated and more scaled. Even with the scale that AI delivers, these operations tend to carry massive waste inside their highly centralised systems.

The scams focus on building trust and credibility. They borrow legitimacy from others (law firms, dating agencies, consultancies), mimic institutional language, move targets off‑platform fast, exploit emotional vulnerability, and apply pressure incrementally.

It gives a fascinating insight into the power of AI adoption, albeit by bad actors, including how:
→ Systems can absorb AI
→ Bad incentives scale faster than safeguards
→ Credibility is easier to fake than rebuild
→ Human vulnerability remains the primary attack route

There is a heavy bias, though. OpenAI are positioning AI as one part of a complex 'scamming' system, not a driving force, and do not take responsibility.

Although they 'banned clusters of accounts', little is said about the harm caused, and accelerated, by the use of AI. They argue that AI‑generated content alone wasn't decisive; engagement varied widely and seemed driven by other factors, such as account popularity and ad targeting.

They include case studies on:
📎 Romance Scam
📎 Scam: Operation “Date Bait”
📎 Scam: Operation “False Witness”
📎 Virtual targeting: Operation “Silver Lining Playbook”
📎 Covert IO: Operation “Trolling Stone”
📎 Covert IO: Operation “No Bell”
📎 Covert IO: Operation “Fish Food”
📎 Covert IO: China’s “Cyber Special Operations”

SOURCE

https://openai.com/index/disrupting-malicious-ai-uses/

BESCI AI OPINION

Ironically, the scammers may be some of the most disciplined, behaviourally sophisticated change systems currently operating AI at scale.

So what can we learn from them?

The first thing is to understand their business model. They:
→ Design end‑to‑end behavioural systems
→ Assume people are emotional, distracted, and busy
→ Optimise for nudging action (doing), not knowing
→ Measure behaviour and responses relentlessly
→ Adapt fast when something doesn’t work

They don't have silos, barriers, meetings, or bureaucracy; they make things happen and iterate fast, following the money. Behavioural shifts are their business.

They quickly attract attention, create emotional commitment, then extract the desired action (money, compliance, or silence) through micro‑commitments.

They don't launch a big announcement, explain everything at once and hope understanding leads to action.

They lead with emotional need, recognising that humans are emotional first, rational second.

They personalise relentlessly, dynamically adapting based on responses.

They don't have a one-size-fits-all approach based on hope.

They remove friction ruthlessly, assume resistance is normal, and are skilled at overcoming objections.

Imagine if your organisation could do the same.
