The $25m AI-Enabled Scam

In February 2024, the South China Morning Post reported that the Hong Kong office of a multinational company had lost HK$200 million (US$25.6 million) to a sophisticated scam using deepfake technology.

Deepfakes use AI tools to create highly convincing fake videos or audio recordings, making it hard for individuals and organisations to distinguish real content from fabricated content. With AI, these fakes can even become interactive, like avatars.

The scammers convincingly replicated the appearances and voices of targeted individuals using publicly available video and audio footage.

The scam started when an employee in the finance department of the company's Hong Kong branch received a message they believed to be from the company's UK-based chief financial officer, instructing them to execute a secret transaction.

Despite initial doubts, the employee was convinced enough by the presence of the CFO and others on a group video call to make 15 transfers totalling HK$200 million to five different Hong Kong bank accounts.

The company only realised it had been scammed about a week later.

Advances in technology benefit both their intended users and those who wish to use them for harm. As video recreations improve, it will become harder to spot the fakes.

Scammers have been using audio deepfake technology to defraud people by impersonating loved ones in trouble, or by pretending to be someone they are not. 'Scam factories' have been set up with these conversations as their product, and the workers are set targets, like any other service business.

SO WHAT:
This adds to a worrying trend in which trust in video and audio messages and streams is weakened, a problem that now extends into the organisational sphere.

Many of us don't answer our phones unless we know who is calling.

Trust will become harder to establish in a technology-led global marketplace where video conferencing and virtual working are normalised. New forms of trust will have to be developed.

Source
Original article: Deepfake scammer walks off with US$25m in first of its kind AI heist: https://arstechnica.com/information-technology/2024/02/deepfake-scammer-walks-off-with-25-million-in-first-of-its-kind-ai-heist/

BESCI AI OPINION

This really raises the question: who can we trust? Our trust radar depends on the context.

Things we see on social media may already generate healthy scepticism, a sense that they are 'too good to be true'.
