Synthetic Realities: Managing Deepfakes and Misinformation Risks
- askdr
- Jun 4
- 3 min read
A few days ago, I spoke on the topic of Synthetic Realities: Managing Deepfakes and Misinformation Risks. Below are some of the key takeaways from the talk.
Artificial intelligence tools are making it difficult for audiences to trust the content that they see online. As AI-generated content grows in volume and sophistication, online images, videos, and audio can be used by bad actors to spread disinformation and perpetrate fraud. Social media networks have been flooded with such content, leading to widespread scepticism and concern.
Last year the Deloitte Centre for Financial Services polled more than 1,000 executives on their experiences with deepfake attacks. Among respondents familiar with or using generative AI, 68% reported concern that synthetic content could be used to deceive or scam them, and 59% said they have a hard time telling the difference between media created by humans and media generated by AI.
The same study also predicted that generative AI could enable fraud losses to reach US$40 billion in the United States by 2027.
Last year an employee of an unnamed company based in Hong Kong was duped into paying HK$200m (£20m) of the firm's money to fraudsters during a deepfake video conference call.
The fraudsters had downloaded videos in advance and then used artificial intelligence to add fake voices for the video conference.
The fraudster invited the employee to a video conference with many participants. Because the people on the call looked like real colleagues, the employee … made 15 transactions to five local bank accounts as instructed.
Note: there was no hacking involved and no cyber-attack; it was simple social engineering, using technology to circumvent security barriers.
What are the consequences of a deepfake scam?
· Loss of trust from employees and vendors
· Compromise of proprietary data
· Financial loss
· Reputational damage
What are the red flags?
· Exploiting hierarchy: pretending to be a senior executive and targeting a junior employee
· Expressing urgency and demanding confidentiality: "don't tell anyone, this is a special project, only you know about it"
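As an illustration (my own sketch, not from the talk), these red flags could be encoded as a simple rule-based pre-screen on payment requests. Every field name, phrase list, and threshold below is a hypothetical assumption, not a real product or standard:

```python
# Illustrative sketch: score a payment request against the social-engineering
# red flags above. Field names, phrase lists, and thresholds are assumptions.
from dataclasses import dataclass

URGENCY_PHRASES = ("urgent", "immediately", "right now", "asap")
SECRECY_PHRASES = ("confidential", "don't tell anyone", "only you know", "special project")

@dataclass
class PaymentRequest:
    requester_rank: int      # claimed seniority of requester: 1 = junior ... 5 = C-suite
    recipient_rank: int      # seniority of the employee receiving the instruction
    message: str             # free-text instruction accompanying the request
    new_beneficiary: bool    # paying an account never used before?

def red_flags(req: PaymentRequest) -> list[str]:
    """Return the list of red flags triggered by a request."""
    flags = []
    text = req.message.lower()
    # Exploiting hierarchy: a (claimed) senior figure leaning on a junior employee.
    if req.requester_rank - req.recipient_rank >= 3:
        flags.append("hierarchy gap")
    if any(p in text for p in URGENCY_PHRASES):
        flags.append("urgency")
    if any(p in text for p in SECRECY_PHRASES):
        flags.append("secrecy")
    if req.new_beneficiary:
        flags.append("new beneficiary")
    return flags

req = PaymentRequest(
    requester_rank=5, recipient_rank=1, new_beneficiary=True,
    message="This is a special project, only you know about it. Pay immediately.",
)
print(red_flags(req))  # multiple flags trigger: escalate before paying
```

Any request that trips several flags at once, like the Hong Kong case above would have, is a candidate for mandatory out-of-band verification before any money moves.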
Incident Steps
Having a dusty policy on a shelf doesn't work: you need training, and people need to know how to deal with an incident and what tools are available. There is only a finite window in which to respond, so you need to:
· Capture digital evidence
· Stop the funds from moving
· Alert the bank immediately
Assemble:
· IT
· Legal
· Comms
· Key decision-makers
From a legal perspective, you can obtain a court order to freeze assets, and it can even be a worldwide freezing order.
Third-party disclosure orders.
These are orders against organizations such as banks and internet providers compelling them to provide details such as account-opening information, addresses, and even passport copies. This way you can build a recovery plan.
Mitigating damage
Do the basics well!
· Don’t click on unknown links, and don’t open attachments from sources that you do not trust.
· Have a blame-free reporting mechanism, and a line to experts where you can ask simple questions, such as ‘hey, can I confirm this with you . . .’
· Can one junior employee instruct an accounting department to make 15 transactions?
· Companywide training is great, but does the message get across? Try breaking the training down by department, or even limiting it to those in privileged positions, such as finance.
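The question about one junior employee making 15 transactions points at a concrete control: dual approval. As a hedged sketch (the thresholds and role names are my own assumptions, not from the talk), a payment system could refuse to release a batch of transfers on a single person's say-so:

```python
# Illustrative dual-approval rule: no single employee can release a large or
# multi-transfer batch alone. All thresholds here are hypothetical assumptions.

APPROVAL_THRESHOLD = 10_000   # single-approver limit per batch (assumed)
MAX_SOLO_TRANSFERS = 1        # more than one transfer always needs a second approver

def can_release(amounts: list[float], approvers: set[str]) -> bool:
    """Return True if the batch of transfer amounts may be released."""
    needs_second = (
        sum(amounts) > APPROVAL_THRESHOLD
        or len(amounts) > MAX_SOLO_TRANSFERS
    )
    required = 2 if needs_second else 1
    # Approvers are held in a set, so they are guaranteed to be distinct people.
    return len(approvers) >= required

# One junior employee alone cannot push through 15 transfers...
assert not can_release([50_000.0] * 15, {"junior_clerk"})
# ...but a routine single payment under the threshold can go through solo.
assert can_release([2_500.0], {"junior_clerk"})
```

A control like this would have forced a second, independent person into the loop in the Hong Kong case, regardless of how convincing the video call was.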
Finally, technology is used to create deepfakes, so you need technology to combat them. If you work in a sector with high transaction volume (not necessarily value), I suggest that having technology in place to detect red flags in real time could minimize the damage that may be caused.