By Khalil Garriott
The proliferation of deepfakes is no longer an emerging risk. It’s here, at our doorstep.
“It’s an absolutely frightening threat,” says John Farley, cyber practice managing director at Gallagher.
At the 2024 ABA Insurance Risk Management Forum, Farley and Future Point of View CEO Hart Brown co-presented a learning lab on deepfakes. Brown delved into the cognitive biases deepfakes exploit, the ethical considerations they raise and the ease with which they can be created. Deepfake AI involves manipulating facial appearances and voices to generate images, audio clips and videos that seem convincing — but are hoaxes.
Incidents just this year, such as those surrounding Taylor Swift (images) and President Biden (audio), have grabbed big headlines. But celebrities aren’t the only victims of deepfakes. In Maryland, for example, a high school principal’s voice was manipulated to make racist comments. In Hong Kong, scammers stole more than HK$200 million after an employee attended a purported video meeting with the company’s CFO and other colleagues, all of whom were deepfakes. Convinced that the CFO’s request to conduct a secret transaction was valid, the employee sent the payment, only to learn later that the video and audio were fabricated.
“This can happen to every single one of you at any moment, and there’s very little you can do about it,” Farley says. Using free manipulation software that requires little practice or expertise, criminals have introduced a new and disturbing wrinkle in today’s cybersecurity landscape.
It’s a serious threat that everyone needs to understand, Farley and Brown emphasize. Deepfakes are created to make it appear that people did or said things they never did or said; bad actors use the technology to conduct financial crimes, influence political elections, launch misinformation campaigns and cause reputational harm to people and organizations.
Threat actors exploit the technology in many ways, which compounds the risk posed by its recent proliferation. It doesn’t take much for the bad actors to strike, and after they do, removing the content is arduous (if even possible).
“It’s really not that complicated once you have a video clip of somebody,” Brown says.
Banks face the deepfake threat in multiple dimensions. Hackers can use deepfakes in attempts to circumvent voice-based authentication systems. As the Hong Kong example shows, deepfakes can also be used to fool people directly. To help bankers avoid being duped by a deepfake, Farley and Brown offer a practical checklist: Is there cognitive dissonance? Does it look professional? Is it low quality? Can it be corroborated? Is the author a reputable person? Are other sites linking to it?
From lost funds and business interruptions to reputational harm and litigation, the short-term and long-term effects of emotionally driven deepfakes are many. “If I was a bad guy, I could deepfake a CEO the day before earnings to move the stock up or down,” Farley says.
“Once the bell has been rung, it’s really, really hard to unring it,” Brown adds.