By Paul Benda
AI has been around for a long time, and banks have been using it for a long time: fraud detection alerts, credit scoring risk assessments, process automation, chatbots. Now, generative AI tools have upped the game. Microsoft has reoriented its Microsoft 365 suite around Copilot, the AI assistant designed to make you more productive. Google now shows AI summaries at the top of its search results. And of course there's ChatGPT, which has transformed digital life since it launched to the public two years ago.

Generating video used to be the expensive province of big movie studios, but now it's available to everyone for free or at low cost. The next step is the rise of autonomous agents. A new ChatGPT tool called Operator lets you assign the agent a task: "Book me a room at a beachside resort in this town three weeks from now."
What do all these advances have to do with fraud? Reputable companies have controls in place. But the bad guys have realized they can leverage these AI capabilities because many of them are released as open source, which means anyone can study the code and build their own fraud AI agents. There's even one called FraudGPT that gives criminals access to these kinds of capabilities. It's basically AI fraud as a service. The outcome is a wave of impersonation scams that are harder than ever to stop.
Take a classic business email compromise scam. One key method to stop it is for the target who gets an email request for a wire transfer to call the source of the request. But now, through cheap AI-based voice cloning, a technology accessible for as little as $5 per month, scammers can fake the requester's voice and make their scams far more convincing.
I recently tested my own voice on one of these platforms, using recordings from ABA podcasts. The result was fairly convincing, and it took only one or two minutes of recordings to produce a realistic voice clone. There are still small irregularities in timing and cadence, but those can be refined, and scammers have the time to do it.
Another generative AI-driven tactic is faking faces with an avatar generator. One AI platform offers a beta feature: an interactive avatar that can join multiple Zoom calls and watch and interact with participants in real time, using your image and your voice. The bad guys are using these impersonation capabilities to get you to do something: send money, change wiring instructions. And they're now authenticating themselves through biometrics, both voice and face.
Add in the capability to generate artificial sound effects and background noise — and even voice effects and inflection that reflect fear, anger or other emotions — and you can see how this technology enhances the classic “grandparent” scam. A scammer can generate a significantly more convincing call from a grandkid in distress who needs cash to get out of a rough situation.
Generative AI gives scammers a whole new toolbox. Understanding how these tools work is the first step to knowing how to stop the frauds.
Paul Benda is EVP for fraud and operational risk policy at ABA and host of the ABA Fraudcast.
TOOLKIT — One tactic families can use to protect themselves from this kind of scam is a family password: an agreed-upon, hard-to-guess word that verifies a request is genuine.