AI-generated messages and images can create realistic impersonations, which enable criminals to launch highly effective frauds at scale.
By Alex Capella and Christopher Reimann
The threat landscape for banks has changed dramatically since the start of 2023. In January, as part of a research project, I spoke to dozens of financial crime compliance officers at leading U.S. banks, and they told me that real-time digital payments, cybercrime and fraud would be the top financial crime threats for 2023/24. No one mentioned generative AI or ChatGPT.
But it is now a top threat. ChatGPT use has grown rapidly since the platform launched in November 2022. Given the extraordinary capabilities of ChatGPT and other generative AI platforms, we can assume that criminals are already using these tools to:
- Create convincing fake profiles, documents and transactions that can get past even the best-trained compliance professional.
- Develop bots and malware to commit cybercrime.
- Perpetrate scams to obtain people’s bank account information.
And likely, that’s just the tip of the iceberg, because one of ChatGPT’s most valuable capabilities for fraudsters and cybercriminals is the ability to make the fake look real.
For example, AI-generated messages can create highly realistic impersonations, which enable criminals to launch highly effective frauds at scale. Criminals have already been observed using ChatGPT to create legitimate-looking social media personas that gain users’ confidence in order to steal data. FTC chair Lina Khan has warned that ChatGPT could “turbocharge” fraud and scams, making it more difficult for compliance teams to distinguish criminal from legitimate transactions.
The strong link between fraud and financial crime
For many years, fraud and financial crime were treated separately. Fraud was associated with payments, while financial crime was associated with money laundering. But in recent years, banks have integrated these functions for more holistic and effective investigations. As one financial crime compliance officer said to me: “Fraud is the criminal act, and money laundering is the moving of money from that act.”
Fraud, real-time payments, cybercrime and ChatGPT are also linked. In these recent interviews, fraud, particularly involving payments and cybercrime, was most often cited as the No. 1 financial crime threat for 2023 and 2024. According to research conducted by KS&R, financial crimes involving digital payments, account takeover and payments associated with ransomware and cryptocurrencies are up since 2021 among large U.S. banks. Increased use of bots and synthetic identities is behind this rise, with ChatGPT providing a new tool for fraudsters to wreak even more havoc.
Compliance teams are being overwhelmed by new criminal typologies. Real-time payments and increasingly complex sanctions require real-time screening and monitoring, and cross-border payments add further complexity to KYC due diligence. As a result, running names through a screening engine on a nightly basis and batch-checking wire and ACH transactions are now too slow and ineffective. Manual compliance processes hit a point of diminishing returns, where even the best-trained eyes miss anomalies and alerts. And with regulators now putting the onus of preventing and stopping fraudulent transactions on banks instead of customers, it’s clear that banks need to implement automation and AI to keep up with these increasingly sophisticated and automated criminal organizations. The sketch below illustrates the shift from nightly batches to screening in the payment flow.
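As a rough illustration of that shift, the sketch below checks a counterparty name against a watchlist at payment time rather than overnight. The watchlist entries, the matching approach and the 0.85 similarity threshold are purely illustrative assumptions, not a real sanctions list or a tuned screening engine.

```python
# Illustrative sketch: screening a counterparty name inline as a real-time
# payment is processed, instead of in a nightly batch run.
# The list entries and threshold below are hypothetical examples.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrovich Primer", "Acme Shell Trading Ltd", "Jon Doe"]

def screen_name(counterparty, threshold=0.85):
    """Return watchlist entries whose similarity to the counterparty meets the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, counterparty.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 3)))
    return hits

# Called as part of the payment flow, before the transaction clears.
print(screen_name("Acme Shel Trading Limited"))  # near-match would be flagged for review
```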
Financial institutions and the need to fight fire with fire
Digital identity solutions and compliance technology are essential to combating the threat of fraud powered by generative AI. AI/machine learning is particularly useful for analyzing anomalies, links to known criminal entities and activities, and suspicious transaction patterns in real time. Real-time payments are putting pressure on compliance teams to conduct real-time transaction and sanctions screening, and keeping up can be a challenge.
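To make the anomaly-analysis piece concrete, here is a minimal sketch that scores an incoming payment against a customer’s past transactions with an off-the-shelf unsupervised model. The feature set, sample values and model settings are hypothetical assumptions for illustration, not a production fraud model.

```python
# Minimal sketch: scoring an incoming real-time payment for anomalies
# against a customer's historical behavior. Features and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical historical features per transaction for one customer:
# [amount_usd, hour_of_day, days_since_last_payment, new_payee_flag]
history = np.array([
    [120.0, 14, 2, 0],
    [ 95.5, 10, 3, 0],
    [200.0, 16, 1, 0],
    [150.0, 11, 2, 0],
    [ 80.0,  9, 4, 0],
])

# Fit an unsupervised anomaly detector on the customer's past behavior.
model = IsolationForest(contamination=0.1, random_state=42).fit(history)

# Score a new real-time payment before it clears.
incoming = np.array([[4800.0, 3, 0, 1]])      # large amount, 3 a.m., first-time payee
score = model.decision_function(incoming)[0]  # lower scores are more anomalous

if model.predict(incoming)[0] == -1:
    print(f"Flag for review (anomaly score {score:.3f})")
else:
    print(f"Consistent with past behavior (anomaly score {score:.3f})")
```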
ChatGPT can support banks’ fraud detection efforts by quickly analyzing large amounts of data on a person to assess whether a real-time payment is consistent with past behavior and whether the parties involved have already cleared sanctions screening. Checking for behavioral anomalies and prior screening results in this way can meet the need for speed while improving accuracy and reducing false positives.
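As a hedged sketch of what that might look like in practice, the snippet below asks a generative AI model whether a payment fits a customer’s profile. The model name, prompt wording and customer fields are assumptions made for illustration; any such output would need human review and model governance, and, per the caution discussed next, should draw on internal or vetted data rather than at-large internet data.

```python
# Illustrative sketch only: querying a generative AI model about whether a
# real-time payment fits a customer's profile. Model name, prompt and data
# fields are assumptions; output would require analyst review.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

customer_summary = (
    "Customer C-1042: retail checking account, average payment $150, "
    "typically pays utilities and two known payees, last sanctions "
    "screening cleared with no hits."
)
new_payment = "Outgoing real-time payment of $4,800 to a first-time payee at 3:07 a.m."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model choice, not a recommendation
    messages=[
        {"role": "system",
         "content": "You assist a bank fraud analyst. Answer CONSISTENT or "
                    "REVIEW, followed by a one-sentence reason."},
        {"role": "user",
         "content": f"Profile: {customer_summary}\nNew payment: {new_payment}"},
    ],
)
print(response.choices[0].message.content)
```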
In many institutions, though, decision-makers are taking a very cautious approach to using ChatGPT or similar generative AI platforms in the workplace, and with good reason, since these tools can be misused. Security and anti-fraud professionals within banks are evaluating generative AI applications, assessing the risks and ensuring that those who use and oversee the technology fully understand and follow policy. A key risk is relying on models trained on at-large data from the internet, which may be inaccurate and lead to screening and detection errors, and which could also result in GDPR non-compliance. Using internal data or data from established third-party sources can reduce these risks.
To fully leverage ChatGPT and generative AI, banks that have not already integrated their cybersecurity and fraud/financial crime operations benefit from completing that integration. Generative AI such as ChatGPT raises the threat level for financial institutions, making fraud more difficult to detect and easier for criminals to perpetrate at scale. But ChatGPT is a double-edged sword for criminals: as powerful as it is in the hands of scammers, generative AI properly implemented in banks can also be a great defensive weapon to root out and stop financial crime.
Alex Capella is an associate at KS&R. Christopher Reimann is a former VP and principal of KS&R.