‘You’ve got to start thinking like a scammer, because the bad actors just got handed a huge gift’
By Elizabeth Judd
ChatGPT’s launch on Nov. 30, 2022, marked the end of an era when screwball capitalization and a flagrant disregard for subject-verb agreement were tipoffs that an email might be part of a phishing scam.
Generative algorithms such as the one driving ChatGPT, which can quickly produce slick new content, whether text, images or other simulations, are now readily available at no cost, a development that should give every banker pause, says Eva Velasquez, president and CEO of the Identity Theft Resource Center in El Cajon, California.
“You’ve got to start thinking like a scammer because the bad actors just got handed a huge gift,” she says.
The numbers tell a compelling story. Even before the arrival of ChatGPT, phishing attacks had been growing at a rate of 150 percent annually since 2019, according to the Anti-Phishing Working Group’s fourth-quarter 2022 Phishing Activity Trends Report. The report identifies “financial institutions” as the most heavily targeted of all industries, receiving 27.7 percent of phishing attacks, up from 23.2 percent in the third quarter of 2022.
“Banks can’t afford to not get this,” says Barb MacLean, SVP and head of technology operations and implementation for the $3 billion asset Coastal Community Bank in Everett, Washington.
“Customers trust banks to do the best they can on their behalf. If there’s a new tool or mechanism that the nefarious actors are figuring out a way to leverage, we’ve got to be counteracting that. We can’t be blind to the reality in which we work today.”
Expect deepfakes
“These large language models have been built to give output that seems human,” explains MacLean. “People who may not be native English speakers have the potential to use these models to generate what seems like it could be coming from a native speaker—because that’s the data set [the models were] trained on.”
As fraudsters size up the possibilities, bankers and their customers are assessing risks.
“You can’t just assume that a perfectly worded email or phishing attempt—just because it has the right bank logo—is okay,” says John Buzzard, lead fraud and security analyst for Javelin Strategy & Research. “You’ll have to dig deeper than that.”
Phishing scams may already be more difficult to detect, but that’s only the beginning. In 2023, AI can unearth “a new type of golden data” that will let bad actors perpetrate more ambitious schemes, says Peter Cassidy, secretary general of the not-for-profit Anti-Phishing Working Group in Lexington, Massachusetts.
A phishing ring might, for instance, use AI chatbots to determine which specific branch an individual patronizes. “Imagine the power of your phone ringing, and it’s a manager claiming to be from your specific branch,” Cassidy says.
Advancing from “silly-looking, badly composed emails” to phone calls with user information means that more of these scams will succeed, he predicts.
Cassidy anticipates elaborate ruses ahead, using “deepfakes,” which Merriam-Webster defines as “an image that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”
Even without the skills of a James Bond, a scammer could access a bank executive’s remarks at an industry conference that were later posted on YouTube. That YouTube clip could then be sampled, and the executive’s voice cloned.
“Ten years ago, [voice cloning] would have been incredibly difficult,” says Cassidy. “Now non-experts can teach themselves how to use these AI tools very quickly.”
What Cassidy calls “the customization of phishing attacks” raises the fraud threat to new heights.
This, he says, is the “beginning of an epoch in which highly personalized phishing attacks—using deepfaked cloned voices and customers’ personal data and executives’ personal data drawn from many sources—may well be as common as Viagra spam was 20 years ago.”
The best defense
Even before 2023, when new generative AI tools hit PCs and mobile phones everywhere, fraud was on the rise.
U.S. consumers reported losing almost $8.8 billion to fraud in 2022, an increase of more than 30 percent over the previous year, according to Federal Trade Commission data. In 2022, says the FTC, banks and lenders reported 58,574 separate incidents of fraud, a 4.6 percent increase over 2021.
A 2022 LexisNexis Risk Solutions study of U.S. and Canadian financial institutions found that the true cost of fraud is far higher than the face value of losses incurred: for every one dollar of fraud at a U.S. bank, the bank actually lost $4.36.
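At that multiplier, a bank that absorbs $1 million in face-value fraud losses actually bears roughly $4.36 million once associated costs, such as investigation, recovery and legal expenses, are counted.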
One of the best ways to foil phishing scams is to get proactive about educating employees and customers.
The Identity Theft Resource Center’s Velasquez advises banks to tell their customers that unless they initiated the contact themselves, they should “always go to the source.” She continues: “If you get an email that looks like it’s from your bank, don’t respond to the email. Go and engage with your bank, however you normally do that.”
Not only are bankers educating customers about phishing scams, but they are teaching all bank employees to communicate in an identifiable and consistent manner.
“Informing your customers how you will interact with them, when you will contact them, and what a legitimate engagement from you looks like is very, very important,” says Velasquez.
She also encourages customers to keep copies of legitimate bank communications for comparison purposes. When a suspect email arrives, the recipient then has a genuine example to measure it against.
APWG’s Cassidy agrees: “Customers need to be consistently instructed precisely how to trust communications with the bank. Any space or ambiguity in that trust wall will be exploited by deepfaked grandmothers wrapping your customers around their little fingers.”
Thinking outside the box
Technology provides some powerful weapons against phishing scams. Tools that read device IDs and pinpoint the locations of inbound calls, for example, can automatically flag suspicious activity.
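As a rough illustration of that kind of flagging, here is a minimal sketch in Python. It assumes a hypothetical customer profile pairing known device fingerprints with a home calling region; every name, field and data source below is illustrative, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class InboundCall:
    customer_id: str
    device_id: str       # device fingerprint reported by the telephony platform
    origin_region: str   # coarse location resolved from the inbound number

# Known-good profile data the bank has already collected (hypothetical).
KNOWN_DEVICES = {"cust-1001": {"dev-ab12", "dev-cd34"}}
HOME_REGIONS = {"cust-1001": "WA"}

def flag_call(call: InboundCall) -> list[str]:
    """Return the reasons, if any, a call should get extra verification."""
    reasons = []
    if call.device_id not in KNOWN_DEVICES.get(call.customer_id, set()):
        reasons.append("unrecognized device ID")
    if call.origin_region != HOME_REGIONS.get(call.customer_id):
        reasons.append("call origin outside the customer's usual region")
    return reasons

# A call from an unknown device in an unexpected region trips both checks.
print(flag_call(InboundCall("cust-1001", "dev-zz99", "FL")))
```

In practice, signals like these would typically feed a risk score rather than a hard block, so an unrecognized device triggers extra verification instead of an outright rejection.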
MacLean also suggests that banks use the enormous stockpile of customer data they’ve amassed to help detect scams.
Bankers might, for instance, include details of a customer’s last ATM transaction within an outbound email to authenticate that the email is legitimately sent by the bank.
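A minimal sketch of how that idea might look, assuming a hypothetical record of the customer’s last ATM withdrawal; the field names and wording are illustrative, not any bank’s actual system:

```python
# Embed a shared detail (a hypothetical last ATM withdrawal) in outbound
# bank email so the customer can verify the sender. Field names and wording
# are illustrative assumptions, not a real bank's system.
def build_alert_email(customer_name: str, last_atm: dict) -> str:
    proof = (
        f"For your security, here is a detail only we would know: your last "
        f"ATM withdrawal was ${last_atm['amount']:.2f} at our "
        f"{last_atm['branch']} branch on {last_atm['date']}."
    )
    return (
        f"Dear {customer_name},\n\n"
        "We are writing about recent activity on your account.\n\n"
        f"{proof}\n\n"
        "If that detail does not match your records, do not reply; contact "
        "your bank the way you normally do."
    )

print(build_alert_email(
    "Pat Doe",
    {"amount": 60.00, "branch": "Everett Main Street", "date": "June 3"},
))
```

The value lies in the shared secret rather than the wording: a phisher blasting out email at scale would not know the customer’s most recent transaction.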
In addition, when communicating with customers, MacLean challenges bankers to foster greater trust by maintaining the highest ethical standards.
Banks should be completely transparent about who (or what) is serving their customers, an issue of growing importance now that AI chatbots are fielding inquiries in call centers. Trust is eroded, says MacLean, “when you’re not informing the customer that it’s not a human being on the other end of the call.”
As banks embark upon a brave new world of AI, she proposes a two-pronged approach to strengthening “the human firewall.” First, financial institutions need better technology tools to combat phishing; second, they must dedicate more time and energy to educating both customers and employees on how to fend off attacks.
“A skill we’ll need to reinforce in our workforce is critical thinking,” says MacLean. When behaviors seem outside the norm, she says, “teach your employees to trust their intuitions.”
In the end, says MacLean, the question is: “How do you increase knowledge of what [bank employees and customers] should be watching for now that grammatical errors may not be a very good flag?”
Keeping ahead of the scammers, she concludes, will take “a blend of technology and the human.”
Elizabeth Judd is a freelance writer based in Chevy Chase, Maryland.