Social and synthetic media are spawning new fraud, feds warn


Every time we turn around, it seems that fraud has a new face. Old-school check fraud is still with us and causing grief, but fresh threats are emerging constantly. Recent reports from the Federal Trade Commission and U.S. intelligence agencies have highlighted two new sources of risk: social media and artificial intelligence "synthetic media" tools.

This article originally appeared in the November-December 2023 issue of ABA Banking Journal Directors Briefing.

More fraud was reported to originate from social media than any other method of contact, the FTC said in a new report. One in four people who reported losing money to fraud between January 2021 and June 2023 said the scam began on social media, accounting for $2.7 billion in reported losses, the FTC found.

While social media fraud is a problem for all ages, younger people are particularly susceptible, reflecting their outsized use of these platforms, the FTC said. Social media was the contact method used 38 percent of the time for people ages 20 to 29 who lost money to fraud. That rose to 47 percent of the time for 18- and 19-year-olds.

In the first half of 2023, online shopping scams generated 44 percent of all social media fraud reports, mostly related to undelivered goods. But in dollar terms, fake investment opportunities — often involving cryptocurrency — were costliest, making up 53 percent of total reported losses.

Meanwhile, intelligence agencies have been sounding the alarm about "deepfakes," synthetic media that can convincingly impersonate a person's voice and image. The National Security Agency, FBI, and Cybersecurity and Infrastructure Security Agency in September issued a paper urging organizations to prepare for this risk.

“Many organizations are attractive targets for advanced actors and criminals interested in executive impersonation, financial fraud and illegitimate access to internal communications and operations,” the agencies said.

The agencies are advising organizations to consider implementing several technologies to detect deepfakes and determine the provenance of multimedia, including real-time verification capabilities, passive detection techniques, and protection of high-priority officers and their communications.

A recent New York Times article recounted how an investor in Florida called his bank to discuss a large money transfer. Soon after, the bank received what appeared to be a second call from him. In fact, a software program had intercepted his information and artificially generated his voice in an attempt to trick his banker into moving the money elsewhere, the Times reported.
