The dangers of mobile remote deposit capture fraud

Bad actors are continually tweaking their strategies in an effort to bypass defense systems.

By Jesse Barbour

“Dear sir or ma’am, I represent Company X. If you deposit this check for $2,000 into your account using your phone, and wire $1,800 to this other account, you may keep the $200 difference for your services.”

Sound familiar? It’s a common fraud perpetrated by criminals on unsuspecting consumers every day using mobile remote deposit capture, or mRDC—a mechanism by which a person uses the camera on a mobile device to take a picture of a check and deposit the funds digitally.

mRDC fraud is typically executed as a social engineering-style attack, where a bad actor manipulates end users into depositing fraudulent checks.

Payment fraud activity at organizations has been on the rise since 2013, with record levels in 2018 and 2019—more than 80 percent of organizations have been victims of payments fraud attacks in those two years. Though there was a slight decline in 2020 and 2021 (74 percent and 71 percent, respectively), payment fraud is still affecting a large majority of organizations.

One reason the fraud is so prevalent is that bank checks are still the most popular method for B2B payments in the U.S., with 81 percent of businesses using paper checks to pay other firms. (It’s estimated that 40 percent of all B2B payments in the U.S. are made by check.)

And attacks are becoming increasingly sophisticated. One common tactic is impersonating executives at a company to gain the target’s confidence—often using deepfakes, where bad actors use neural networks to manipulate video or audio communication to appear to be someone the target trusts.

One variation on the scheme dupes targets into doing actual work. The target is then paid more than the agreed fee and told to return the difference.

These schemes work because bad actors know just enough to counterfeit a check that passes the available security and fraud checks. The fraudster has often already done reconnaissance on the target, typically using social engineering to obtain the person's online banking username and password, which reveals how much money is in the account. Going back to our original example: when the target deposits a $2,000 check, the bank makes the funds provisionally available before the check clears. The fraudster knows the target now has $1,800 accessible and works to get that $1,800 out before the check is identified as fraudulent.

The bad actor gets the target to either wire the money downstream or cash the money out via some other mechanism, such as a gift card.

Unfortunately, the end user is often on the hook for that money because they were unwittingly complicit in the scheme.

Why do people fall for this ruse? It’s a numbers game. Bad actors cast a broad net because they know that, statistically, any individual is unlikely to take the bait; the payoff comes from the small fraction who do. (And the fraud is incredibly profitable. In 2021, the Federal Trade Commission received nearly 8,500 complaints of check fraud, with a total loss of $153.4 million.)

Why would a person accept the clear risk of doing business with someone they don’t know? One answer is financial vulnerability. These are people for whom the $200 they think they could make is significant.

Three ways to tackle the problem

The first challenge in combating crime is identifying it. Bad actors know we’re doing everything we can to stop them, and they’re continually tweaking their behavior in an effort to bypass defense systems. It’s a dance that goes on in perpetuity.

Second, the space is fluid. Fraud looks different today than it looked 18 months ago or will look 18 months from now. That fluidity is exacerbated by open banking and the rise of fintechs, which have largely been driven by consumer demand. Just as innovation presents value and opportunity to legitimate end users, it also presents opportunity to bad actors who are constantly looking for ways to exploit those innovations.

Digital check fraud, specifically mRDC fraud, is also hard to stop because it requires an expertly trained eye to identify fraudulent checks. There are experts around the country who specialize in identifying these checks, but they mostly work independently, isolated from one another, so gaining access to their collective institutional knowledge is extremely difficult.

You might think an obvious solution is to harness the power of AI and machine learning. The problem is that the visual contents of a check image are unstructured data. To understand why that matters, let’s first look at structured data.

For example, a user ID is structured data: you can put it in one column and the user’s phone number in another. If certain phone numbers are associated with bad actors, we can easily build a system that flags fraudulent activity by watching for those numbers. That data is easy to get at because its meaning is inherent to the structure itself.
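To make the idea concrete, here is a minimal sketch of that kind of structured-data check. All of the account data, phone numbers, and field names below are hypothetical, invented purely for illustration; a real system would query production databases and use far richer signals.

```python
# Hypothetical known-bad phone numbers (illustrative only).
known_bad_numbers = {"555-0101", "555-0199"}

# Hypothetical deposit records: each field sits in a well-defined column.
deposits = [
    {"user_id": "u1", "phone": "555-0101", "amount": 2000},
    {"user_id": "u2", "phone": "555-0142", "amount": 150},
]

def flag_suspicious(deposits, bad_numbers):
    """Return deposits whose associated phone number is on the bad list."""
    return [d for d in deposits if d["phone"] in bad_numbers]

print(flag_suspicious(deposits, known_bad_numbers))
```

Because the phone number lives in its own named field, the rule is a one-line membership test; no interpretation of the data is needed.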

Conversely, image data is unstructured: it’s composed of tens of thousands of pixels arranged in a certain way, and that data (the pixels) needs to be turned into structured data before it’s useful. Humans naturally take the visual information in an image and map it onto structured concepts, but it’s incredibly difficult to get a computer to replicate the process.
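A toy example illustrates the gap. Below, an "image" is just a grid of grayscale pixel intensities (a made-up 4x4 grid, not a real check); nothing in the raw numbers says "signature" or "routing number". Even extracting one crude structured feature, such as where the darkest horizontal band sits, takes explicit computation.

```python
# A toy 4x4 grayscale "image": raw pixel intensities (0 = black, 255 = white).
# The dark second row might loosely stand in for a signature line.
image = [
    [255, 250, 248, 255],
    [ 40,  38,  42,  41],
    [255, 249, 251, 250],
    [255, 255, 255, 254],
]

# One crude structured feature: mean intensity per row,
# used here to locate the darkest band in the image.
row_means = [sum(row) / len(row) for row in image]
darkest_row = row_means.index(min(row_means))
print(darkest_row)  # prints 1: the dark band is the second row
```

Real check images have orders of magnitude more pixels, and the features that mark a check as fraudulent are far subtler than a dark band, which is why modeling them mathematically is so hard.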

mRDC check fraud is so difficult to stop because it’s fundamentally hard to mathematically model data that’s as unstructured as image data and because it takes a carefully trained eye to identify the subtle features that mark a check as fraudulent.

The onus is on financial institutions to educate their customers about the dangers of mRDC and other types of fraud. And because the fraudsters are constantly evolving their methods, the messages to consumers need to be continually updated.

Jesse Barbour is chief data scientist at Q2, which ABA endorses for its virtual banking platform.