Capturing These Three Data Types Can Transform Your Fraud Monitoring

By Matthew Van Buskirk

When we think of the work done by anti-fraud and AML teams, we automatically view it from the bank’s perspective. We know that bad actors are trying to commit fraud and launder money through the financial industry, and we take steps to stop it. We think in terms of how much it costs to keep the bad guys out.

But we rarely think about this from the bad guys’ perspective and how much it costs them to get in. Viewing things through their eyes is the key to understanding how to design modern AML programs—don’t try to block them outright. Instead, make it too expensive for them to bother trying.

Bad actors are changing their tactics quickly, and keeping up is difficult for banks.

Compromised Data and Synthetic Identities

Security firm Norton reported that 4.1 billion consumer records were compromised in 2019. We have reached a point where a fraudster may be more likely to pass standard KYC/CIP checks than a legitimate customer. This is possible because the fraudster can buy a full set of compromised identity data on the dark web and enter completely accurate customer information when signing up for an account. Since that information is entered via a script, the fraudster won’t make any mistakes, whereas a real person may fat-finger a digit in their Social Security number.

Compromised data sets are bad, but there is still a chance that the consumer will notice unexplained accounts on their credit report. Synthetic identities remove that risk for the bad actor. The FTC identified synthetic fraud as the fastest-growing form of fraud in the U.S.

This approach is even harder to detect since the identities are manufactured to appear real. Bad actors combine pieces of different individuals’ personal information into a synthetic persona, then patiently build a history for that persona, often including financial accounts, on-time loan payments and an online social media presence. In the fraud context, the bad actors are looking to build trust to gain access to large credit lines before “busting out” and disappearing. Most of the focus on synthetic identities is on their potential for fraud, but the more nefarious use case may be in money laundering, where the manufactured identity keeps operating normally with no fraud occurring.

If the only tools at the banks’ disposal are credit checks, validation of CIP data fields, and rules-based transaction monitoring, it will be nigh-on impossible to differentiate between the good customers and the wolves in sheep’s clothing.

So, how should a bank deal with these evolving threats?

In short, look to capabilities developed in the fintech space that center on gathering data beyond the scope of traditional KYC/AML programs.

At a fintech firm, the customer’s primary, if not only, means of interacting with the product is a smartphone. The firm never meets its customers face to face and may only rarely speak with them on the phone. A bank’s face-to-face interaction with its customers is often viewed as a positive since it allows for some certainty that the person is real, but that is a false sense of confidence. The various channels a customer can use to interact with a bank mean that the bank needs to spread its risk controls more widely. By contrast, fintech companies invest more deeply in digital capabilities. That investment mainly focuses on capturing additional data signals that can paint a more complete picture of customer activity to determine whether something feels off.

Three categories of data matter more than ever:

1. IP intelligence—Bad actors take steps to hide their internet tracks, making it difficult to trace the activity back to them. Legitimate customers may use tools such as VPNs to protect themselves from identity theft, but more sophisticated tools such as TOR are more often than not a mark of something suspicious going on. IP intelligence monitoring can give compliance teams insight into how the customer connects to the bank’s platform and prime them to ask the customer to reconnect without any masking techniques to validate who they are. Of course, this signal alone isn’t enough for the most sophisticated bad actors as they may be working with a network of compromised home computers and can route their activity through a customer’s IP address without the customer knowing.
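The screening described above can be sketched in a few lines. The CIDR ranges below are placeholders (one documentation range standing in for a VPN block); a real program would load Tor exit relays and VPN egress blocks from a commercial IP-intelligence feed rather than hard-coding them.

```python
import ipaddress

# Hypothetical sample of ranges flagged as anonymizing infrastructure.
# In practice these come from an IP-intelligence vendor feed.
FLAGGED_RANGES = [
    ipaddress.ip_network("185.220.100.0/24"),  # illustrative Tor exit range
    ipaddress.ip_network("192.0.2.0/24"),      # documentation range, stand-in for a VPN block
]

def ip_risk_signal(ip_str: str) -> str:
    """Classify a connecting IP as 'masked' or 'clear' against flagged ranges."""
    ip = ipaddress.ip_address(ip_str)
    if any(ip in net for net in FLAGGED_RANGES):
        # Compliance can ask the customer to reconnect without masking.
        return "masked"
    return "clear"
```

A "masked" result is a prompt for follow-up, not proof of wrongdoing: as the article notes, legitimate customers use VPNs too, so this signal should feed a risk score rather than trigger an automatic block.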

2. Device fingerprinting goes a step beyond simple IP intelligence to capture additional device attributes such as its operating system, web browser, hardware properties, languages installed, etc. Each element makes the bad actor’s job more demanding since they either need to figure out how to fake everything or literally go out and buy a new device for each account that they open. Adding device fingerprinting capabilities can suddenly surface connections across accounts that may look wholly unrelated and otherwise completely normal, allowing you to ask some pointed questions about why they all appear to be connecting through the same device.
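One minimal way to sketch this idea: hash a canonical view of the captured device attributes into a single fingerprint, then group accounts by fingerprint to surface clusters that share a device. The attribute names and the grouping function here are illustrative, not a production fingerprinting scheme.

```python
import hashlib
from collections import defaultdict

def device_fingerprint(attrs: dict) -> str:
    """Hash a sorted, canonical rendering of device attributes into one ID."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def shared_device_clusters(sessions):
    """Given (account_id, attrs) pairs, return fingerprints used by 2+ accounts."""
    by_fp = defaultdict(set)
    for account_id, attrs in sessions:
        by_fp[device_fingerprint(attrs)].add(account_id)
    return {fp: accts for fp, accts in by_fp.items() if len(accts) > 1}

# Two seemingly unrelated accounts connecting from the same device surface together:
device = {"os": "iOS 17.2", "browser": "Safari 17", "lang": "en-US", "screen": "1170x2532"}
clusters = shared_device_clusters([("acct_A", device), ("acct_B", device)])
```

Every extra attribute folded into the hash raises the cost of evasion, since the bad actor must spoof all of them consistently for each account.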

The prior two categories of data add technical complexity to any effort to circumvent a bank’s controls. That complexity requires an investment of time and money, but it is still possible for more sophisticated bad actors to find their way through.

3. Behavioral signals are the final, and perhaps most potent, category of data to capture. Behavioral analytics tools have become more sophisticated in recent years as tech companies sought to understand how their customers interact with their products. Knowing where the customer was tapping the screen was incredibly valuable for designers seeking to provide the best possible experience and for advertisers who wanted to know the best placement for an ad. Conveniently for bank AML teams, those same signals are clear indicators of abnormal customer behavior.

Considering bad actors’ perspective once again, it is important to remember that they also have daily lives to live. They do not want to sit at a desk and manually manage every compromised account, so they design software bots to help them. From a tech-savvy bank’s perspective, a bot interacting with its platform will look very different from a real person. Even if no bots are in use, there is still a good chance that the account will show signs of abnormal behavior in terms of when the customer interacts with the product, how long they stay in it, and which aspects of it they use. It is also likely that the bad actors will check in on all of their accounts at once, so the bank may see spikes in activity across many accounts that don’t look to be related to one another.
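One simple behavioral heuristic along these lines: scripted activity tends to have near-uniform gaps between events, while human sessions are irregular. The sketch below flags a session whose inter-event intervals have a low coefficient of variation; the 0.1 threshold is an assumed illustration, not a calibrated value.

```python
from statistics import mean, pstdev

def looks_scripted(event_times, cv_threshold=0.1):
    """Flag a session whose inter-event gaps are suspiciously uniform.

    event_times: sorted timestamps (seconds) of taps/requests in one session.
    cv_threshold: assumed cutoff on the coefficient of variation (std/mean);
    bots approach 0, humans are typically much higher.
    """
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    if len(gaps) < 3:
        return False  # too little data to judge
    m = mean(gaps)
    if m == 0:
        return True  # events firing in the same instant: not human
    return pstdev(gaps) / m < cv_threshold
```

In practice a signal like this would be one feature among many (time of day, session length, features touched) feeding a broader behavioral-analytics model, rather than a standalone rule.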

Where does this leave the bad actors? When faced with a bank that has invested in augmenting its technology stack with the ability to gather all of this additional data, they are likely to take their “business” elsewhere.

Matthew Van Buskirk is co-founder and co-CEO of Hummingbird.