The Financial Crimes Enforcement Network on Wednesday issued an alert with recommendations for financial institutions on how to detect deepfake identity fraud created using generative artificial intelligence.
Over the past two years, FinCEN has seen an increase in suspicious activity reporting by financial institutions describing the suspected use of deepfakes in fraud schemes targeting both institutions and customers, according to the alert. An agency analysis of Bank Secrecy Act data suggests that financial institutions often detect GenAI-produced synthetic content in identity documents by conducting re-reviews of a customer’s account opening documents. Indicators that additional scrutiny may be warranted during account opening include inconsistencies among multiple identity documents submitted by the customer; a customer’s inability to satisfactorily authenticate their identity, source of income or another aspect of their profile; and inconsistencies between the identity document and other aspects of the customer’s profile.
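The account-opening red flags above lend themselves to a simple rule-based screen. A minimal sketch, assuming hypothetical field names and an illustrative income-gap threshold (none of these specifics come from the alert itself):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApplicantProfile:
    # Hypothetical KYC fields for illustration; real records vary by institution.
    name_on_id: str
    name_on_application: str
    dob_on_id: str
    dob_on_application: str
    stated_income: float
    verified_income: Optional[float]  # None if verification failed

def account_opening_flags(p: ApplicantProfile) -> list:
    """Return red flags suggesting additional scrutiny during account opening."""
    flags = []
    # Inconsistencies between identity documents and the application.
    if p.name_on_id.strip().lower() != p.name_on_application.strip().lower():
        flags.append("name mismatch between identity document and application")
    if p.dob_on_id != p.dob_on_application:
        flags.append("date-of-birth mismatch between identity document and application")
    # Inability to authenticate source of income.
    if p.verified_income is None:
        flags.append("customer could not authenticate source of income")
    elif p.verified_income < 0.5 * p.stated_income:  # illustrative threshold
        flags.append("large gap between stated and verified income")
    return flags
```

In practice such checks would feed a risk score or trigger a manual re-review of the submitted documents rather than an automatic rejection.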
Beyond account openings, financial institutions have detected deepfake identity documents through enhanced due diligence on accounts that exhibited separate indicators of suspicious activity, FinCEN said. Those indicators include access to an account from an IP address that is inconsistent with the customer’s profile; patterns of apparent coordinated activity among multiple similar accounts; high volumes of chargebacks or rejected payments; and patterns of rapid transactions by a newly opened account or an account with little prior transaction history.
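The ongoing-monitoring indicators can be sketched the same way. The thresholds below are illustrative assumptions, not FinCEN guidance, and the coordinated-activity indicator is noted but omitted because it requires cross-account analysis:

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Hypothetical summary of recent account activity; names are illustrative.
    login_country: str
    profile_country: str
    chargeback_count: int
    payment_count: int
    transactions_last_24h: int
    account_age_days: int

def suspicious_activity_indicators(a: AccountActivity) -> list:
    """Flag the ongoing-monitoring indicators described in the alert.

    Detecting coordinated activity among multiple similar accounts is not
    modeled here; it needs clustering across accounts, not per-account rules.
    """
    indicators = []
    # Access from a location inconsistent with the customer's profile.
    if a.login_country != a.profile_country:
        indicators.append("access inconsistent with customer profile")
    # High volume of chargebacks or rejected payments (illustrative 10% ratio).
    if a.payment_count and a.chargeback_count / a.payment_count > 0.10:
        indicators.append("high volume of chargebacks or rejected payments")
    # Rapid transactions by a newly opened account (illustrative thresholds).
    if a.account_age_days < 30 and a.transactions_last_24h > 50:
        indicators.append("rapid transactions by a newly opened account")
    return indicators
```

An account tripping several of these rules would typically be routed to enhanced due diligence, where the identity documents on file can be re-examined for GenAI artifacts.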