By Sarah F. Hutchins and Alison L. Lamb
Personal finance journeys. Fraud detection. Determining a customer’s credit risk.
The use of artificial intelligence is a game changer, enabling banks to provide the services their customers increasingly seek online.
Machine-learning models and other tools can process and evaluate data far faster than human analysts. Automation through AI means tedious processes, such as opening bank accounts or conducting risk assessments, can be completed more efficiently.
While there are clear benefits, AI tools also bring risk. AI is increasingly coming under the watchful eye of state and federal regulators. Data privacy laws are being enacted across the country as lawmakers seek to protect sensitive personal information from both improper use and undisclosed use in profiling. The White House’s executive order on AI and the European Union’s adoption of a comprehensive AI law are the latest examples of AI-focused regulation. One focus of these regulations is to guard against bias in AI output.
With this evolving patchwork of laws around data privacy and an emerging technology, financial institutions should be mindful about mitigating their risk.
Here is a look at the continued use of AI in the financial services industry; what regulators and lawmakers are paying attention to; and best practices for institutions to reduce risk.
AI applications in banking today
AI has opened up new opportunities for banks as both traditional and technology-focused companies meet their customers in the digital space.
Credit card companies, for example, are using machine-learning models for fraud detection. These models can analyze large datasets in a short period of time, learning a person’s spending patterns and habits so they can flag suspicious transactions more efficiently and effectively.
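As a rough illustration of the idea, and not any particular institution’s system, a spending-pattern check can be sketched as a simple statistical outlier test. Production fraud models are far more sophisticated; the function name, threshold and dollar figures below are invented for the example:

```python
from statistics import mean, stdev

def flag_suspicious(history, new_transactions, threshold=3.0):
    """Flag transactions that fall far outside a customer's spending pattern.

    A transaction is flagged when its amount exceeds the historical mean
    by more than `threshold` standard deviations (a simple z-score test).
    """
    mu = mean(history)
    sigma = stdev(history)
    return [amt for amt in new_transactions if (amt - mu) / sigma > threshold]

# Example: a customer who typically spends $20-$60 per transaction
history = [25.0, 40.0, 31.0, 55.0, 22.0, 48.0, 35.0, 60.0, 28.0, 44.0]
print(flag_suspicious(history, [38.0, 52.0, 975.0]))  # → [975.0]
```

Real systems replace the single z-score with models trained on many features (merchant, location, time of day), but the principle is the same: learn what is normal for this customer, then flag deviations.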
Financial institutions also find AI useful when it comes to predicting personal finance journeys and determining someone’s credit risk. A 25-year-old customer from North Carolina can tell her bank or lender that she wants to get married by 28, purchase a house by 29 and have her first child by 30. Machine-learning models can look at her finances and produce a report that tells her how much she should have saved, and by when, to meet those life goals. The models can also replan her financial roadmap in moments if she decides to change a date or event.
Similarly, financial institutions can use AI to determine her credit risk. The models can look at the loans or debt she’s carrying, her income and where she lives (among other information) and determine how much risk she would present if the institution extended her a loan.
AI has other helpful applications, too. It can help verify the identity of a customer making a payment and predict stock market trades.
The state and federal laws impacting use of AI
As AI is used to analyze customer data in the financial services industry, regulators have been keeping a close eye.
Institutions utilizing AI should first be mindful of how that use fits into longstanding regulation of the financial industry, including laws like the Gramm-Leach-Bliley Act and rules already promulgated by agencies such as the Office of the Comptroller of the Currency (OCC). The financial industry has long been required to disclose its information-sharing practices to customers as well as to safeguard sensitive data. Use of AI tools and output must fit into these existing rules and regulations.
In 2021, the OCC, the Federal Reserve System, the Federal Deposit Insurance Corporation (FDIC), the Consumer Financial Protection Bureau (CFPB) and the National Credit Union Administration requested information related to financial institutions’ use of AI. The agencies were focused on cybersecurity risk, oversight of third parties and risks from broader data processing, among other topics. Last year, four federal agencies, including the Civil Rights Division of the Department of Justice and the CFPB, outlined efforts to prevent discriminatory outcomes from the use of automated systems, including AI. Their joint statement called out banks, among other institutions such as social media platforms and landlords.
In a separate announcement, Deputy Attorney General Lisa Monaco said the Department of Justice would seek “stiffer sentences” for those who use AI to commit white collar crimes such as market manipulation and fraud.
Other recent efforts include President Biden’s executive order that establishes a framework for regulating and governing the development and deployment of AI tools. The order encompasses a wide range of areas, from safety to security to privacy, equity and civil rights. Earlier this month, lawmakers with the European Union adopted the Artificial Intelligence Act to regulate the ever-evolving technology.
These developments around the regulation of AI make clear that businesses and financial institutions should prepare for a significant increase in rulemaking in the coming years.
Not waiting for federal lawmakers, states have moved forward on their own governance of data privacy and AI, both in the financial industry and beyond. California, Montana, Oregon, Florida and New York are a few examples of states that have passed or are considering comprehensive data privacy laws or regulations focused on AI.
There are open questions about how many of these state-specific regulations could be limited or preempted by a federal law governing data privacy.
How banks can best mitigate risk
It can feel like the AI landscape changes every day, with new uses and new regulations. Financial institutions, therefore, should consider a few key steps to help mitigate their risk when using their own AI tools:
- Recognize the value of data privacy and encryption. Understand what data the AI program is collecting and how it is storing that data. Encryption better protects customer data as it moves into machine-learning models and to third parties.
- Ensure your public disclosures regarding the treatment of data and how it is used are accurate and not deceptive. Provide and honor opt-out choices where required.
- Build AI into your policies. Banks and other financial institutions operate in a highly regulated industry. It makes sense to build AI usage into the design of products, and into how the company governs that use, rather than trying to capture it after it has spread throughout the entire system.
- Regularly monitor the ever-changing regulatory landscape and assess current laws governing collected data.
- Establish standards to have employees monitor the results of the machine learning model. Ensure the information the model is gathering is relevant to your processes and procedures.
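To make the first recommendation above concrete, here is a minimal sketch of one common safeguard that complements encryption: pseudonymizing direct identifiers before customer records flow into a model or to a third party. It uses only Python’s standard library; the key handling and field names are illustrative assumptions, not a production design:

```python
import hashlib
import hmac

def pseudonymize(customer_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed, irreversible token.

    HMAC-SHA256 gives a stable pseudonym: the same customer always maps to
    the same token (so models can still link records), but the token cannot
    be reversed to the original ID without the secret key.
    """
    return hmac.new(secret_key, customer_id.encode(), hashlib.sha256).hexdigest()

key = b"example-key"  # in practice, store and rotate this in a key vault
record = {"customer": pseudonymize("ACCT-004512", key), "amount": 975.0}
print(record["customer"][:16])  # a hex token, not the account number
```

The design choice here is linkability without exposure: a plain hash of an account number can be reversed by brute force over the small space of valid numbers, while a keyed hash cannot be checked without the secret key.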
Sarah Hutchins leads Parker Poe’s cybersecurity and data privacy team. Alison Lamb is a registered patent attorney who advises Parker Poe clients on patent portfolio strategy.