Compliance meets AI: A banking love story

March 14, 2024

Banks are at the preliminary stages of evaluating opportunities to use generative AI, but early areas of adoption include customer service, data analysis and reading large regulatory documents.

By Sam Holle and Kris Stewart

This article is the cover story for the March-April 2024 edition of ABA Risk and Compliance magazine.

We decided to ask ChatGPT to help us write an introduction to this article.

Q: We’re writing an article for a banking risk and compliance magazine on large language models. Tell us a fun fact about you that would encourage compliance officers to want to use you.

A: Fun fact: As a large language model, I can significantly streamline and enhance the compliance process for banking professionals. Compliance officers can leverage my capabilities …

Not what we hoped for, so we tried again.

Q: Can you be a little more succinct and fun?

A: Certainly! Fun fact: I’m your compliance co-pilot, navigating the complex terrain of banking regulations with lightning speed and precision. I’m the Swiss Army knife for compliance officers, simplifying searches, drafting documents and keeping you on the straight and narrow in the exciting world of banking compliance!

Better, right? And we could not leave well enough alone. So:

Q: That’s good. How about a catchy title for the article?

A: “Compliance revolution: How large language models are transforming banking regulation.”

Q: Too long and not fun. Do you have something shorter and pithier?

A: “Compliance meets AI: A banking love story.”

Nailed it! We promise, from here on out, the writing is all human!

Generative AI: What you need to know

Rarely does a day go by without our news feeds reporting on the rapidly evolving world of generative AI, or GAI: artificial intelligence that can generate text, images or other media in response to prompts. GAI models, and more specifically large language models (LLMs), are being deployed across industries, and financial institutions are looking for ways to do the same, a challenge made all the greater by the fact that they are highly regulated entities. Larger financial institutions are developing their own solutions, and of course vendors are rapidly figuring out ways to use this technology in their offerings.

So, what exactly are LLMs, and why should you become familiar with them? LLMs, such as ChatGPT and Bard, are models trained on vast amounts of textual data, including books, articles and white papers: essentially anything you can find on the internet. The models recognize patterns and relationships in human language, and they use that learning to generate text in response to your question (called a prompt in the AI world). Drawing on the vast data they were trained on, they answer your prompt through predictive analytics, determining the most likely words or phrases in response to your question.

Risks associated with GAI technology

If you like to keep an eye on technology news, you have likely seen reports of some spectacular “fails” when it comes to the use of LLMs. Take, for example, the attorney who submitted a brief to a court citing cases, complete with quotes, citations and judges’ names. The only problem was that ChatGPT made them up. This type of inaccuracy is known in the AI world as “hallucination.”

Hallucination is the tendency of LLMs to “make up facts,” drawing on the many terabytes of data they have been trained with to produce very convincing answers to prompts. Hallucination can occur because LLMs are trained to be predictive, based on word patterns, with the goal of being conversational rather than strictly knowledgeable.

The good news is that while hallucinations cannot be eliminated, they can be minimized and mitigated. LLM developers have been quick to respond to reported failures by building more safeguards into the results their models produce. OpenAI, the company behind ChatGPT, has introduced a methodology it refers to as “process supervision,” which is intended to evaluate each logical step in the model’s method for answering a prompt and, in doing so, to help detect and mitigate the model’s mistakes. Controlling what the model is trained on by using domain-specific, trusted data sources can also provide better results. Ultimately, there is no substitute for expert (human) verification of the results.

Better answers through better questions

Another way to mitigate hallucination is to provide better context via prompt engineering. Prompt engineering refers to the practice of composing your question so that the system provides a more accurate answer. With traditional search engines, we are used to typing very sparse queries and getting reasonable answers; those same queries do not work well as LLM prompts.

Researchers have identified best practices that produce much more accurate results. A couple of techniques that are useful are roleplaying and chain of thought. Roleplaying refers to explicitly assigning a role to the chatbot: “You are an experienced compliance officer writing policies and procedures for a bank” or “Analyze this case like Harvard Law Professor Cass Sunstein.” The idea is to give the chatbot a lot of context. It is helpful to think of chatbots as new college interns: very smart but requiring guidance to produce quality work.

Another useful technique is called chain-of-thought prompting. All you need to do is add the phrase “think step by step” to the end of a prompt. When you do so, the bot will explain the steps in its reasoning, which is essential for double-checking the work. It also leads to better results, which will not be any surprise to math teachers who have been admonishing students to show their work for centuries.
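
To make these two techniques concrete, here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name, persona and prompt wording are illustrative placeholders, not a recommendation of any particular vendor or configuration.

```python
# A minimal sketch of roleplaying plus chain-of-thought prompting, assuming the
# OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model your institution has approved
    messages=[
        # Roleplaying: give the model an explicit persona and plenty of context.
        {
            "role": "system",
            "content": (
                "You are an experienced compliance officer writing policies "
                "and procedures for a mid-size U.S. bank."
            ),
        },
        # Chain of thought: ask the model to walk through its reasoning.
        {
            "role": "user",
            "content": (
                "Outline the key elements of an acceptable use policy for "
                "employee use of public AI chatbots. Think step by step."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

The system message carries the role, and the closing “think step by step” nudges the model to show its reasoning so a human reviewer can check it.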

Roleplaying and chain-of-thought prompting are just two prompt engineering techniques. There are dozens of other good ones just a quick web search away. While AI is rapidly improving at discerning what the questioner is looking for, there is still value in experimenting and learning how to write better prompts.

Opportunities abound

Most financial institutions are at the preliminary stages of evaluating opportunities to use generative AI in their operations. One of the areas where we are seeing anticipated use of LLMs is customer service. Large language models can interact with a bank’s customers in very natural conversations. Depending on the data the bank trains the LLM on, chatbots can answer questions about customer accounts and even provide recommended product offerings and investment advice. Several large banks are working with internal LLMs to capture call center notes, organize information for investment advisors and organize other product data for customer service reps, with plans to roll out more customer-facing uses as extensive testing addresses potential risks.

Banks are also assessing opportunities to improve internal operations. Generative AI capabilities enable new ways to analyze data. One practical use case for most organizations is to train LLMs on all the pockets of organizational information that employees need to access to do their jobs. Imagine spending less time looking for things and more time solving problems, with the information you need quickly presented to you via an interactive chat with your friendly LLM bot.
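
As an illustration only, here is a rough sketch of how such an internal assistant might ground its answers: retrieve the most relevant snippets from the bank’s own documents and hand them to the model as context. The document names, the scoring and the prompt below are hypothetical, and real deployments typically use embedding-based search rather than simple keyword overlap.

```python
# A minimal sketch, not a production design: ground an internal Q&A bot on the
# bank's own documents by retrieving the best-matching snippets and passing
# them to the model as context. All document names and text are made up.
from collections import Counter

internal_docs = {
    "wire-transfer-procedure": "Wire requests over $10,000 require dual approval...",
    "bsa-training-schedule": "All frontline staff complete BSA/AML training annually...",
    "vendor-risk-policy": "New vendors are tiered by data sensitivity and criticality...",
}

def score(question: str, text: str) -> int:
    """Crude keyword-overlap score; real systems typically use embeddings."""
    question_words = Counter(question.lower().split())
    return sum(question_words[word] for word in text.lower().split())

def build_prompt(question: str, top_n: int = 2) -> str:
    """Attach the best-matching internal snippets as context for the LLM."""
    ranked = sorted(internal_docs.items(),
                    key=lambda item: score(question, item[1]),
                    reverse=True)[:top_n]
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in ranked)
    return (f"Answer using only the internal documents below.\n\n{context}\n\n"
            f"Question: {question}")

# The assembled prompt is then sent to whichever LLM the institution has approved.
print(build_prompt("What approvals are needed for a large wire transfer?"))
```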

LLMs also lend themselves to reading large regulatory documents and providing structured responses to aid a compliance or risk professional in digesting complex regulations and locating practical implementation guidance. The recent publication of the Small Business Lending rule by the CFPB illustrates these challenges. Between the regulation (complete with preamble and commentary) and the supplementary documents, there are over 1,200 pages of material to read and digest. A large language model, with structuring guidance from a legal expert, can digest all this material and provide useful responses to staff charged with understanding the regulation and producing policies, procedures, and implementation plans.
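
As a rough sketch of how that might work, assuming the rule text has already been extracted to plain text and reusing the same hypothetical OpenAI client and placeholder model name as above, the snippet below splits the material into model-sized chunks and asks for each obligation back as a structured record; everything the model returns still needs expert review.

```python
# A minimal sketch, assuming the OpenAI Python SDK and a regulation already
# extracted to plain text. Chunk size, model name and prompt wording are
# illustrative placeholders; outputs require expert (human) verification.
import json
from openai import OpenAI

client = OpenAI()

def chunk(text: str, max_chars: int = 12_000) -> list[str]:
    """Naive fixed-size chunking; production pipelines split on section headings."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def digest_regulation(full_text: str) -> list[dict]:
    """Ask the model to extract each obligation in a chunk as a JSON record."""
    obligations: list[dict] = []
    for number, excerpt in enumerate(chunk(full_text), start=1):
        prompt = (
            "You are assisting a bank compliance analyst. From the regulation "
            "excerpt below, return only a JSON array of objects with the keys "
            '"requirement", "who_must_comply" and "citation".\n\n'
            f"Excerpt {number}:\n{excerpt}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        obligations.extend(json.loads(response.choices[0].message.content))
    return obligations  # every record still needs review by a compliance expert
```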

Yes, there is risk

The continuing, rapid evolution of technology means that the opportunities for new and creative uses will continue to grow. According to a study from McKinsey, GAI could deliver value across the banking industry equal to an additional $200 billion to $340 billion annually if its use cases were fully implemented. With that kind of economic potential in the industry, compliance and risk professionals need to start now to learn more about this technology and how to help manage risk for their organizations.

Regulators are also raising concerns that are important to address. Privacy of customer data, bias in algorithms and other UDAP/UDAAP issues are on their minds. CFPB Director Chopra has reiterated that there is a wide range of potential threats to consumers’ civil rights, and he reminds us that “there is no exemption in our nation’s civil rights laws for new technologies that engage in unlawful discrimination. Companies must take responsibility for their use of these tools.”

Other institutional risks can result from employees using public versions of LLMs. When ChatGPT burst on the scene in late 2022, many of us logged on and were amazed at how it could author poems, suggest birthday party ideas for a 9-year-old and devise creative holiday cocktail names. Then we started thinking about how it could help us with work: drafting emails, writing a performance review or suggesting some creative marketing ideas for a yet-to-be-launched product. While that might seem harmless on the surface, you may be exposing trade secrets and creating privacy concerns for your organization. It is important to read the fine print. Many existing chatbots have terms of service that allow the company to reuse user data to “develop and improve” their services.

A critical early step is to develop and implement an acceptable use policy. While many of your existing policies should apply to employee use of public LLMs, given the rapid evolution of this technology and the attraction it generates, it is better to be direct about what actions are permissible. It may not be obvious to those using a public chatbot that they are exposing confidential information, so an explicit policy that clearly states what your staff can and cannot do with LLMs is a prudent early step. Having employees acknowledge the policy and take training courses on it are possible next steps.

Regulation on the horizon

Nearly 200 AI-related bills were introduced in 31 states in 2023, according to the Business Software Alliance, and 2024 is expected to see even more activity. These bills respond to concerns about misuse of AI, including its ability to create fake images and audio “deepfakes,” and seek to define and put safeguards around AI’s role in employment and credit decisioning. California has taken the lead, with two agencies, the California Civil Rights Council and the California Privacy Protection Agency, seeking to regulate automated decision-making. Reviewing AI through the lens of privacy may seem surprising, but it is part of a long-running trend of technological change prompting expanded conceptions of privacy. There was no need for a “right of publicity” until photography was invented. Nobody conceived of a “right to be forgotten” until the cost of documenting and archiving every youthful indiscretion became essentially free.

At the federal level, Sen. Charles Schumer of New York held the first of nine AI Insight Forums in September. These are closed-door events where tech leaders such as Sam Altman, Bill Gates and Elon Musk, together with tech ethicists such as Tristan Harris and Dr. Rumman Chowdhury, addressed the assembled senators to explain the technology and their concerns for the future. Notably, when Schumer asked the panel whether the government should regulate AI, every hand was raised.

While there may be broad agreement that regulation is needed, the federal government’s recent track record for regulating technology does not inspire confidence that D.C. will get the job done. “We’ve yet to pass a meaningful bill, for example, protecting people’s privacy on social media,” says Sen. John Kennedy of Louisiana. “If we can’t do it for an important … but less difficult issue like protecting privacy on the internet, I think artificial intelligence will be tougher.”

And finally, on Oct. 30, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order lays out eight guiding principles and priorities and tasks federal regulatory agencies with using them to advance and govern AI development. The overarching goal is to encourage responsible AI development while still creating a governing environment to mitigate the substantial risks. Financial services, as part of the critical infrastructure defined by the Patriot Act, will be affected by the regulatory activities that emerge from this order.

Conclusion

How do you wrap up a banking love story? (You remember the title, right?) Generative AI will change so much about how we function in the world in the months and years to come. Like many great love stories, only time will tell how the relationship between compliance and AI evolves. However, there is no time like the present to learn about generative AI and the opportunities and risks that will need to be managed. Enjoy the journey.

ABA Member Resources: ABA staff analysis of the executive order on artificial intelligence.

Sam Holle, JD, CRCM, is an associate director with the regulatory compliance analysis team at Wolters Kluwer. Reach him on LinkedIn. Kris Stewart, JD, CRCM, is a senior director on the decision science team at Wolters Kluwer. Reach her on LinkedIn.

Tags: Artificial intelligence, Risk management, Technology