Any new technology utilized by a financial institution—including artificial intelligence—must be deployed in a way that complies with existing law, top officials from four federal agencies said today. Speaking at a symposium on responsible AI use hosted by the National Fair Housing Alliance, officials from the Federal Reserve, FDIC, the Office of the Comptroller of the Currency and the Consumer Financial Protection Bureau stressed that banks are ultimately responsible for how the technology is deployed, even when they contract with third parties to provide AI-powered products and services. They also said their agencies already have the statutory authority to regulate the emerging technology.
“You want to be cautious thinking about legislation in this area without first considering what will be the impact on our existing statutory authorities,” FDIC Chairman Martin Gruenberg said. “And a good first rule here, in legislation and the utilization of the technology, is to do no harm. We want to be sure to preserve our existing authorities and hold institutions accountable for the utilization of technology.”
“It doesn’t matter what label you put on it and what the underlying technique is,” Fed Vice Chairman for Supervision Michael Barr said. “Financial institutions and banks understand what model risk management is and how they’re expected to conduct it. If they began to use newer techniques of artificial intelligence, including language learning models, then they need to make sure that those comply with model risk management expectations.”
Acting Comptroller of the Currency Michael Hsu also said his agency doesn’t need any new laws regarding AI. At the same time, the “newness” of the technologies means banks and regulators need to engage with each other to understand how they can best achieve supervisory objectives, he added. “The ‘how’ is important and I think this does require quite a bit of engagement.”
All four officials said they are building staff resources on AI governance. CFPB Director Rohit Chopra said his agency is putting more effort into developing whistleblowers with the technological expertise to spot violations. “They’re going to be a huge source of really good, high-quality investigative information about lackadaisical use of modeling [and] things that they have said were discriminatory but an institution has turned a blind eye or gone ahead with it,” he said.