For banks interested in adopting artificial intelligence, establishing clear and effective controls at each phase of the technology's implementation could help ensure that innovations are helpful rather than dangerous, Acting Comptroller of the Currency Michael Hsu said. Speaking last week at a conference on AI and financial stability, Hsu pointed to the development of electronic trading as a useful parallel for charting banks' growing use of AI. In both cases, the technology is used first to produce inputs to human decision-making, then as a co-pilot enhancing human actions, and finally as an agent executing its own decisions on behalf of humans, he said.
“The risks and negative consequences of weak controls increase steeply as one moves from AI as input to AI as co-pilot to AI as agent.… Before opening a gate and pursuing the next phase of development, banks should ensure that proper controls are in place and accountability is clearly established,” Hsu said.
Hsu also spoke about the need for "shared responsibility" for problems resulting from faulty implementation of the technology, saying most of the responsibility currently falls on the companies using AI rather than the companies that provide it. He said the shared responsibility model now used for cloud computing could provide a framework for AI accountability. In cloud computing, responsibility for operations, maintenance and security is divided between customers and cloud service providers depending on the level of service a customer selects.