By John Carlson
In response to the Biden administration’s sweeping Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the Treasury Department on March 27 released Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.
The 51-page report focuses on the current state of AI-specific cybersecurity risks in financial services, including an overview of current use cases, recommended best practices, and the challenges and opportunities presented by the current environment.
It is based on 42 in-depth interviews with industry experts at financial institutions; information technology and anti-fraud/anti-money laundering companies; and industry associations (including American Bankers Association staff). The appendix includes a six-page paper developed by the Financial Services Sector Coordinating Council’s Research and Development Committee titled Artificial Intelligence in the Financial Sector: Cybersecurity and Fraud Use Cases and Risks. ABA organized a series of meetings with financial sector experts, Treasury and other government officials in the fall of 2023 in support of the FSSCC’s R&D Committee. The FSSCC R&D Committee paper examined the current and anticipated use cases of cybersecurity and fraud AI solutions within the financial sector, how adversaries are utilizing AI to introduce risk to the sector and how firms are managing AI-related risks.
The report outlines ways cyber threat actors can use AI, including social engineering, malware/code generation, vulnerability discovery and disinformation. The report notes that “AI allows bad actors to impersonate individuals, such as employees and customers of financial institutions, in ways that were previously much more difficult.” These techniques include deepfakes that mimic the voices and videos of real people, as well as the creation of synthetic identities.
The report adds: “Financial institutions have used AI systems in connection with their operations, and specifically to support their cybersecurity and anti-fraud operations, for years.” The report zeros in on the impact of generative AI, adding that financial institutions “are proceeding with caution on generative AI and are trying to address generative AI risks by providing guardrails and developing internal policies for the acceptable use of this technology.” The report also identifies the importance of vast quantities of high-quality data for training, testing and refining AI models.
The report emphasizes the importance of third-party risk management and data integrity. It adds: “It is very likely that often overlooked third-party risk considerations such as data integrity and data provenance will emerge as significant concerns for third-party risk management.” The report also cautions that AI will increase dependency on major service providers.
The report notes that the financial services sector is a highly regulated industry and offers a model of responsible artificial intelligence governance at a time when risk management of artificial intelligence remains an unresolved issue across all industries. The report includes an overview of the frameworks financial sector regulatory agencies rely on, including model risk management, technology risk management, data management, compliance and consumer/investor protection, third-party risk management, securities market access risk management and insurance.
While the report states that financial institutions understand the expectations of their U.S. regulators and can have a productive dialogue with regulators on artificial intelligence issues, there are concerns over future regulation and regulatory fragmentation internationally.
The report points out that financial institutions are increasing information sharing around fraud given concerns that AI will be used to perpetrate more sophisticated phishing emails and fraud impersonation. The report highlights private sector efforts to address fraud, including the Bank Policy Institute and ABA “both making efforts to close the fraud information-sharing gap across the banking sector. ABA’s initiative is specifically aimed at closing the fraud data gap for smaller financial institutions.” It adds, “ABA is working to design, develop, and pilot a new information-sharing exchange focused on fraud and other illicit finance activities.” It adds: “The U.S. Government, with its collection of historical fraud reports, may be able to assist with this effort to contribute to a data lake of fraud data that would be available to train AI, with appropriate and necessary safeguards. Treasury can be a leader in this space and will work with the financial sector, including ABA and FS-ISAC, to improve fraud data sharing from Treasury.”
The paper lays out several best practices for managing AI-specific cybersecurity risks, including:
- Situate AI risk within enterprise risk management programs.
- Develop and implement an AI framework.
- Integrate risk management functions for AI.
- Evolve the chief data officer role and map the data supply chain.
- Ask the right questions of vendors.
- Survey NIST’s cybersecurity framework for AI opportunities.
- Implement risk-tiered multifactor authentication mechanisms.
- Pick the right tool for the job and risk tolerance.
The paper also highlights several next steps and opportunities, including:
- Develop a common AI lexicon.
- Address the growing capability gap between the largest and smallest financial institutions.
- Narrow the fraud data divide.
- Clarify how AI will be regulated in the future.
- Expand the NIST AI Risk Management Framework.
- Develop best practices for data supply chain mapping disclosures (aka “nutrition labels”).
- Decipher explainability for black box AI solutions.
- Address gaps in human capital.
- Untangle digital identity solutions.
- Coordinate with international authorities.
Last year, Treasury launched a public-private sector collaboration to address challenges in the expanding use of cloud computing. The AI report references this effort and how Treasury leveraged the Cloud Executive Steering Group, which is chaired by leaders in the financial sector with expertise in financial sector cybersecurity, in developing the AI report. Treasury could leverage this public-private sector collaboration model to advance some of the next steps and opportunities outlined in the report.
John Carlson is SVP for cybersecurity regulation and resilience at ABA.