Financial institutions should continuously refine their cybersecurity strategies to address threats from artificial intelligence, and they shouldn’t forget the human element when adopting new technologies, according to a white paper released today by the Financial Services Sector Coordinating Council. The recommendations by FSSCC’s research and development committee were released as part of the U.S. Treasury Department’s report on cybersecurity risks in the financial sector and were based on a series of discussions organized by American Bankers Association staff in the fall of 2023.
Implementing cutting-edge AI tools to detect and respond to threats is imperative, according to FSSCC. However, it is equally vital to maintain skilled human oversight to interpret AI data accurately and mitigate potential AI inaccuracies or biases, it added. The sector must continue to prioritize the adoption of AI models for fraud prevention while also preparing for the complex phishing and social engineering tactics that AI enables.
Aligning with approaches like the National Institute of Standards and Technology’s AI Risk Management Framework is critical, according to FSSCC. “Financial institutions must strengthen their risk management protocols, focusing on emerging risks from the increased availability of AI, especially GenAI models, which includes data poisoning and model biases,” it said. At the same time, the financial sector should collaborate to develop standardized strategies for managing AI-related risk. Individually, financial institutions should recognize the value of human judgment in AI models and invest in their workforces.
Regulators also have a role to play, according to FSSCC. “Regulators should identify clear regulatory outcomes and objectives, while enabling regulated entities the ability to deploy effective risk management techniques based on common standards and best practices,” it said.