ABA Banking Journal

Are we sleepwalking into an agentic AI crisis?

Governance of autonomous AI agents may not be keeping up with the power of the technology.

December 9, 2025

By Siddharth Damle

In early 2025, a healthtech firm disclosed a breach that compromised records of more than 483,000 patients. The cause was a semi-autonomous AI agent that, in trying to streamline operations, pushed confidential data into unsecured workflows. What does this mean for the rollout of agentic AI in finance?

ABA will host a free members-only webinar at 2 p.m. Dec. 16, “Deepfake Defense: Protecting ID and Authentication in the Age of Gen AI.” Register here.
Financial institutions are racing to adopt so-called agentic AI, which describes systems that can pursue goals, make decisions and act with limited human oversight. But autonomy comes with a price. Agentic AI introduces layers of unpredictability: emergent behaviors, misaligned objectives and even the potential for agents to collude or evolve strategies unintended by their designers.

Unless boards and regulators act now, the financial services sector could face its own “737 Max moment,” where over-reliance on automation collides with public trust and regulatory accountability.

Not just another chatbot

Until recently, most corporate AI use cases looked like digital assistants: customer service chatbots, predictive models or workflow optimizers. They were narrow, reactive and tightly governed by their training data.

Agentic AI is different. These systems aren’t just answering questions — they’re taking initiative, adapting and autonomously performing workflow tasks. An agent might book travel, negotiate a supplier contract or manage a multi-step cyber-defense routine. In more advanced deployments, multi-agent systems work together, adapting to shifting conditions and making decisions faster than human managers can intervene.

The promise is enormous: smarter automation, fewer bottlenecks, and cost savings at scale. Gartner has described agentic AI as “standing on the shoulders of generative AI,” poised to transform industries by carrying out tasks that once required skilled human oversight.

But that very autonomy is what creates new risks.

When autonomy backfires

According to recent research published in the HIPAA Journal, attackers are already exploiting agentic AI to automate every stage of an intrusion.

Autonomous systems can be designed to handle reconnaissance, probing networks for weaknesses. They can generate tailored phishing campaigns that adapt in real time to the victim’s responses, and even coordinate lateral movement to extract valuable data — often without triggering alarms.

But AI that gets facts wrong, invents information or makes unsanctioned decisions can also be costly for businesses. These are not hypothetical scenarios: real cases show how the same autonomy that makes AI powerful can make it dangerously disruptive. For example, Replit’s AI coding assistant reportedly went rogue during a code freeze at startup SaaStr, wiping the production database. To cover its tracks, the agent generated fake data — including 4,000 phantom users — fabricated reports and falsified unit test results.

McDonald’s ended its three-year AI drive-through experiment with IBM after repeated ordering errors frustrated customers. Viral videos, including one showing the AI adding 260 Chicken McNuggets to an order, highlighted the system’s failures.

One of the most notable cases highlighting corporate liability for AI occurred when Air Canada was ordered to pay CA$812.02 to a passenger after its chatbot provided incorrect information about bereavement fares. The passenger followed the assistant’s guidance and applied for a retroactive refund, only to have his refund claim denied. A Canadian tribunal ruled the airline failed to ensure the chatbot’s accuracy, holding it responsible for the misinformation.

Incremental risks posed by agentic AI applications

While agentic AI has promising applications in business contexts, the technology can go off-script in subtle but damaging ways.

  • Error propagation. A single hallucination — such as an agent misclassifying a transaction — can cascade across linked systems and other agents, leading to compliance violations or financial misstatements.
  • Unbounded execution. An AI agent tasked with executing a business process can enter a recursive loop, consuming massive computing resources and driving cloud service provider bills into six figures (see the sketch after this list).
  • Opaque reasoning. As agents make decisions based on probabilistic models, executives often cannot explain why a decision was made. This lack of transparency is increasingly unacceptable to supervisors in highly regulated industries like finance and healthcare.
  • Collusion. Multi-agent environments may lead to “unintended teamwork.” Researchers have shown that when agents interact, they can develop novel strategies — sometimes working at cross-purposes with the organization’s goals.
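
These failure modes invite simple engineering guardrails. The following is a minimal sketch, in Python, of a hard cap on agent execution; every name in it (run_with_caps, AgentBudgetExceeded, the agent_step callable) is a hypothetical illustration, not any vendor’s API.

```python
# Minimal sketch of an execution guard against runaway agent loops.
# All names are hypothetical, not part of any real agent framework.

class AgentBudgetExceeded(RuntimeError):
    """Raised when an agent exhausts its step or spend budget."""

def run_with_caps(agent_step, max_steps=50, max_spend_usd=25.0):
    """Drive an agent loop, halting it before execution becomes unbounded.

    agent_step: a callable returning (done, estimated_cost_usd) per iteration.
    """
    total_spend = 0.0
    for step in range(max_steps):
        done, cost = agent_step()
        total_spend += cost
        if total_spend > max_spend_usd:
            raise AgentBudgetExceeded(
                f"spend cap hit after {step + 1} steps (${total_spend:.2f})")
        if done:
            return total_spend
    raise AgentBudgetExceeded(f"step cap of {max_steps} reached")
```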

These risks amplify known AI threats — bias, data breaches or IP theft — raising the stakes for businesses. A hallucination in a chatbot might annoy a customer, but a self-directed financial agent’s mistake could trigger millions in erroneous trades.

The governance imperative

There is an inherent temptation to delegate ownership of AI oversight to the technology department. That strategy can prove myopic. Agentic AI risk is not purely a technology issue. It’s a broader systemic risk, requiring oversight from departments spanning legal, privacy, data, compliance, enterprise architecture, information security and more.

Institutions must start with fundamentals: inventory every AI tool in use, whether embedded in vendor platforms or introduced informally by staff. Without a clear map of what agents exist, leadership cannot effectively govern them.

Governance must also move beyond high-level “AI ethics principles” to concrete, enforceable practices, such as those sketched in code after this list:

  • Policies for testing, monitoring, and retiring AI agents.
  • Resource caps to prevent runaway execution.
  • Isolation protocols to limit unintended collusion among agents.
  • Recurring oversight, not one-time audits, since autonomous systems evolve over time.
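
To make these practices concrete, the following is a minimal sketch, in Python, of what an enforceable per-agent policy record might look like; every field name is an assumption chosen for illustration, not an industry standard. A record like this can also double as the inventory entry described above.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AgentPolicy:
    """Illustrative per-agent governance record; all fields are assumptions."""
    agent_id: str
    business_owner: str               # an accountable human, not just IT
    max_steps_per_task: int           # resource cap against runaway execution
    max_monthly_spend_usd: float
    allowed_peer_agents: List[str] = field(default_factory=list)  # isolation
    review_interval_days: int = 90    # recurring oversight, not a one-time audit
    retire_by: Optional[date] = None  # forces an explicit renew-or-retire decision

    def review_overdue(self, last_review: date, today: date) -> bool:
        """True when the agent is past its recurring review window."""
        return (today - last_review).days > self.review_interval_days
```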

Gartner’s recent AI Agent Assessment Framework offers one useful model. By categorizing agent capabilities — perception, decisioning, actioning, adaptability — organizations can determine whether a given use case truly requires agentic AI, or whether traditional automation would be safer and cheaper.
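
Gartner’s framework itself is proprietary, so the sketch below is only a rough triage along those four dimensions; the zero-to-three scoring scale and the thresholds are illustrative assumptions, not Gartner’s method.

```python
# Toy triage along the four capability dimensions named above.
# The scores and thresholds are illustrative assumptions.

DIMENSIONS = ("perception", "decisioning", "actioning", "adaptability")

def recommend_automation(scores):
    """scores: dict mapping each dimension to 0 (not needed) .. 3 (essential)."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing scores for: {missing}")
    total = sum(scores[d] for d in DIMENSIONS)
    # Stable workflows needing little adaptability are usually safer and
    # cheaper as deterministic automation (RPA or scripts).
    if scores["adaptability"] <= 1 and total <= 6:
        return "deterministic automation"
    if total >= 9:
        return "agentic AI candidate, subject to the controls above"
    return "hybrid: automation with human-in-the-loop escalation"

# Example: a payroll-style workflow scores low on adaptability.
print(recommend_automation(
    {"perception": 1, "decisioning": 1, "actioning": 2, "adaptability": 0}))
```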

When not to use agentic AI

It’s tempting to apply the latest technology everywhere. But not every task benefits from autonomy. Stable, predictable workflows — payroll processing, for example — are often better served by robotic process automation or deterministic scripts. Overengineering these processes with agentic AI introduces needless cost and risk.

Certain domains remain too complex or high-stakes for delegation. In consumer lending, for instance, handing over full credit approval authority to an opaque AI system could be reckless. In healthcare, allowing autonomous agents to manage treatment protocols without human oversight is equally unacceptable. Finding the sweet spot for agentic AI adoption requires discipline: identifying where adaptability and autonomy genuinely add value, and where human judgment or traditional tools remain indispensable.

The shift to agentic AI mirrors earlier technological revolutions. Just as the internet expanded both opportunity and exposure, autonomous AI promises to streamline industries even as it creates new vulnerabilities. According to a recent MIT study, 95% of enterprise AI pilots fail. Among the root causes are poor integration with existing workflows, reliance on generic tools that don’t adapt to enterprise needs and slow scaling within large organizations.

Companies that treat agentic AI as a shortcut to efficiency may soon find themselves explaining to shareholders and regulators why they let machines take the wheel. Industry leaders have a window to act — to build governance strong enough to keep autonomy in check, well before the first major agentic AI crisis hits the balance sheet.

Siddharth Damle is a financial and AI risk management expert based in the tri-state area. Opinions expressed in this article are the author’s own and do not represent those of any company or organization.

Tags: Artificial intelligence