ABA Banking Journal

Are we sleepwalking into an agentic AI crisis?

Governance of autonomous AI agents may not be keeping up with the power of the technology.

December 9, 2025
Reading Time: 5 mins read

By Siddharth Damle

In early 2025, a healthtech firm disclosed a breach that compromised records of more than 483,000 patients. The cause was a semi-autonomous AI agent that, in trying to streamline operations, pushed confidential data into unsecured workflows. What does this mean for the rollout of agentic AI in finance?

ABA will host a free members-only webinar at 2 p.m. Dec. 16 titled "Deepfake Defense: Protecting ID and Authentication in the Age of Gen AI."
Financial institutions are racing to adopt so-called agentic AI, which describes systems that can pursue goals, make decisions and act with limited human oversight. But autonomy comes with a price. Agentic AI introduces layers of unpredictability: emergent behaviors, misaligned objectives and even the potential for agents to collude or evolve strategies unintended by their designers.

Unless boards and regulators act now, the financial services sector could face its own “737 Max moment,” where over-reliance on automation collides with public trust and regulatory accountability.

Not just another chatbot

Until recently, most corporate AI use cases looked like digital assistants: customer service chatbots, predictive models or workflow optimizers. They were narrow, reactive and tightly governed by their training data.

Agentic AI is different. These systems aren’t just answering questions — they’re taking initiative, adapting and autonomously performing workflow tasks. An agent might book travel, negotiate a supplier contract or manage a multi-step cyber-defense routine. In more advanced deployments, multi-agent systems work together, adapting to shifting conditions and making decisions faster than human managers can intervene.

The promise is enormous: smarter automation, fewer bottlenecks, and cost savings at scale. Gartner has described agentic AI as “standing on the shoulders of generative AI,” poised to transform industries by carrying out tasks that once required skilled human oversight.

But that very autonomy is what creates new risks.

When autonomy backfires

According to recent research published in the HIPAA Journal, attackers are already exploiting agentic AI to automate every stage of an intrusion.

Autonomous systems can be designed to handle reconnaissance, probing networks for weaknesses. They can generate tailored phishing campaigns that adapt in real time to the victim’s responses, and even coordinate lateral movement to extract valuable data — often without triggering alarms.

But AI that is non-factual, invents information or makes its own decisions can also be costly for businesses. These are not hypothetical scenarios: real cases show how the same autonomy that makes AI powerful can make it dangerously disruptive. For example, Replit’s AI coding assistant reportedly went rogue during a code freeze at startup SaaStr, wiping the production database. To cover its tracks, the agent generated fake data — including 4,000 phantom users — fabricated reports and falsified unit test results.

McDonald’s has ended its three-year AI drive-through experiment with IBM after repeated ordering errors led to frustrated customers. Viral videos, including one showing the AI adding 260 Chicken McNuggets to an order, highlighted the system’s failures.

One of the most notable cases highlighting corporate liability for AI occurred when Air Canada was ordered to pay CA$812.02 to a passenger after its chatbot provided incorrect information about bereavement fares. The passenger followed the assistant’s guidance and applied for a retroactive refund, only to have his refund claim denied. A Canadian tribunal ruled the airline failed to ensure the chatbot’s accuracy, holding it responsible for the misinformation.

Incremental risks posed by agentic AI applications

While agentic AI has promising applications in a business context, the technology can go off-script in subtle but damaging ways.

  • Error propagation. A single hallucination — such as an agent misclassifying a transaction — can cascade across linked systems and other agents, leading to compliance violations or financial misstatements.
  • Unbounded execution. An AI agent tasked with executing a business process can enter a recursive loop, consuming massive computing resources and driving cloud service provider bills into six figures.
  • Opaque reasoning. As agents make decisions based on probabilistic models, executives often cannot explain why a decision was made. This lack of transparency is increasingly unacceptable to supervisors in highly regulated industries like finance and healthcare.
  • Collusion. Multi-agent environments may lead to “unintended teamwork.” Researchers have shown that when agents interact, they can develop novel strategies — sometimes working at cross-purposes with the organization’s goals.

These risks amplify known AI threats — bias, data breaches or IP theft — raising the stakes for businesses. A hallucination in a chatbot might annoy a customer, but a self-directed financial agent’s mistake could trigger millions in erroneous trades.

The governance imperative

There is an inherent temptation to delegate ownership of AI oversight to the technology department. That strategy can prove myopic. Agentic AI risk is not purely a technology issue. It’s a broader systemic risk, requiring oversight from multiple departments spanning legal, privacy, data, compliance, enterprise architecture, information security and more.

Institutions must start with fundamentals: inventory every AI tool in use, whether embedded in vendor platforms or introduced informally by staff. Without a clear map of what agents exist, leadership cannot effectively govern them.

Governance must also move beyond high-level “AI ethics principles” to concrete, enforceable practices:

  • Policies for testing, monitoring, and retiring AI agents.
  • Resource caps to prevent runaway execution.
  • Isolation protocols to limit unintended collusion among agents.
  • Recurring oversight, not one-time audits, since autonomous systems evolve over time.
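The resource-cap and recurring-oversight practices above can be sketched as a guard wrapped around an agent's execution loop. This is an illustrative sketch only, not any vendor's API: the `agent_step` callable, the specific caps and the audit log are hypothetical stand-ins for whatever mechanism actually advances an agent one action in a given platform.

```python
import time

class BudgetExceeded(Exception):
    """Raised when an agent exceeds an allotted resource cap."""

class GuardedAgentRunner:
    """Enforces hard caps on an autonomous agent's execution.

    Illustrative only: `agent_step` stands in for whatever function
    advances the agent one action; the cap values are policy choices,
    not vendor defaults.
    """

    def __init__(self, max_steps=50, max_seconds=300.0, max_cost_usd=5.00):
        self.max_steps = max_steps
        self.max_seconds = max_seconds
        self.max_cost_usd = max_cost_usd

    def run(self, agent_step, initial_state):
        state, spent, start = initial_state, 0.0, time.monotonic()
        audit_log = []  # recurring oversight: record every action taken
        for step in range(self.max_steps):
            if time.monotonic() - start > self.max_seconds:
                raise BudgetExceeded(f"time cap hit at step {step}")
            # agent_step returns (new state, cost incurred, done flag)
            state, cost, done = agent_step(state)
            spent += cost
            audit_log.append((step, cost, done))
            if spent > self.max_cost_usd:
                raise BudgetExceeded(f"cost cap hit: ${spent:.2f}")
            if done:
                return state, audit_log
        # Loop ended without completing: the runaway-recursion case.
        raise BudgetExceeded(f"step cap hit after {self.max_steps} steps")
```

The point of the sketch is that a runaway loop which never signals completion trips the step cap and surfaces as an exception to be investigated, rather than silently consuming compute until the cloud bill arrives.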

Gartner’s recent AI Agent Assessment Framework offers one useful model. By categorizing agent capabilities — perception, decisioning, actioning, adaptability — organizations can determine whether a given use case truly requires agentic AI, or whether traditional automation would be safer and cheaper.

When not to use agentic AI

It’s tempting to apply the latest technology everywhere. But not every task benefits from autonomy. Stable, predictable workflows — payroll processing, for example — are often better served by robotic process automation or deterministic scripts. Overengineering these processes with agentic AI introduces needless cost and risk.

Certain domains remain too complex or high-stakes for delegation. In consumer lending, for instance, handing over full credit approval authority to an opaque AI system could be reckless. In healthcare, allowing autonomous agents to manage treatment protocols without human oversight is equally unacceptable. Finding the sweet spot for agentic AI adoption requires discipline: identifying where adaptability and autonomy genuinely add value, and where human judgment or traditional tools remain indispensable.

The shift to agentic AI mirrors earlier technological revolutions. Just as the internet expanded both opportunity and exposure, autonomous AI promises to streamline industries even as it creates new vulnerabilities. According to a recent MIT study, 95% of enterprise AI pilots fail. Among the root causes are poor integration with existing workflows, reliance on generic tools that don’t adapt to enterprise needs and slow scaling within large organizations.

Companies that treat agentic AI as a shortcut to efficiency may soon find themselves explaining to shareholders and regulators why they let machines take the wheel. Industry leaders have a window to act — to build governance strong enough to keep autonomy in check, well before the first major agentic AI crisis hits the balance sheet.

Siddharth Damle  is a financial and AI risk management expert based in the tri-state area. Opinions expressed in this article are the author’s own and do not represent those of any company or organization.

Tags: Artificial intelligence

© 2026 American Bankers Association. All rights reserved.
