By Steve Morgan
Imagine being able to improve service levels by 50%, increase quality levels to nearly 100% and reduce workloads by 20 to 80% across all areas of customer service and operations. That opportunity isn’t on the horizon. It’s here right now. But with that kind of promise comes real risk: without the right approach, the same tools that drive progress can just as easily create new problems.
Phrases like “AI slop,” “hallucinations” and “confabulation” have entered the conversation. At the end of the day, they all point to the same thing: the potential for mistakes or poor content when using AI. “AI slop,” in particular, refers to low-quality or generic output, typically on the creative side.
Now, making mistakes with technology is nothing new, especially in banking. Our industry has long been an early adopter of innovation, from ATMs and tap payments to digital wallets and automated operations. I’ve seen technology implementations go wrong firsthand. So, what’s different now with AI, and how can we avoid those same pitfalls while delivering true value for customers and the bank?
What’s new . . . what isn’t
Let’s start with what hasn’t changed. Solid project management and effective change management are still essential. In fact, they’re even more critical now, given the significant impact AI and automation can have on roles, some of which may be partially or fully transformed. Managing these impacts requires real human expertise and empathy.
What is different is the scale and speed of what’s possible. Generative AI opens the door to revising entire processes end-to-end. GenAI excels in creative and ideation spaces, helping us visualize new process flows, redesign customer journeys, and accelerate innovation.
For example, you can use genAI to generate a new process flow by uploading existing diagrams or even a video walkthrough of a system. But while AI can accelerate design, it also increases the need for human expertise to guide and challenge the results, such as testing whether each step truly adds value, where automation makes sense and when human interaction is still essential.
AI can’t answer all these questions because not everything is digitized or accurate. When I ran lending operations, the only way to get service-level details across all areas was to ask me or my team directly. AI can accelerate design time, but it still needs people to interpret, refine, and validate.
Avoiding slop and managing risk
Avoiding AI slop starts long before an output is generated. It requires looking back at how we’ve built and governed models and content in the past. Banking has been using AI and algorithms for years in credit and lending models, trading, and risk management. These systems work because they’re governed by checks and balances, including human oversight. Reports are reviewed by internal audit, risk, compliance and regulators. This structure doesn’t disappear with the introduction of more advanced AI. In fact, it becomes even more important.
And it’s not just about reviewing the output. It really begins with how we design the instructions, context and constraints that shape the AI’s behavior. Designing those instructions is now a critical human responsibility. When AI is left to “reason” freely in real time, its responses can be inconsistent, unpredictable and difficult to govern. That level of improvisation might be fine during early design or brainstorming, but it’s not something banks can comfortably rely on in live customer interactions. In production, AI needs to follow structured, governed workflows.
Processes, policies and escalation paths must still be followed. Even when an AI agent is the one that triggers workflows or automations, those processes need to be auditable, explainable and trusted. That’s especially critical in banking, where trust is the foundation of every relationship, and where it can be broken by something as simple as a service error or as complex as a faulty credit decision.
To avoid AI slop and mistakes, there must be expert review. And no, not from another AI agent. While we’re now developing specialized AI agents, such as those trained on regulatory frameworks or sanctions, there must always be a point where human expertise enters the loop.
Think of it like lending decisions. All credit card applications could be processed 100% straight through. But at certain thresholds covering, for example, affordability or default risk, a person steps in to review the decision or sample-check the algorithm’s performance. Teams also monitor variation in approval and decline rates across channels to ensure expected outcomes. The same logic must apply to AI agents. Humans define the task, set the boundaries, determine the workflow and step in wherever nuance or elevated risk demands it.
Keeping humans in the loop
A practical example of successful change and achieving this balance between AI and human expertise can be seen at Santander in Brazil, one of the country’s largest banks with more than 70 million customers. Like most institutions, Santander’s legal operations team manages high-stakes processes that demand both precision and speed: responding to courts, managing case outcomes, and ensuring compliance with strict timelines. Errors can result not only in fines or regulatory issues, but also in lasting damage to client relationships.
To improve both efficiency and accuracy, Santander applied genAI and workflow automation across its legal operations, completely re-imagining the process. Over the course of three months, the bank’s teams worked closely with internal and external legal experts to train and refine the models.
The results were impressive: a 77% reduction in workload, 99% accuracy in interpreting and responding to 200-plus-page legal documents and a 95% service-level attainment.
The key to success wasn’t just the technology itself, but a disciplined approach to training and change management. The bank improved model performance from roughly 66% accuracy to over 99%, while carefully managing the people side of transformation as hundreds of roles evolved.
Delivering on AI’s promise
We have a tremendous opportunity to fundamentally re-engineer and re-imagine how banking operates. But to deliver on that promise, we must apply the right balance of human expertise alongside AI and automation.
When that balance is achieved, we can truly realize the promise of AI: driving better outcomes for customers, employees and the industry as a whole.
Steve Morgan is global banking industry lead at Pega.