The rapid integration of artificial intelligence into financial services is creating a governance gap that existing frameworks are ill-equipped to handle. While AI promises to revolutionize transactions, risk management, and financial access, its pace of evolution has outstripped the traditional guardrails designed for a human-driven economy. This disconnect is most starkly illustrated by the unresolved question of accountability when an AI agent errs.
The Accountability Conundrum in AI Failures
Chad Gerhardstein, Trulioo’s Chief Risk and Strategy Officer, writing in the PYMNTS eBook “AI Runs Payments. Governance Decides What Happens Next,” highlights that established liability frameworks for human fraud and misuse do not map neatly onto AI agent failures. When an AI agent acts outside its mandate, it may simply be following its code, yet the real-time consequences across millions of global transactions can create risks that current governance structures cannot adequately address.
Organizations are increasingly investing in sophisticated AI models and automation to mitigate these risks. However, Gerhardstein argues that this focus often overlooks the most critical component of AI governance: humans. The widespread adoption of AI in financial institutions has not been matched by a commensurate investment in human understanding and oversight.
A Critical Gap in AI Literacy
A significant portion of financial professionals possess only a superficial understanding of AI. Many are adept at prompting large language models (LLMs) but lack the deeper knowledge required to explain how these models arrive at their conclusions. Gerhardstein states bluntly that “if the humans responsible for governing artificial intelligence do not know how it works, then governance becomes performative rather than effective.”
Trulioo observes this challenge firsthand as organizations scale identity verification and transaction monitoring globally. Closing this governance gap requires more than basic prompt training. It demands comprehensive AI literacy that spans risk, compliance, and operations, equipping professionals not only with the skills to use AI tools but also with the ability to critically assess and de-risk their outputs.
Stewardship and Training for AI Overseers
Gerhardstein emphasizes that organizations developing verification, authentication, and fraud-prevention systems have a responsibility to act as stewards of these governance frameworks. This stewardship involves providing the necessary levels of training for the human overseers who will be responsible for monitoring AI agents. As agentic transactions become more prevalent, these architectures must function effectively at both the issuer and consumer levels, ensuring that every AI agent operates within the boundaries set by its principal.
The task of establishing robust AI governance is substantial, particularly in financial services and payments, where a single authorization can impact large numbers of people by moving money, extending credit, or denying access. A robust trust framework for AI is therefore imperative.
The Imperative of ‘The Right Humans’
Ultimately, meeting the demands of AI-driven financial operations will require more than simply having humans involved; it will demand “the right humans in the loop.” With considerable attention focused on the potential of AI, it is vital not to neglect the human operators who provide the essential intelligence and oversight necessary for effective and responsible AI deployment. The future of AI governance in finance depends on cultivating a workforce that possesses the deep understanding and critical judgment to guide these powerful technologies.