The rapid rise of agentic artificial intelligence (AI) is set to transform the financial services industry in Asia-Pacific, but experts warn that banks must proceed carefully to manage the risks that come with it. Moneythor's Chief Product Officer, Vivek Seetharaman, has cautioned that while the technology holds exponential promise, its pitfalls are equally significant if it is not implemented with proper safeguards.
Exponential opportunities and risks
A recent study found that 70% of organisations in Asia-Pacific expect agentic AI to disrupt their business models within the next 18 months. In the banking sector, its impact on productivity and efficiency is projected to exceed expectations over the next three to five years.
Seetharaman explained that agentic AI could help banks achieve what Moneythor calls “Deep Banking”, offering personalised services that anticipate customer needs and extend beyond traditional banking. “Agentic AI will help banks realise the true potential of what we call ‘Deep Banking’: personalised experiences that anticipate customer requirements and have the potential to extend beyond the domain of traditional banking services. In this sense, the impact could certainly be described as ‘exponential’,” he said.
However, he warned that the risks are just as great. These include data errors, such as misclassified financial information, and an increased chance of security breaches as AI agents interact across multiple proprietary and third-party systems. Because these agents learn and make decisions iteratively, mistakes can multiply quickly if the underlying data is flawed.
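The compounding effect Seetharaman describes can be made concrete with a back-of-envelope calculation. The numbers below are assumptions chosen purely for illustration, not figures from Moneythor or the study cited above:

```python
# Back-of-envelope illustration with assumed numbers: if each step of an
# iterative agent workflow is 98% accurate, small per-step errors compound
# across a chain of dependent decisions.

per_step_accuracy = 0.98   # assumed accuracy per decision, for illustration only
steps = 20                 # assumed length of a multi-step agent workflow

# Probability that an entire 20-step chain completes without any error.
chain_accuracy = per_step_accuracy ** steps
print(f"{chain_accuracy:.2f}")
```

Under these assumed figures, roughly a third of twenty-step chains would contain at least one error, which is why flawed underlying data can propagate so quickly through agentic systems.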
The need for strong guardrails
Unlike rule-based systems that rely on fixed instructions, agentic AI learns continuously, adapting in real time. Seetharaman described its adoption in banking as the “antithesis of plug and play” and emphasised that banks must put clear principles in place to ensure success.
He outlined three key areas for focus: guardrails around language and context that apply across large language models, documented governance procedures covering data use across different jurisdictions, and human oversight built into processes at regular intervals. Without these, banks risk undermining customer trust and exposing themselves to serious financial and reputational harm.
Moneythor’s AI suite
Seetharaman’s comments come as Moneythor launches its new AI Suite, which aims to help banks deliver on the promise of Deep Banking. The suite combines several AI technologies, including predictive AI for cash flow forecasting and real-time recommendations, generative AI for creating and adapting customer content, and conversational AI based on the Model Context Protocol, which allows banks to deliver meaningful interactions across different large language models without retraining each one.
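The appeal of a protocol-based approach is that a capability is described once and then adapted to each model's expected format, rather than being rebuilt per model. The sketch below illustrates that model-agnostic idea in simplified form; it is not the actual Model Context Protocol wire format, and the tool and provider schemas are hypothetical:

```python
# Simplified illustration of the model-agnostic idea behind protocols
# such as MCP: define a tool once, then adapt the single definition to
# whatever schema each LLM provider expects. This is NOT the real MCP
# wire format; all names and schemas here are hypothetical.

balance_tool = {
    "name": "get_account_balance",
    "description": "Return the current balance for a customer account.",
    "parameters": {
        "account_id": {"type": "string", "description": "Internal account ID"},
    },
}

def to_provider_a(tool: dict) -> dict:
    """Adapt the shared definition to a hypothetical provider A schema."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": {
                "type": "object",
                "properties": tool["parameters"],
                "required": list(tool["parameters"]),
            },
        },
    }

def to_provider_b(tool: dict) -> dict:
    """Adapt the same definition to a hypothetical provider B schema."""
    return {
        "name": tool["name"],
        "description": tool["description"],
        "input_schema": {"type": "object", "properties": tool["parameters"]},
    }

# One shared definition, two provider-specific formats:
print(to_provider_a(balance_tool)["function"]["name"])
print(to_provider_b(balance_tool)["name"])
```

Because the tool definition lives in one place, supporting an additional large language model means writing a new adapter, not retraining or re-specifying the bank's capabilities for each model.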
“Moneythor has been deploying AI to deliver personalised banking services for more than 13 years; this experience has taught us to acknowledge ‘both sides’ of the exponential equation, particularly when it comes to Agentic AI,” Seetharaman said.
He added that agentic AI is particularly important for customer relationships, as it can scale personalised engagement in a natural and adaptive way. But he concluded that this potential must be matched with careful implementation to avoid unintended consequences.