FinTech

From Chatbot to Conversational AI: How Learners Automated 60% of Support for a FinTech Company

Learners developed an LLM-powered conversational AI for a FinTech company, automating 60% of customer queries and reducing response time by 3x — with compliance guardrails built in from day one.

Learners: Sneha Iyer, Rahul Desai, Ananya Reddy


The Challenge

A fast-growing FinTech platform with 180,000 active users had a customer support team of 25 agents handling 3,500 queries per day — everything from “how do I reset my password?” to complex questions about transaction failures, tax implications of investments, and regulatory compliance of their savings products.

The problem wasn’t volume alone. It was the mix. Simple questions (password resets, balance checks, basic FAQ) made up roughly 60% of inbound queries but consumed a disproportionate amount of agent time because they had to be answered one by one. Meanwhile, complex financial questions sat in the queue for 4+ hours, frustrating users who needed urgent help.

The company had a basic rule-based chatbot. It handled password resets and nothing else. Users hated it — the moment they asked anything slightly complex, they got “I’m sorry, I didn’t understand that” and were dumped into the queue. Support satisfaction scores were dropping.

They needed a conversational AI that could actually understand financial queries, give accurate answers grounded in their product documentation, and know when to hand off to a human agent.

Our Approach

Three learners — Sneha, Rahul, and Ananya — tackled this over 10 weeks. The core architecture combined retrieval-augmented generation (RAG) with strict guardrails, because in FinTech, a confident wrong answer isn’t just annoying — it’s a compliance risk.

The RAG Layer. They indexed the company’s entire knowledge base — product docs, FAQ pages, regulatory guidelines, and 12 months of resolved support tickets — into a vector database using chunked embeddings. When a user asks a question, the system retrieves the most relevant document chunks and feeds them as context to the LLM.
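The retrieval step can be sketched as follows. This is a minimal, self-contained illustration: the bag-of-words "embedding" and in-memory search stand in for the real embedding model and vector database, and all function names are illustrative, not the team's actual code.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a production system would use a learned
    # embedding model and store vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(doc: str, size: int = 40) -> list[str]:
    # Split a document into fixed-size word chunks before indexing.
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[tuple[float, str]]:
    # Score every chunk against the query and keep the top k.
    q = embed(query)
    scored = sorted(((cosine(q, embed(c)), c) for c in chunks), reverse=True)
    return scored[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    # Feed the retrieved chunks to the LLM as grounding context.
    context = "\n---\n".join(c for _, c in retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The key design point is that the LLM only ever sees retrieved document text, which is what keeps answers grounded in the company's own documentation.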

The Guardrail System. This was the critical piece. They built a three-layer safety system:

  1. Intent classification — a lightweight classifier determines the query type before any LLM is invoked. Simple FAQs get fast-tracked to pre-written responses without burning LLM tokens.
  2. Compliance filter — any query touching regulated topics (investment advice, tax implications, insurance claims) gets flagged. The system can provide factual information from official docs but explicitly refuses to give personalized financial advice.
  3. Confidence threshold — if the RAG retrieval returns low-confidence matches (similarity score below 0.72), the system escalates to a human agent with the conversation context pre-filled, so the user doesn’t have to repeat themselves.
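The three layers compose into a single routing decision, which can be sketched like this. The 0.72 threshold and the layer order come from the description above; the keyword lists, intent labels, canned answers, and function names are illustrative assumptions (the real intent classifier was a trained model, not a keyword match).

```python
from dataclasses import dataclass

FAQ_ANSWERS = {
    "password_reset": "You can reset your password from Settings > Security.",
}
REGULATED_KEYWORDS = {"invest", "tax", "insurance", "advice"}  # illustrative list
CONFIDENCE_THRESHOLD = 0.72  # similarity floor from the write-up

@dataclass
class Decision:
    route: str    # "faq" | "refuse" | "llm" | "human"
    payload: str

def classify_intent(query: str) -> str:
    # Stand-in for the lightweight intent classifier.
    return "password_reset" if "password" in query.lower() else "other"

def guardrail(query: str, retrieval_score: float) -> Decision:
    intent = classify_intent(query)
    if intent in FAQ_ANSWERS:
        # Layer 1: fast-track simple FAQs, no LLM tokens spent.
        return Decision("faq", FAQ_ANSWERS[intent])
    if REGULATED_KEYWORDS & set(query.lower().split()):
        # Layer 2: regulated topic, refuse personalized advice.
        return Decision("refuse", "I can share official documentation, but I "
                                  "can't give personalized financial advice.")
    if retrieval_score < CONFIDENCE_THRESHOLD:
        # Layer 3: weak retrieval, escalate with context pre-filled.
        return Decision("human", f"Escalated with context: {query}")
    return Decision("llm", query)
```

Ordering matters here: the cheap checks run first, so the LLM is only invoked when the query is non-trivial, unregulated, and well supported by retrieval.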

Response Quality. Every AI response includes source citations — clickable references to the exact document or FAQ it pulled the answer from. This built user trust and made it trivial for the support team to audit AI responses.
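Attaching citations can be as simple as carrying each retrieved chunk's metadata through to the final response. The `doc_id` and `url` fields below are assumptions about the knowledge-base schema, not the company's actual format.

```python
def format_response(answer: str, sources: list[dict]) -> str:
    # Append numbered source references so users (and auditors) can
    # trace each answer back to the exact document it came from.
    citations = "\n".join(
        f"[{i + 1}] {s['doc_id']}: {s['url']}" for i, s in enumerate(sources)
    )
    return f"{answer}\n\nSources:\n{citations}"
```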

Key Metrics

  - 60% of queries automated
  - 3x faster response time
  - 89% user satisfaction (CSAT)
  - Zero compliance incidents

Results & Impact

Within the first month of deployment, the conversational AI was handling 60% of all incoming queries without human intervention. The support team went from drowning in 3,500 daily queries to focusing on the 40% that actually required human judgment — complex disputes, edge cases, and emotionally sensitive situations.

Response time for simple queries dropped from an average of 12 minutes to under 30 seconds, since the AI responds the moment a query arrives. For complex queries that were escalated, the average also improved, from 4 hours to 45 minutes, because agents no longer had to clear the simple queries first.

The compliance guardrails proved their worth immediately. In the first two weeks, the system correctly identified and refused to answer 47 queries that asked for personalized financial advice, routing them to certified financial advisors instead. Not a single compliance incident was reported in the first 6 months.

The CSAT score for AI-handled conversations landed at 89% — higher than the previous human-only score of 82%. Users appreciated the instant responses and the fact that they could see exactly which document the AI was referencing.

The biggest win for the bottom line: the company absorbed 40% user growth without adding a single support agent.

What the Learners Say

"In FinTech, you can't just slap an LLM on a problem and call it done. A wrong answer about someone's transaction could trigger a compliance investigation. Building the guardrail system taught me that production AI is 20% model and 80% safety infrastructure."

— Sneha Iyer

"The compliance filter was the hardest part. We had to define the boundary between 'factual information from our docs' and 'personalized financial advice' so precisely that even the LLM wouldn't cross it. It took 3 weeks of testing with the company's legal team to get it right."

— Rahul Desai

Contact us

Email: tribeofprogrammers@gmail.com Call: +91 7604906337
© 2025 top