Grasshopper Bank: A Case Study in Exploring Trust in AI-Driven Banking
In an exclusive Q&A, Pete Chapman, CTO of digital bank Grasshopper, shares how the company is using AI to build trust with customers.
The all-digital Grasshopper Bank, based in New York City, offers an example of how financial institutions are beginning to use AI in ways that balance innovation with trust. The bank recently garnered coverage in American Banker for its connection to Anthropic's Claude (and more recently OpenAI's ChatGPT), enabled by its Model Context Protocol (MCP) server. While many banks are experimenting behind the scenes, Grasshopper has taken a transparent, client-centered approach, inviting its business banking clients to interact with their financial data using their own external AI tools.
Our own data from the Financial Health Pulse 2025 shows that AI usage for financial advice still skews largely toward early adopters who are younger and more Financially Healthy. Across all populations, however, trust remains low: only 12% of respondents said they trust AI compared to other sources of financial advice. Yet banks are actively working to bridge this gap. At our latest member roundtable, the majority of participants, representing credit unions, banks, and financial services companies, said their organizations are in the pilot or exploration phase of AI implementation.
Grasshopper’s approach demonstrates how a digital-first bank can use AI to explore new forms of transparency and empower clients with greater control over their financial data.
We spoke with Pete Chapman, Grasshopper’s Chief Technology Officer, about the bank’s strategy, lessons learned, and what the future of AI-assisted banking might look like.
Q&A With Pete Chapman
FHN: What is Grasshopper’s big idea with AI?
Chapman: Our goal is to operate with the efficiency and scale of a large institution while maintaining the personalized touch of a community bank. We see AI as an amplifier for our people—not a replacement. Our strategy is built around three pillars: Workforce Enablement, Operational Excellence, and Client Experience & Growth. We started by using AI internally to make our team smarter and faster, and are now extending those lessons to help clients interact with their own financial data safely and meaningfully.
FHN: How is your use of AI tools different from typical banking AI chatbots?
Chapman: Most bank chatbots are closed systems owned by the institution. We took the opposite approach. Our MCP server lets business banking clients securely connect the AI assistant of their choice—such as Claude or ChatGPT—to their Grasshopper accounts in a read-only environment. In practice, that means clients can surface information by asking natural language questions like, “What were my top recurring vendor payments last month?” and get instant, personalized insights. It’s about giving clients more control, transparency, and value while maintaining the robust, bank-grade security that defines leading digital banks.
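Grasshopper's actual implementation (built with Narmi) is not public, so the following is purely an illustrative sketch of the "read-only tool" idea Chapman describes: an MCP-style server exposes only query functions to the AI assistant, so the assistant can answer questions like the recurring-vendor example but has no way to move money. All names and data below are hypothetical.

```python
# Hypothetical sketch of a read-only tool layer, loosely modeled on what an
# MCP server might expose to a client's AI assistant. Illustrative only.
from collections import defaultdict

# Sample data standing in for a business client's transaction feed.
TRANSACTIONS = [
    {"vendor": "Acme Cloud", "amount": 1200.00, "month": "2025-09"},
    {"vendor": "Acme Cloud", "amount": 1200.00, "month": "2025-10"},
    {"vendor": "Office Co", "amount": 350.00, "month": "2025-10"},
]

READ_ONLY_TOOLS = {}

def tool(fn):
    """Register a function as a tool the assistant is allowed to call."""
    READ_ONLY_TOOLS[fn.__name__] = fn
    return fn

@tool
def top_recurring_vendors(month: str, limit: int = 3):
    """Answer questions like 'What were my top vendor payments last month?'"""
    totals = defaultdict(float)
    for tx in TRANSACTIONS:
        if tx["month"] == month:
            totals[tx["vendor"]] += tx["amount"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:limit]

# No transfer or payment tools are ever registered, so the assistant
# structurally cannot initiate transactions: read-only by construction.
print(top_recurring_vendors("2025-10"))
# → [('Acme Cloud', 1200.0), ('Office Co', 350.0)]
```

The design point is that the safety guarantee lives in what the server chooses to expose, not in prompting: if no write-capable tool exists in the registry, no natural-language request can trigger one.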
FHN: Trust is one of the biggest barriers to AI adoption in banking. How are you addressing it?
Chapman: Trust is foundational to our AI strategy. We use human-in-the-loop governance, meaning AI assists but never makes final credit or risk decisions. We publish clear, easily accessible disclosures about risks, roles, and responsibilities, and our AI policy aligns with the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF). The MCP connection is read-only, and all data is encrypted in transit and at rest. Clients opt in, control access, and can revoke it anytime. Transparency, shared responsibility, and proactive safeguards are the cornerstones of how we build and maintain trust, ensuring that clients always feel in control of their financial decisions and data.
FHN: What role do partnerships play in your AI work?
Chapman: Partnerships are essential in driving real innovation. We collaborated with Narmi to launch the first MCP Server by a U.S. bank. They built and managed the MCP server, allowing us to innovate quickly and focus on delivering value to clients without overextending internal resources. The project proved that meaningful progress in AI requires close collaboration between fintech partners and financial institutions. The future of banking innovation is collaborative, not siloed.
FHN: What advice do you have for other banks building or adopting consumer-facing AI tools?
Chapman: Start with governance and strategy before experimentation. Develop a clear AI policy and establish oversight through your risk or ethics committee. Align early with frameworks like the NIST AI RMF.
Once governance is in place, begin small: pilot, learn, and iterate. Many banks try to go too big too soon, which slows innovation. Our decision to connect with Claude and ChatGPT was grounded in a clear business case: serving AI-first companies that value both innovation and trust. As we continue to refine our approach, we plan to expand support to other leading large language models (LLMs), ensuring our clients have choice and flexibility in how they leverage AI for financial insights.
That combination—strong governance and targeted experimentation—is the key to moving fast responsibly.
FHN: How do you respond to questions about data safety and privacy?
Chapman: Data security is non-negotiable. The MCP server was designed with bank-grade protections from the start. The connection is read-only, meaning the AI assistant cannot initiate transactions or alter account details.
All data is encrypted in transit and at rest, and clients must explicitly opt in through a secure authentication process. They can revoke access at any time. Our client agreement makes responsibilities clear—Grasshopper secures the banking systems, and clients maintain control over their chosen AI tools.
We’re building innovation on a foundation of transparency, security, and client empowerment.
FHN: What’s next for AI at Grasshopper?
Chapman: We’re expanding our internal use of Google’s Gemini Enterprise to deepen productivity and insight across teams. For clients, we’ll continue refining secure, conversational access to financial data—making banking feel less like a dashboard and more like a dialogue. Our goal is to combine the efficiency and precision of AI with the empathy of human service—building trust one interaction at a time.
At Financial Health Network, we’re closely watching how financial institutions test and deploy AI and what emerging models can teach us about building trust, transparency, and consumer benefit into this next generation of financial tools. Do you have a use case or case study we should learn more about? We’d love to hear from you: info@finhealthnetwork.org.