From Conversation to Commitment: 3 Things We Learned at Our Responsible AI Workshop
Financial services leaders are defining what responsible AI looks like in real time. Here’s how it can advance financial health.
By Jennifer Tescher, Megan Coffey
How can AI tools be designed and deployed to genuinely improve people’s financial health—especially those already under strain? In December 2025, the Financial Health Network convened leaders from financial services, fintech, financial counseling organizations, nonprofits, and social impact partners for a workshop focused on answering this increasingly urgent question.
The workshop was intentionally practical. Rather than debating AI in the abstract, participants zeroed in on the real financial moments where people struggle most—and where AI, if built responsibly, can make a meaningful difference. The goal was not to chase the newest model or feature, but to align the capabilities of new technology with lived experience, equity, and measurable outcomes. Here’s what we learned.
AI Must Start With Real Financial Needs
Across breakout groups and readouts, one major theme surfaced repeatedly: AI will only advance financial health if it is grounded in the realities of people’s lives and designed to earn their trust, particularly those who are Financially Vulnerable.
To ground this idea, participants identified a concrete set of “jobs to be done” — specific moments where people need support navigating complexity, stress, or high-stakes financial decisions. These ranged from managing debt and smoothing cash flow to recovering from financial shocks, preparing for tax time, and understanding trade-offs between financial products. Pinpointing these moments kept the discussion focused on how AI can support real financial needs.
Just as importantly, the group pushed back on the idea that offering more information alone leads to better outcomes. Many noted that households today already feel overwhelmed by financial tools and advice. AI has the potential to reduce that friction—but only if it is designed to be clear, personalized, and accountable.
Our 2025 Financial Health Pulse® U.S. Trends Report reinforces the urgency for AI solutions grounded in trust, especially with adoption on the rise. Between 2024 and 2025, the share of people who reported using a generative AI chatbot such as ChatGPT or Gemini for financial advice more than doubled, rising from 3% to 7%. Yet consumer trust in AI chatbots for financial advice remains strikingly low: 43% of respondents reported distrusting AI chatbots, while just 12% said they trust them.
“The question is whether AI will reinforce today’s inequities or help build a more stable, navigable system for households.”
Responsible From the Start
Centering consumers requires more than good intentions. It requires prioritizing responsibility in every decision from the very start. Workshop participants emphasized that issues like bias, explainability, and data governance must be addressed during product design, not retrofitted after deployment. Without guardrails, AI risks amplifying the same inequities we already see across income, race, and geography, only at greater speed and scale.
Participants also agreed that voluntary standards, shared expectations, and a clearer definition of what “responsible” looks like are essential. Without these common benchmarks for quality, transparency, and consumer outcomes, it becomes difficult for regulators, funders, or even providers to distinguish between tools that genuinely drive financial health and those that simply optimize engagement or cost savings. Well-defined frameworks for what responsible AI design looks like can help organizations weigh trade-offs and mitigate risks—challenges that exist in all AI tools, no matter how well-intentioned.
Data Gaps Are a Critical Barrier
Fragmented data remains a major barrier to designing and deploying AI tools that truly support financial health. Today, no single dataset captures the full picture of a person’s financial life. Transaction histories, credit information, income volatility, benefits usage, and household context often sit in separate silos or are missing entirely, making it difficult for tools to reflect real-world complexity.
Participants emphasized the need to distinguish between data that is necessary for delivering helpful guidance, and data that introduces privacy or bias risk without meaningful benefit. This conversation underscored the importance of purpose-driven data use: collecting and sharing information only when it demonstrably improves consumer outcomes, and doing so with clear consent and strong protections.
Perhaps most crucially, the group stressed that any effort to build responsible AI must be grounded in data and insights drawn from a broad and diverse set of consumers, not only those who are already financially stable.
Looking Ahead: The Future of Responsible AI
For the Financial Health Network, the workshop marked a shift from exploration to action.
In the months ahead, we look forward to building on these insights in four concrete ways:
- Deepening consumer-informed research. We’ll continue studying how people are already using AI for financial decisions, where trust breaks down, and what design choices matter most for those who are Financially Vulnerable.
- Testing what works in practice. Through partnerships and pilots, we will support responsible experimentation to learn not just what AI can do, but what actually drives measurable financial health outcomes.
- Clarifying data needs and safeguards. We will advance work to identify which data inputs are most valuable for effective financial guidance, and explore models for accessing them responsibly.
- Advancing shared standards. Drawing on our experience developing the FinHealth Standards, we will work with stakeholders to define what “good” looks like for AI-driven financial tools—so that values like equity, transparency, and accountability are measurable, not aspirational.
An Open Invitation for Collaboration
AI is already reshaping financial services. The question is whether AI will reinforce today’s inequities or help build a more stable, navigable system for households. The workshop made clear that the answer depends less on technology itself and more on the commitments we make together.
We see this work as inherently collaborative. Progress will require technologists, financial institutions, policymakers, and advocates moving in concert, guided by evidence, grounded in lived experience, and accountable to outcomes.
This is the path we are committed to advancing in 2026 and beyond. We invite partners across sectors to help shape what comes next—so that AI becomes not just more powerful, but more worthy of people’s trust.
Join us at EMERGE 2026 as we explore how to keep moving toward our shared vision of financial health for all—and dive deep into new strategies for leaders to embrace AI as a tool for scalable, sustainable impact.