
Blind Spots in the Code: Centering Financial Health in AI’s Next Chapter

AI is already transforming money management, but we have work to do to ensure it truly supports people’s financial decision-making and advances their financial health.

By Marisa Walster, Taylor C. Nelms

Thursday, August 7, 2025

Artificial intelligence (AI) is already reshaping how people manage their money — from budgeting apps with AI chatbots, to credit-building platforms using machine learning (ML), to virtual assistants that help with bill reminders, spending alerts, or goal tracking. There are even some emerging or more experimental use cases: AI “companions” that provide emotional support to reduce anxieties about money or AI-driven tools for debt negotiation and long-term financial planning. Ask ChatGPT and the platform will tell you that personal finance is “consistently among the top 5 topics users ask about.” (At least, this is what it told us!)

But beneath the hype lies a quieter reality: When it comes to real-life financial decisions, today’s AI tools often fall short.

Don’t believe it? Try prompting your favorite AI:

“Should I pay off my credit card or save for an emergency?”

Or: “How much should I contribute to my 401(k)?”

These questions aren’t abstract hypotheticals. They reflect everyday financial stressors — and people are increasingly turning to AI to make sense of them. In early 2025, OpenAI reported over 200 million weekly users of ChatGPT, many of whom are likely asking questions about their financial lives. New tools like Cleo, a purpose-built AI assistant that uses OpenAI’s o3 model, promise to “make conversations about money smarter and more personal,” “reason proactively about your finances,” and “deliver financial coaching that feels genuinely human.”

Why? Generative AI tools are fast. They are seemingly private and judgment-free. And for some, they may feel more accessible than turning to a financial advisor or calling a bank.

But here’s the catch: AI doesn’t know enough yet. Tools like ChatGPT or other financial bots aren’t built to understand the nuance of real-life financial decisions or tradeoffs. They often provide generic or even misleading answers to deeply personal decisions. That’s the danger for consumers — and the opportunity for institutions.

AI Isn’t Ready for Real Financial Lives

A recent analysis by Vals AI tested 22 AI models on more than 500 finance-related questions to assess skills such as market research and financial projections. The results?

    • Not one scored above 50% accuracy.
    • Many failed at simple tasks like retrieving financial information, leading to inaccurate answers.
    • Even more advanced models had trouble with tasks like comparing financial trends over time and cost as much as $3.86 per query.

There’s no peer review. No standard benchmarks. No transparency in training or evaluation. Real financial lives are complex. Financial decisions cannot be made in a vacuum; they depend not only on the details of your budget but also on your hopes, aspirations, worries, and the contours of your work and family life.

AI tools can only work with what they know, which is to say the data they’ve been trained on and the inputs their users provide. As one researcher put it in The Washington Post, today’s AI financial tools are built on “evaluation by vibes.” For life-altering financial decisions, that’s not enough.

We see a different future for AI in personal finance. 

While today’s conversation often frames AI as a standalone tool — something you ask for advice or use to complete a task — what’s coming is much bigger. AI is rapidly becoming embedded in the platforms, systems, and institutions that shape financial lives: underwriting engines, customer service bots, benefits systems, budgeting apps, fraud detection, and more. In other words, AI won’t just be a website or app you open up to ask questions about your finances — it will become a part of the environment in which people experience money and banking and through which their financial health is shaped. 

That makes it even more essential to get this right.

Trust Is Low, But the Potential Is Real

At Financial Health Network, we’ve started to look into the drivers and roadblocks of consumer AI adoption when it comes to personal finance. Earlier this year, we shared early findings showing that trust remains low, slowing uptake. Among respondents to our nationally representative Financial Health Pulse® survey who reported having heard of chatbots, 31% are not using these tools because they don’t trust the information (vs. only 20% who aren’t using them because they don’t know how).

We’re not the only ones seeing this trend. 2024 data from Morningstar’s Voice of the Investor indicates that only about one-third of U.S. investors trust AI to provide sound financial advice. The top concerns? Privacy, judgment, and empathy. And according to 2025 Morningstar research, even when AI is being used by professional financial advisors, consumers and investors want to see safeguards: data protections, transparency as to how AI is being used, human oversight and agency, and assurances that the AI is unbiased.

Still, those who use it do see benefits — especially in speed, access, and objectivity. There’s clearly potential, but the gap between promise and reality remains wide. That’s the paradox: AI could become a powerful tool for delivering trusted financial guidance but only if we build it with that outcome in mind.

Can AI Actually Advance Financial Health?

Yes! If we build it with intention.

Today, too many consumers face a financial system that’s complex, fragmented, confusing, and inequitable. Trustworthy, personalized financial guidance is increasingly difficult to find and evaluate – especially for those navigating low incomes, gig work, burdensome debt, or public benefits. Human advisors can be costly, aren’t always available when people need help most, and – especially for the underserved – are often perceived as bringing unconscious bias.

AI could help fill these gaps. Think of how robo-advisors revolutionized investment management over the past decade – offering lower-cost, automated portfolio guidance that made wealth-building tools more accessible to the average investor. 

Imagine what’s possible if we bring a similar approach – or one that’s even more committed to transparency, trust, and shared opportunity and prosperity – to everyday financial decisions like managing bills, building savings, navigating public benefits, or planning for retirement. AI has the potential to deliver consistent, scalable, and judgment-free guidance, reaching people who have long been left out of traditional systems. It can work 24/7. It can personalize support. And with the right design, it might even be more fair than the systems we rely on today – and thus worthy of our trust.

But that outcome isn’t guaranteed. Right now, most financial AI tools are trained on incomplete data, shaped by outdated assumptions, judged by vague internal metrics, and built by a range of institutions with varying incentives – not all of them with consumer well-being front and center. That’s not innovation – it’s automation without accountability.

To realize AI’s potential as a tool for financial health, we need to flip the script:

    • Define clear, measurable standards for AI-driven financial guidance
    • Train AI tools on data that reflects diverse financial realities
    • Test AI tools in real-world contexts with real people
    • Evaluate them based on their ability to improve real outcomes – not just generate confident answers
    • Center consumer well-being – not just efficiency or profitability – in every design decision
    • Build in transparency, equity, and accountability from the start

This is how we turn AI into a force for good in consumer finance – not through vague intentions, but through concrete action grounded in people’s real financial lives. AI can’t fix financial health on its own. But it can become part of the solution – if we choose to build it that way.

The Real Risk: Scaling Harm

The communities most at risk of financial instability and predatory products are also most vulnerable to flawed AI.

Families with lower incomes. Communities of color. Households managing volatile income, benefits eligibility, or debt. If tools are built on incomplete or biased data – or offer guidance that fails to account for real-life context or is simply hallucinated from spurious patterns in training data – they’ll reinforce existing disparities rather than reduce them. Without intervention, we risk codifying confusion, bias, and misinformation at scale.

This isn’t a distant risk; we expect it’s already happening. That’s why we need to act now to build the infrastructure that will make AI not only safer, but truly effective in advancing financial health.

We have a choice. We can let AI evolve without direction and attempt to embed financial health into systems that are opaque, biased, or incomplete. Or we can act now to ensure that as AI becomes the very infrastructure of financial life, it is oriented to trust and reflects the principles of transparency, inclusion, and well-being.

Financial health isn’t a side goal — it should be the core outcome we build for. The future of financial health depends on building AI that works for everyone – not just because people will use it, but because it will soon be everywhere.

Written by

  • Marisa Walster
    Vice President, Financial Services Solutions
    Financial Health Network
  • Taylor C. Nelms
    Vice President of Research and Insights
    Financial Health Network