How Much Should AI Decide Without Asking You?
Abi Malin
In response to my recent piece on AI-powered embedded finance, a reader raised a sharp question about the governance and trust layer:
"As AI agents start making financial decisions autonomously, who defines what an agent is authorized to do, and how do organizations build frameworks to oversee that at scale?" I've been thinking about it ever since.
The question surfaced a tension I didn't fully address in the original piece. I was so focused on the "invisible" value proposition of embedded finance that I undersold the reality that invisibility only works for certain types of decisions.
One of the more important product questions in this space is where consequential financial decisions require explicit user understanding and where autonomous action is acceptable. It's making me rethink where the real differentiation will come from.
I'd frame the decision visibility spectrum around three factors: Consequence × Reversibility × Sophistication.
Routine payments? Invisible. A $50,000 working capital decision? That needs user confirmation, even if the AI pre-qualifies and structures the offer.
The sweet spot isn't binary invisibility — it's contextual autonomy, where the system knows when it has enough trust and information to act versus when it should surface options.
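To make the framework concrete, here is a minimal sketch of how the three factors could combine into a visibility decision. Everything below is hypothetical: the article names the factors (Consequence × Reversibility × Sophistication) but not the scales, the multiplicative scoring, or the thresholds, which are my own illustrative choices.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    consequence: float          # 0.0 (trivial) .. 1.0 (major, e.g. a large credit draw)
    irreversibility: float      # 0.0 (easily undone) .. 1.0 (permanent)
    user_sophistication: float  # 0.0 (novice) .. 1.0 (expert who wants less friction)

def visibility(d: Decision) -> str:
    # Multiplicative score: a decision only stays invisible when every
    # factor says it is safe; one high-risk factor pushes it toward review.
    risk = d.consequence * d.irreversibility * (1.0 - d.user_sophistication)
    if risk < 0.05:
        return "invisible"   # act autonomously, log for audit
    if risk < 0.25:
        return "notify"      # act, but surface a plain-language explanation
    return "confirm"         # pre-qualify and structure the offer, then wait for the user

# A routine payment vs. a $50,000 working-capital decision:
print(visibility(Decision(0.05, 0.2, 0.5)))  # → invisible
print(visibility(Decision(0.9, 0.8, 0.5)))   # → confirm
```

The multiplicative form encodes the "contextual autonomy" point: invisibility isn't a global setting, it's earned per decision, and any single factor can veto it.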
The founders who find the most success will build progressive disclosure models. Early in the relationship, the AI surfaces everything: "Here's why I'm recommending this payment method" or "I noticed your cash flow dips on the 15th — should I automatically move funds?" As the user builds trust and the AI learns preferences, more decisions become autonomous.
Ramp does this particularly well with expense management — starting with suggested categorizations and spend policies that require approval, then gradually automating routine decisions as the system learns company-specific patterns.
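A progressive disclosure model like the one described above can be sketched as a simple trust ladder: every decision category starts in "explain and ask" mode and is promoted to autonomous after a run of consistent approvals, with any rejection resetting trust. This is not Ramp's actual mechanism; the class, category names, and the five-approval threshold are all illustrative assumptions.

```python
from collections import defaultdict

PROMOTE_AFTER = 5  # consecutive approvals before a category goes autonomous (hypothetical)

class AutonomyPolicy:
    def __init__(self):
        self.approvals = defaultdict(int)  # consecutive approvals per category
        self.autonomous = set()            # categories the agent may act on alone

    def mode(self, category: str) -> str:
        return "autonomous" if category in self.autonomous else "ask"

    def record(self, category: str, approved: bool) -> None:
        if approved:
            self.approvals[category] += 1
            if self.approvals[category] >= PROMOTE_AFTER:
                self.autonomous.add(category)
        else:
            # A single rejection resets trust for that category.
            self.approvals[category] = 0
            self.autonomous.discard(category)

policy = AutonomyPolicy()
for _ in range(5):
    policy.record("expense_categorization", approved=True)
print(policy.mode("expense_categorization"))  # → autonomous
print(policy.mode("fund_transfer"))           # → ask
```

The asymmetry is deliberate: trust accrues slowly and breaks instantly, which mirrors how users actually respond to automation errors.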
In the short run, I expect companies will over-index on automation out of excitement and competitive pressure. It seems like every week there's a new headline about tech layoffs, and I suspect business leaders are overestimating what AI can replace today. In the medium term, the pendulum swings back when the limits become clear.
The pattern tends to repeat: early adopters automate too much, trust breaks somewhere visible, and the market corrects toward more selective, thoughtful automation.
The regulatory layer will accelerate this in embedded finance — especially around credit decisions and significant fund movements — forcing companies to build in decision explainability and user consent mechanisms before they would get there voluntarily.
The real competitive moat will belong to companies building felt control today, where users trust the automation precisely because they can inspect it whenever they want. Invisible can't mean opaque. Opacity only works until something goes wrong — an unexpected fee, a declined transaction, a credit decision the user doesn't understand — and then trust evaporates instantly.
The best embedded finance products will feel like having a CFO who handles everything but explains their reasoning when asked.
What I'm watching: how quickly users' tolerance for financial automation grows as they see positive outcomes. The trust threshold isn't fixed — it's trainable. Companies building that trust gradient into their UX today are the ones I want to back.
What's the financial decision you'd never let an AI make without asking you first? I have a hunch the answers will be more revealing than any benchmark. Let's discuss below.