There is a quiet but consequential shift happening inside the world’s payment infrastructure — and most people have no idea it’s coming. Visa, one of the most powerful financial networks on the planet, is actively redesigning how transactions work for a future where the entity initiating a payment isn’t a human being at all. It’s software. And that changes almost everything about how we think about money, identity, and trust in financial systems.
The Payment Model We’ve Always Known Is About to Break
Every card transaction today follows the same basic logic: a person decides to buy something, confirms their identity, and a network processes the payment. That sequence — human intent, human authentication, human authorization — is baked into the foundation of modern banking. It’s why you get a fraud alert when a purchase looks unusual. The system is designed to detect when something doesn’t match expected human behavior.
Now consider what happens when the buyer isn’t human. Visa’s new “Agentic Ready” programme, currently being tested across Europe with partners including Commerzbank and DZ Bank, is exploring exactly that scenario. The goal is to build payment infrastructure that can handle transactions initiated entirely by AI agents acting on behalf of users — without a person clicking “confirm” at checkout.
What an AI-Initiated Transaction Actually Looks Like
Think of it like setting up a very sophisticated standing order. You tell an AI agent: “Reorder office supplies when stock drops below a certain level, and only buy from vendors who can deliver within 48 hours at a price under X.” The agent then monitors inventory, compares prices across suppliers, and completes the purchase when the conditions are met — no human intervention required.
This isn’t science fiction. It’s closer to what enterprise procurement teams already do manually, compressed into an automated loop. The difference is that the AI agent is the one pressing “buy.” From the payment network’s perspective, that raises an immediate and serious question: how do you verify the identity and intent of a piece of software?
Why Identity and Authentication Are the Real Problem
Current payment systems are built around proving that a real, authorized person is behind a transaction. Chip-and-PIN, biometrics, two-factor authentication — all of these exist to confirm human presence and consent. When an AI agent initiates a transaction, that entire verification layer needs to be reimagined at the system level.
Visa’s programme is focused on defining exactly this: how an agent proves it is acting within the boundaries set by a user, and how much autonomy it can exercise before human oversight kicks in. This is less a technology problem than a trust architecture problem. The financial system needs a new language for expressing delegated authority — one that regulators, banks, and consumers can all understand and verify.
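One way to picture a "language for delegated authority" is as a user-granted mandate that the network can check on every transaction. The structure below is purely illustrative, not Visa's design; the field names, the idea of a per-transaction cap, and the category allowlist are all assumptions made for the sake of the sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Mandate:
    """A user-signed grant of spending authority to one agent (illustrative)."""
    agent_id: str
    max_amount_per_txn: float
    allowed_categories: frozenset
    expires_at: datetime

def authorize(mandate: Mandate, agent_id: str, amount: float,
              category: str, now: datetime) -> bool:
    """Network-side check: is this transaction inside the delegated boundary?"""
    return (agent_id == mandate.agent_id
            and now < mandate.expires_at
            and amount <= mandate.max_amount_per_txn
            and category in mandate.allowed_categories)

mandate = Mandate(
    agent_id="agent-1",
    max_amount_per_txn=50.0,
    allowed_categories=frozenset({"office_supplies"}),
    expires_at=datetime(2030, 1, 1, tzinfo=timezone.utc),
)
now = datetime(2029, 6, 1, tzinfo=timezone.utc)
print(authorize(mandate, "agent-1", 20.0, "office_supplies", now))
```

In a real system the mandate would be cryptographically signed and revocable, but even this toy version shows the shape of the problem: authority becomes data that regulators, banks, and users can all inspect.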
The Compliance Wall Banks Must Climb
Banking is among the most regulated industries on earth, and for good reason. Audit trails, fraud detection, customer consent frameworks, anti-money laundering checks — all of these were designed with human actors in mind. Introducing autonomous software into the transaction chain doesn’t eliminate those requirements; it makes them significantly harder to satisfy.
A recent RepRisk report underscores the stakes: AI-related incidents in banking are already leading to multi-million-dollar losses. Commerzbank and DZ Bank, both involved in Visa’s early trials, are working through how AI agents can be integrated without creating compliance blind spots. That includes figuring out what an audit trail looks like when the decision-maker is an algorithm, and how disputes are resolved when no human made the call.
AI Agents in Enterprise Purchasing: Efficiency With Sharp Edges
For large organizations, procurement is a slow, approval-heavy process. A single purchase order can pass through multiple departments before it’s authorized. AI agents could dramatically compress that workflow by handling routine purchases within pre-defined spending limits — reducing administrative overhead and accelerating supply chains.
But efficiency without guardrails is a liability. Without clear rules about what an agent is permitted to buy, at what price, and from which vendors, the risk of errors, manipulation, or unauthorized spending grows quickly. This is why the most important work happening right now isn’t in the AI models themselves — it’s in the policy and rule-setting frameworks that govern what those models are allowed to do.
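What might such a rule-setting framework look like in practice? A minimal sketch follows, combining a vendor allowlist, a per-order cap, a running budget, and the audit trail the compliance section above calls for. All class and field names are hypothetical; no vendor's actual policy engine works this way as far as the public record shows.

```python
class SpendPolicy:
    """Illustrative guardrail layer sitting between an agent and the payment rail."""

    def __init__(self, vendor_allowlist, per_order_cap, monthly_budget):
        self.vendor_allowlist = set(vendor_allowlist)
        self.per_order_cap = per_order_cap
        self.remaining_budget = monthly_budget
        self.audit_log = []  # every decision is recorded, approved or denied

    def review(self, vendor, amount):
        """Approve or deny one agent-initiated purchase, and log the decision."""
        approved = (vendor in self.vendor_allowlist
                    and amount <= self.per_order_cap
                    and amount <= self.remaining_budget)
        self.audit_log.append(
            {"vendor": vendor, "amount": amount, "approved": approved})
        if approved:
            self.remaining_budget -= amount
        return approved

policy = SpendPolicy(["VendorA"], per_order_cap=500.0, monthly_budget=1000.0)
print(policy.review("VendorA", 400.0))  # within all limits
print(policy.review("VendorB", 100.0))  # vendor not on the allowlist
```

Note that denied requests are logged too: in a regulated setting, the record of what an agent tried and was refused is as important as the record of what it bought.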
| Dimension | Traditional Payment Model | Agentic Payment Model |
|---|---|---|
| Transaction Initiator | Human customer | AI software agent |
| Authorization Method | PIN, biometrics, 2FA | Rule-based delegation + system authentication |
| Human Oversight | Real-time confirmation | Pre-set rules, post-event review |
| Fraud Detection | Behavioral anomaly vs. human patterns | Must redefine “normal” for agent behavior |
| Compliance Challenge | Customer consent tracking | Delegated authority + audit trail for AI decisions |
| Key Players Testing | Established card networks | Visa, Commerzbank, DZ Bank (Europe, 2025–26) |
How Big Is This, Really? A Historical Comparison That Helps
Visa itself has compared the shift to the early days of online payments — a moment when banks had to adapt their entire transaction infrastructure to handle a completely new type of payment flow. That analogy is worth sitting with. When e-commerce emerged, the question wasn’t just “can the technology work?” It was “can the regulatory, fraud, and identity systems keep up?” They eventually did, but it took years of iteration, failed experiments, and new legal frameworks.
Agentic payments are likely to follow a similar arc. The technology is moving faster than the governance structures around it. Visa’s programme is, in part, an attempt to get ahead of that gap — to build the infrastructure scaffolding before AI agents become widespread consumer and enterprise tools.
What the Next 12–24 Months Will Reveal
The immediate horizon is about controlled experimentation. European banks are the testing ground precisely because the EU’s regulatory environment — particularly around AI accountability and financial oversight — forces rigorous documentation. What works here will likely become the template for global rollout.
Within the next two years, I expect we’ll see the first standardized frameworks for “agent identity” emerge from financial regulators, much like payment security standards were codified after the early e-commerce era. We’ll also see the first significant fraud cases involving AI agents, which will accelerate that regulatory urgency. The banks and networks that have already done this groundwork — defining agent permissions, building audit-ready systems, and stress-testing dispute resolution — will be positioned to move fast when the broader market opens up.
Payments are about to become one of the first real-world proving grounds for agentic AI at scale. If you want to understand where AI autonomy is actually heading — not in theory, but in practice — watch what happens inside the banking system over the next 24 months. It will tell you more than any tech conference keynote ever could. If this kind of deep-dive analysis is useful to you, explore more of our coverage on agentic AI and enterprise automation right here on STI2.