The most expensive sentence in your AI agent contract is "you only pay for resolutions."

It sounds like a guarantee. The vendor takes the risk, you only pay when the agent works, and your CFO gets to write a clean line item that scales with usage. Three months later you discover that "resolution" was defined by the vendor, that the agent counts a customer hanging up in frustration as a successful close, and that the same agent now handles 80% of your volume at a marginal cost higher than your offshore BPO.

That is the trap at the center of Sierra AI's pricing model. Sierra is a serious product built by serious people, and the pitch is genuinely compelling. But the contract terms matter more than the demo, and the public reporting on what Sierra actually charges has become detailed enough that buyers can finally do real math before they sign.

Here is what those numbers say, what the hidden risks look like, and when a different approach makes more financial sense.

What Sierra AI Actually Costs in 2026

Sierra does not publish pricing. Every number in this section comes from reverse-engineering reported deals, vendor analyses, and customer disclosures.

| Cost Component | Typical Range | Notes |
| --- | --- | --- |
| Annual platform minimum | $150,000 - $200,000 | Floor for any production deployment |
| One-time implementation | $25,000 - $75,000 | Discovery, integration, agent design |
| Per-resolution fee | $1.00 - $2.50 | Decreases with volume tier |
| Year-one total (typical) | $180,000 - $250,000 | Mid-market consumer brand |
| Year-three total (typical) | $450,000 - $750,000 | Assuming 25% volume growth |

Several public analyses confirm this shape. Ringg AI's pricing breakdown and Lorikeet's competitive analysis both put the entry point near $150K with implementation fees on top. Quiq's 2026 explainer and CallBotics' review reach similar conclusions independently. The consistency across sources makes the floor reliable, even if individual deals vary.

The interesting number is not the headline. It is the per-resolution fee multiplied by your real ticket volume. A consumer brand handling 40,000 monthly support tickets at a 65% deflection rate is paying for roughly 26,000 resolutions per month. At $1.50 per resolution, that is $39,000 monthly, or $468,000 annually, on top of the platform minimum and implementation fee. The "you only pay for what works" framing dissolves quickly at scale.
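That arithmetic is worth making explicit so you can swap in your own figures. A minimal sketch using the example numbers above (all of which are this article's assumptions, not vendor-published rates):

```python
# Back-of-envelope Sierra-style cost model using the article's example
# figures. Every input is an assumption from the text, not a quoted rate.

MONTHLY_TICKETS = 40_000
DEFLECTION_RATE = 0.65        # share of tickets the agent resolves
PER_RESOLUTION_FEE = 1.50     # USD, mid-range estimate
PLATFORM_MINIMUM = 175_000    # USD/year, midpoint of the reported floor
IMPLEMENTATION = 50_000       # USD one-time, midpoint of reported range

monthly_resolutions = MONTHLY_TICKETS * DEFLECTION_RATE
monthly_usage = monthly_resolutions * PER_RESOLUTION_FEE
annual_usage = monthly_usage * 12
year_one_total = annual_usage + PLATFORM_MINIMUM + IMPLEMENTATION

print(f"{monthly_resolutions:,.0f} resolutions/month")  # 26,000
print(f"${monthly_usage:,.0f}/month in usage fees")     # $39,000
print(f"${annual_usage:,.0f}/year in usage fees")       # $468,000
print(f"${year_one_total:,.0f} year-one total")         # $693,000
```

Change the deflection rate or fee tier and the year-one total moves by six figures, which is exactly why the resolution definition clause matters so much.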

The Resolution Definition Problem

Every pricing analysis I have seen flags the same issue: how Sierra defines a resolution is the single most consequential clause in the contract.

A resolution can mean any of these things, and which one applies depends on what your procurement team negotiates:

  • A conversation closed without human handoff
  • A customer issue solved end-to-end with confirmed satisfaction
  • Any interaction that does not escalate within a defined window
  • A successful action taken (refund issued, order modified, account updated)

The first definition is generous to the vendor. A frustrated customer who gives up counts as a resolution, as does a confused customer who asks the same question three times in different sessions. The fourth definition is generous to the buyer, but vendors resist it because confirmed outcomes are harder to instrument and verify than the mere absence of an escalation.

MyAskAI's Sierra guide puts it bluntly: the resolution definition is where the actual margin lives. Get this wrong and you are paying outcome-based pricing for what is effectively a chatbot deflection metric.

Why Outcome-Based Pricing Sounds Better Than It Performs

The pitch for outcome-based pricing leans on a real economic insight. Software pricing has always struggled to align vendor incentives with buyer success. Per-seat pricing rewards the vendor whether or not the product works. Per-resolution pricing seems to fix this by tying revenue directly to value delivered.

The problem is that the model only works cleanly when three conditions hold:

  1. The unit of value is unambiguous and externally verifiable.
  2. The buyer's marginal cost without the vendor is well understood.
  3. The vendor cannot influence the volume of units billed.

None of these conditions reliably hold in customer support automation. Resolution definitions are negotiated, not externally verified. Most companies do not know their true marginal ticket cost (it varies by channel, time of day, and complexity tier). And vendors absolutely influence volume, because the agent itself decides whether to escalate or close.

Pickaxe's analysis of AI agent pricing models walks through the structural issues across the category. Their conclusion mirrors what I see in client engagements: outcome-based pricing tends to favor buyers in the first six months (when the agent is still learning and billed volume is low) and to work against them in years two and three (when the agent is good enough that billed volume scales faster than anyone forecast).

The cruelest part of the curve is that success makes you poorer. Every percentage point of deflection improvement increases your bill. Compare this to a flat-fee custom build, where improving the agent reduces your effective per-resolution cost. The incentives invert.
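A toy model makes the inversion concrete. Every figure below is illustrative, including the hypothetical $20,000/month all-in cost for a flat-fee custom build:

```python
# Toy model of how improving deflection moves costs under the two pricing
# structures. All figures are illustrative assumptions, not quotes.

MONTHLY_TICKETS = 30_000
PER_RESOLUTION_FEE = 1.50          # outcome-based usage fee (USD)
PLATFORM_MINIMUM = 175_000 / 12    # outcome-based monthly platform floor
FLAT_MONTHLY_COST = 20_000         # hypothetical flat-fee build: amortized + API

for deflection in (0.40, 0.60, 0.80):
    resolutions = MONTHLY_TICKETS * deflection
    outcome_bill = PLATFORM_MINIMUM + resolutions * PER_RESOLUTION_FEE
    # The outcome-based total bill rises with every deflection gain,
    # while the flat-fee effective cost per resolution falls.
    print(f"deflection {deflection:.0%}: "
          f"outcome-based ${outcome_bill:,.0f}/mo total, "
          f"flat-fee ${FLAT_MONTHLY_COST / resolutions:.2f}/resolution")
```

Under these assumptions, moving from 40% to 80% deflection raises the outcome-based bill by roughly $18,000 a month while cutting the flat-fee cost per resolution roughly in half.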

When Outcome-Based Actually Wins

Outcome-based pricing is not a scam. It is a tool with a narrow window of fit. It works when:

  • You are early in your AI deployment and unsure of volume
  • Your support flows are generic enough that the vendor's prebuilt patterns fit cleanly
  • Your ticket volume sits in the middle range (10,000 to 50,000 per month)
  • You do not have engineering capacity to build or maintain custom systems
  • Your CFO needs variable-cost framing for board reporting

Outside this window, the model usually loses to either flat-fee custom builds (above 50,000 monthly tickets) or to per-seat platforms with included AI (below 10,000).
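Those volume thresholds reduce to a simple triage rule. The cutoffs are this article's heuristics, not hard breakeven points, and the other criteria (engineering capacity, workflow complexity) still apply:

```python
# Triage rule encoding the volume thresholds above.
# Cutoffs are heuristics from this article, not computed breakevens.

def pricing_model_fit(monthly_tickets: int) -> str:
    if monthly_tickets > 50_000:
        return "flat-fee custom build"
    if monthly_tickets < 10_000:
        return "per-seat platform with included AI"
    return "outcome-based window (evaluate vendor terms)"

print(pricing_model_fit(30_000))  # outcome-based window (evaluate vendor terms)
```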

The Real Comparison: Sierra vs. Decagon vs. Intercom Fin vs. Custom

Pricing comparisons get distorted when vendors are measured at different scale points. The fairest test is total cost of ownership at three years for a representative mid-market deployment: 30,000 monthly tickets, 60% target deflection, 25% annual volume growth.

| Vendor | Pricing Model | 3-Year TCO Estimate | Best For |
| --- | --- | --- | --- |
| Sierra AI | Outcome-based + minimums | $550K - $750K | High-volume consumer support with generic flows |
| Decagon | Outcome-based, more transparent | $400K - $600K | Mid-market with negotiating leverage |
| Intercom Fin | Per-resolution within Intercom | $250K - $450K | Existing Intercom customers, lighter complexity |
| Zendesk AI | Per-resolution add-on | $300K - $500K | Existing Zendesk customers |
| Lorikeet | Outcome-based, niche-focused | $350K - $500K | Fintech and regulated verticals |
| Custom build | Flat fee + API costs | $200K - $400K | Complex workflows, regulated industries, or strong engineering teams |

The numbers will move with your specific volume curve, but the rank order tends to hold. Sierra commands a premium because its prebuilt agent patterns, voice capabilities, and brand reputation reduce time-to-value for the right buyer. The premium is real money, and whether it is worth paying depends on what you would otherwise spend building or integrating.
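These estimates cannot be re-derived from list rates alone, because negotiated discounts and included-usage tiers are never disclosed. The more useful exercise is templating the arithmetic with your own quoted terms. A sketch, where every input is a placeholder and the example call uses hypothetical negotiated terms, not a quote:

```python
# Template for the three-year TCO arithmetic behind the comparison above.
# Replace every input with your own quoted terms; the example values are
# hypothetical negotiated rates, not any vendor's published pricing.

def three_year_tco(implementation: float, platform_minimum: float,
                   per_resolution_fee: float, monthly_tickets: float,
                   deflection: float, annual_growth: float) -> float:
    """Implementation cost plus three years of platform and usage fees."""
    total = implementation
    for _ in range(3):
        annual_resolutions = monthly_tickets * 12 * deflection
        total += platform_minimum + annual_resolutions * per_resolution_fee
        monthly_tickets *= 1 + annual_growth   # volume grows each year
    return total

# 30K monthly tickets, 60% deflection, 25% growth, hypothetical deal terms:
cost = three_year_tco(50_000, 100_000, 0.40, 30_000, 0.60, 0.25)
print(f"${cost:,.0f}")  # roughly $679,400 with these inputs
```

Running it against each vendor's actual quoted minimum and effective fee is the fastest way to turn the table above into a decision.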

For technical context on what custom actually requires, Anthropic's guide to building effective agents is the clearest playbook published. Most production support agents map to one of five workflow patterns documented there, none of which require Sierra-scale tooling to implement.

The Hidden Risks Beyond Headline Pricing

Three risks rarely show up in vendor demos but reliably surface in year two of Sierra deployments.

Lock-In Through Conversation Data

The agent learns from your customer interactions. That learning lives inside Sierra's platform, not yours. When you decide to switch vendors or move to custom, you do not get the trained agent. You get conversation logs at best. The transition cost is effectively a second implementation, which is why churn from outcome-based vendors tends to lag dissatisfaction by 12 to 18 months. Customers stay longer than they want to because leaving is expensive.

Definition Drift in Renewals

Resolution definitions are renegotiated at renewal. If your agent has been performing well, the vendor has data showing how to tighten the definition in their favor. If it has been performing poorly, the vendor has leverage to bundle additional services as a "fix." Either way, your year-two pricing rarely matches your year-one pricing in real per-unit terms.

Compliance and Audit Gaps

For regulated industries (healthcare, financial services, legal), the vendor's training data handling, model selection, and audit logging are critical. Sierra is sophisticated here, but the standard contract pushes most data residency and audit trail decisions to enterprise tier add-ons. If you need SOC 2 evidence specific to your AI agent's decisions, expect that to cost extra and require legal review. Simon Willison's writing on the lethal trifecta is required reading before signing any agent vendor contract; the data exfiltration risks he describes apply directly to support agents handling PII.

When Sierra AI Is Actually the Right Call

I want to be fair here. Sierra is the right answer for some buyers, and I have recommended it to clients who fit the profile.

You should seriously evaluate Sierra if:

  • You run a consumer brand with 50,000+ monthly support tickets
  • Your support flows are largely standardized (returns, order status, account questions)
  • You have voice channel volume that justifies their investment in voice agents
  • You do not have or want engineering capacity to maintain a custom system
  • Your finance team specifically wants variable-cost AI spending
  • You can negotiate hard on the resolution definition

If you check four or more of those boxes, the premium is probably worth it. The product is genuinely good, and the alternative (building it yourself badly) is worse.

You should look elsewhere if:

  • Your volume sits below 25,000 monthly tickets
  • Your workflows are complex, regulated, or industry-specific
  • You need full ownership of the agent and its training data
  • Your existing platform (Intercom, Zendesk, Salesforce) has competent native AI
  • You have engineering capacity and want compounding leverage from custom systems

How OpenNash Can Help

For buyers in the second category, OpenNash builds custom AI agents that you own end-to-end. The model is flat-fee implementation plus your own API costs, which inverts the outcome-based curve: as the agent improves, your effective cost per resolution drops instead of rising.

Our process maps to the audit, design, build, and deployment stages most buyers expect from a serious implementation partner. We define resolution criteria up front based on your business outcomes, not vendor metrics. We build human-in-the-loop guardrails into the workflow, not bolted on as an afterthought. And the agent, the prompts, the tooling, and the eval harness all become your IP after deployment.

This is not always the right answer. If you fit the Sierra profile, we will tell you. If you would be better served by Intercom Fin's native pricing or by a transparent platform like Lorikeet, we will say that too. The credible angle for custom is ownership, predictable cost curves, workflow fit, and auditability, not "we are cheaper than Sierra."

Book a call if you want to map your specific volume, complexity, and regulatory profile against the real three-year cost of each option. We have run this comparison enough times that the answer usually becomes obvious within an hour.

The right pricing model is the one where your incentives and the vendor's incentives point in the same direction. With Sierra, that alignment exists for a specific buyer profile. With custom builds, it exists for a different one. The expensive mistake is assuming outcome-based pricing automatically aligns incentives when it often does the opposite.