
Most B2B deals die in the same place. The champion agrees there is pain. Finance asks for proof. Nobody can quantify the cost of waiting.
That gap is avoidable. You do not need a 40-tab spreadsheet. You need a tight model with explicit assumptions, fast sensitivity checks, and numbers a CFO can audit in five minutes. Here is the framework I use across every AI automation deal I scope.
Build a do-nothing model finance can audit
Start with one rule: model current-state loss first, vendor impact second.
The temptation is to lead with what your product does. Finance does not care about features. Finance cares about what inaction costs the business per quarter. Build the baseline loss first, then show improvement as a delta.
Four inputs that cover 90% of B2B SaaS deals:
- Labor drag: hours per week × loaded hourly cost × team size. If each member of a 6-person ops team spends 11 hours a week on manual reconciliation at $65/hour loaded, that is 11 × $65 × 6 × 52 weeks ≈ $223,080 per year before any automation is applied.
- Error cost: incidents per month × average remediation cost. A single misrouted support ticket that requires 3 hours of engineer time costs $195 at that same loaded rate. At 40 incidents a month, that is $93,600 per year.
- Revenue leakage: missed conversions × average contract value. If inbound leads are going stale because there is no automated follow-up, and 5% of qualified leads time out each month, model that loss against ACV.
- Delay penalty: forecasted initiative slip × monthly opportunity value. Delayed product releases, slow onboarding, or manual compliance reviews all have a measurable clock cost.
Use annualized values so stakeholders can compare with budget cycles. A $7,800/month drag is real but invisible. A $93,600/year drag is a board-level number.
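The four inputs above can be sketched as a small model. This is a minimal sketch with illustrative numbers taken from the examples in this section; the function and field names are my own, and the zeroed leakage and delay inputs are placeholders to fill from your own discovery data.

```python
# Sketch of the do-nothing baseline model. Values are illustrative,
# drawn from the labor-drag and error-cost examples above.

WEEKS_PER_YEAR = 52
MONTHS_PER_YEAR = 12

def annual_do_nothing_cost(
    hours_per_week: float,       # manual effort per person, per week
    loaded_rate: float,          # loaded hourly cost ($)
    team_size: int,
    incidents_per_month: float,
    remediation_cost: float,     # average cost per incident ($)
    leaked_deals_per_year: float,
    acv: float,                  # average contract value ($)
    delay_months: float,         # forecasted initiative slip
    monthly_opportunity: float,  # opportunity value per month of delay ($)
) -> dict:
    labor = hours_per_week * loaded_rate * team_size * WEEKS_PER_YEAR
    errors = incidents_per_month * remediation_cost * MONTHS_PER_YEAR
    leakage = leaked_deals_per_year * acv
    delay = delay_months * monthly_opportunity
    return {
        "labor_drag": labor,
        "error_cost": errors,
        "revenue_leakage": leakage,
        "delay_penalty": delay,
        "total": labor + errors + leakage + delay,
    }

baseline = annual_do_nothing_cost(
    hours_per_week=11, loaded_rate=65, team_size=6,
    incidents_per_month=40, remediation_cost=195,
    leaked_deals_per_year=0, acv=0,       # fill from CRM export
    delay_months=0, monthly_opportunity=0,  # fill from roadmap
)
print(f"Labor drag: ${baseline['labor_drag']:,.0f}/yr")  # $223,080/yr
print(f"Error cost: ${baseline['error_cost']:,.0f}/yr")  # $93,600/yr
```

Keeping each loss category as a separate line item matters later: finance will challenge inputs one at a time, and an itemized model lets one disputed input change without rebuilding the whole number.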
Stress-test assumptions before procurement does it for you
Weak ROI models fail because they hide uncertainty. Strong models expose it deliberately.
Run three scenarios and put them in a table. This signals to finance that you have already done the sensitivity analysis they would ask for:
| Scenario | Annual Current-State Cost | Expected Improvement | Net Annual Gain | Payback |
|---|---|---|---|---|
| Conservative | $220,000 | 18% | $39,600 | 13 months |
| Expected | $220,000 | 32% | $70,400 | 8 months |
| Aggressive | $220,000 | 45% | $99,000 | 6 months |
If your conservative case cannot clear payback targets, the deal is probably not real yet — or the contract value needs restructuring. I have walked away from proposals where the conservative scenario yielded a 26-month payback on a 12-month contract. The numbers were true, but they were not useful.
The table format also changes the dynamic in procurement meetings. Instead of defending a single number, you are presenting a range and letting finance choose which scenario fits their risk tolerance.
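The scenario table is cheap to generate programmatically, which is useful when a stakeholder wants to swap an input mid-meeting. A minimal sketch follows; note that the $45,000 annual contract cost is a hypothetical placeholder I have assumed for illustration, since the table above does not state one, and it lands close to (not exactly on) the table's rounded payback figures.

```python
# Sketch of the conservative/expected/aggressive sensitivity table.
# CONTRACT_COST is a hypothetical placeholder, not a figure from the article.

ANNUAL_BASELINE = 220_000  # current-state cost from the do-nothing model
CONTRACT_COST = 45_000     # assumed annual contract cost (illustrative)

scenarios = {"Conservative": 0.18, "Expected": 0.32, "Aggressive": 0.45}

print(f"{'Scenario':<14}{'Net Annual Gain':>16}{'Payback (mo)':>14}")
for name, improvement in scenarios.items():
    gain = ANNUAL_BASELINE * improvement
    payback = CONTRACT_COST / (gain / 12)  # months to recover contract cost
    print(f"{name:<14}{'$' + format(gain, ',.0f'):>16}{payback:>14.1f}")
```

Running it with the 18% conservative case yields a $39,600 gain and roughly a 13.6-month payback, which is how you catch the "conservative case cannot clear the contract term" problem before procurement does.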
Tie numbers to discovery evidence, not generic benchmarks
A model is only as credible as its source inputs. Generic industry benchmarks ("companies waste 30% of their time on manual tasks") get challenged immediately. Discovery evidence does not.
Anchor each input to evidence the buyer already owns:
- CRM exports for cycle time and conversion lag
- Support ticket history for error frequency
- Team interviews for manual handoff effort
- Existing BI dashboards for baseline throughput
Document every assumption in one line. Example:
- Assumption: SDR handoff rework averages 2.2 hours per qualified lead
- Source: RevOps sample of 40 leads from Q1
- Confidence: medium (single-quarter sample, may understate peak-period load)
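If you keep assumptions in a model rather than a slide, the same one-line format can be made machine-readable. This is just one possible shape; the class and field names are my own, mirroring the SDR handoff example above.

```python
# One way to keep the assumption / source / confidence format structured
# so every model input carries its own provenance.
from dataclasses import dataclass

@dataclass
class Assumption:
    claim: str        # the input being asserted
    source: str       # discovery evidence it is anchored to
    confidence: str   # low / medium / high, with a caveat

    def one_liner(self) -> str:
        return (f"Assumption: {self.claim} | Source: {self.source} | "
                f"Confidence: {self.confidence}")

rework = Assumption(
    claim="SDR handoff rework averages 2.2 hours per qualified lead",
    source="RevOps sample of 40 leads from Q1",
    confidence="medium (single-quarter sample, may understate peak-period load)",
)
print(rework.one_liner())
```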
This format does three things. It makes legal and finance less likely to block on "unverifiable claims." It shows the buyer you did actual discovery rather than plugging in guesses. And it gives you a place to revisit assumptions if the champion needs to revise inputs after the fact.
Where models break and what to do about it
The most common failure mode: the champion builds the model in isolation and then presents it to finance cold. Finance was not part of the discovery. They have no context for the inputs. They challenge the $65/hour loaded rate because HR uses $52 for budgeting. The whole conversation unravels.
Fix this by involving a finance stakeholder in the assumption-setting stage, not the presentation stage. Even a 20-minute working session to align on the loaded rate and the incident definition eliminates 80% of pushback.
Second common failure: mixing one-time costs with recurring costs without labeling them. Implementation costs and ongoing license fees have different payback math. Keep them separate in your model and label each row clearly.
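The separation matters because only the one-time cost is "paid back"; recurring license fees reduce the net gain every month instead. A sketch of that split, with illustrative numbers of my own:

```python
# Sketch of keeping one-time and recurring costs separate in payback math.
# All figures are illustrative.

def payback_months(one_time_cost: float,
                   annual_recurring_cost: float,
                   annual_gross_gain: float) -> float:
    """Months to recover the one-time cost out of net monthly gain."""
    monthly_net_gain = (annual_gross_gain - annual_recurring_cost) / 12
    if monthly_net_gain <= 0:
        return float("inf")  # recurring fees consume the gain; never pays back
    return one_time_cost / monthly_net_gain

# Example: $20k implementation, $30k/yr license, $70,400/yr expected gain.
print(f"{payback_months(20_000, 30_000, 70_400):.1f} months")  # 5.9 months
```

Blending the two into one "total cost" row hides the case where the recurring fee alone exceeds the gain, which is exactly the case finance is looking for.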
Running the model live in calls
The fastest way to move a stuck deal is to pull up the model on a shared screen and adjust one assumption at a time with the stakeholder in the room. Ask: "If we used your Q1 incident count instead of this estimate, what does that do to the payback period?"
That question turns a presentation into a collaboration. The stakeholder starts to own the numbers. When they own the numbers, they can defend them internally.
Use the B2B ROI Calculator to run scenarios live without managing a spreadsheet. It handles the conservative/expected/aggressive split automatically, so you can adjust inputs and show updated payback in real time without a formula error derailing the call.
The do-nothing cost conversation is not a sales tactic. It is an honest quantification of what inaction actually costs. When you lead with that number — and it is defensible — procurement has nothing to push back on except the improvement assumptions. That is a much easier conversation than defending the price of your product from scratch.