Most founders test demand. Far fewer pressure-test the customer-acquisition math that says demand will stay paid for at a price the company can survive on.
That gap kills more startups than missing demand does. A working demand signal paired with a broken CAC-payback model is a slower failure mode than no demand at all, and the math is harder to read because it surfaces months after launch, when the ad spend has already trained the org to believe the channel works.
The order matters. Demand validation tells you whether anyone wants the thing. CAC-payback validation tells you whether the company that delivers it can keep buying customers without running out of cash. Both questions matter. The second one is the load-bearing question for any plan that requires paid acquisition or compounding-channel scaling — which is most plans.
The CAC-payback gap most founders skip
In a typical pre-build deck, the CAC-payback math gets a single line: "projected CAC of $25, LTV of $200, 8x LTV/CAC ratio." That sentence is the assumption — not the model.
The assumption is built on three sub-assumptions, each of which is independently checkable from public data, and each of which the deck rarely interrogates:
The CAC figure reflects your category's actual blended cost, not an aspiration. Most early decks pull the number from a friend's post or a SaaS-benchmark blog. Comparable public companies in your category disclose blended-CAC ranges in S-1s and earnings calls, and those numbers are usually 2-4x the figure decks assume.
The LTV figure reflects your category's actual retention curve, not the best case. The same public-data check applies. If your category's leaders churn 50% of customers within six months, your "5-year LTV" assumption is structurally optimistic by an order of magnitude.
The CAC-payback window is short enough to fund organic growth. Twelve-month payback works when you have venture capital. Twenty-four-month payback collapses bootstrap math. The delta is often the difference between an idea that funds itself off cash flow and an idea that requires raising every twelve months indefinitely.
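The payback-window arithmetic behind those three checks is small enough to sanity-check in a few lines. A minimal sketch; every number here is a hypothetical stand-in to be swapped for your category's disclosed ranges:

```python
def payback_months(cac: float, monthly_revenue: float, gross_margin: float) -> float:
    """Months of gross profit needed to recover the cost of acquiring one customer."""
    monthly_contribution = monthly_revenue * gross_margin
    return cac / monthly_contribution

# Hypothetical inputs: a deck's aspirational CAC vs. a figure closer to
# what comparable public companies disclose (often 2-4x higher).
deck_assumption = payback_months(cac=25, monthly_revenue=20, gross_margin=0.7)   # ~1.8 months
public_comps    = payback_months(cac=75, monthly_revenue=20, gross_margin=0.7)   # ~5.4 months
```

The same function makes the bootstrap-vs-venture delta concrete: holding contribution fixed, doubling CAC doubles the payback window, and a window past ~12 months means every new cohort is financed by outside capital rather than cash flow.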
Each one is checkable from public sources. Each one is independently flaggable. Each one is the kind of finding an honest analyst friend would surface in an afternoon — if you knew where to look.
Three signals from public data
You don't need a research budget to do this work. Three sources cover most of the surface area for any consumer or SaaS category.
S-1 filings and IPO prospectuses. Casper's 2019 S-1 disclosed a blended CAC of $310 — for a $1,000 mattress with category retention curves that didn't carry. Wayfair's earlier earnings calls broke out the contribution from repeat-customer orders; the gap between first-purchase CAC and 12-month LTV was the load-bearing question on whether the model worked. Blue Apron's S-1 named a CAC of $147 against a six-month retention floor that put LTV in the ~$400 range — a payback window incompatible with the marketing spend the IPO deck implied.
If your category has a public comparable, the working CAC-payback range is publicly readable. If it doesn't, that absence is itself a signal — a category with no public comparable usually has no public-market validation that the unit economics work.
Earnings calls and investor decks. Comparable-company quarterly reports publish CAC ranges, channel-mix economics, and LTV-to-CAC ratio bands. HelloFresh's investor decks segment CAC by acquisition-channel; the gap between paid-search CAC and referral CAC is often 3-5x. If your assumption doesn't model that mix realistically, your blended figure is fiction.
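A blended CAC is just a mix-weighted average of per-channel CACs, which is why an unrealistic channel mix quietly inflates the model. A sketch with hypothetical shares and per-channel costs; the 4x paid-vs-referral spread below mirrors the 3-5x gap noted above:

```python
def blended_cac(channels: dict) -> float:
    """Blended CAC = sum over channels of (share of new customers x per-channel CAC).

    channels maps channel name -> (share, cac); shares must sum to 1.
    """
    total_share = sum(share for share, _ in channels.values())
    assert abs(total_share - 1.0) < 1e-9, "channel shares must sum to 1"
    return sum(share * cac for share, cac in channels.values())

# Hypothetical mix: most volume from paid search, a minority from referral.
mix = {"paid_search": (0.6, 120.0), "referral": (0.4, 30.0)}
blended_cac(mix)  # 84.0 -- far above the referral-only figure a deck might quote
```

The design point: a deck that quotes the cheapest channel's CAC as "the" CAC is implicitly assuming that channel supplies 100% of volume at scale, which the disclosed channel mixes of public comparables rarely support.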
Industry benchmarks from VC firms. a16z, Bessemer, and SaaStr publish category-class CAC-payback bands annually. If your assumption sits below the bottom decile of the band, the most likely explanation is that the assumption is wrong, not that you've discovered a structural advantage.
The retroactive precedent
Blue Apron raised $278M, IPO'd at a $2B valuation, and lost 90%+ of that valuation within four years. The CAC-payback math was readable from the S-1. The category's six-month retention floor sat around 50%, the blended CAC was $147, and the AOV-and-frequency math required multi-year retention to clear payback. The company never reached a steady-state where new-customer CAC was funded by repeat-customer cash flow.
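The shape of that failure is checkable from the two public numbers above — the $147 blended CAC and the ~50% six-month retention floor — plus one assumed input, the monthly contribution per active customer. A sketch under a simple geometric-retention assumption (the $20/month contribution is hypothetical, chosen only to illustrate the mechanics):

```python
def months_to_payback(cac, monthly_contribution, monthly_retention, horizon=60):
    """First month in which a cohort-average customer's cumulative expected
    contribution covers CAC, assuming geometric retention. None if it never does."""
    cumulative, survival = 0.0, 1.0
    for month in range(1, horizon + 1):
        cumulative += survival * monthly_contribution
        if cumulative >= cac:
            return month
        survival *= monthly_retention
    return None

# Monthly retention implied by a 50% six-month retention floor: 0.5^(1/6) ~= 0.89
r = 0.5 ** (1 / 6)
months_to_payback(cac=147, monthly_contribution=20, monthly_retention=r)  # 15 months

# At this decay the cohort's lifetime contribution caps near
# monthly_contribution / (1 - r), i.e. ~$183 against a $147 CAC.
```

A 15-month payback against a lifetime contribution only ~1.25x CAC leaves no room for overhead, let alone the marketing spend needed to keep the top of the funnel full — which is the steady-state the company never reached.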
Every signal was public from the S-1 onward. The post-mortems were written contemporaneously. The lesson didn't stick — Munchery raised $125M into a similar AOV-vs-frequency math gap and shut down in early 2019.
The full retroactive on Munchery's unit-economics gap → (companion autopsy on a structurally adjacent failure)
The pattern across both: the math was readable from public data. Pre-buildable. Pre-fundable. Pre-everything-they-spent-the-cash-on.
How DimeADozen surfaces this
A research-backed validation report does the CAC-payback work in two sections of every report. The Financial Model section pulls comparable-company blended-CAC disclosures and surfaces the working band for your category. The Risk Analysis section names where your assumption sits relative to the band, and which sub-assumptions would all have to hit best-in-category simultaneously for the model to work.
If three sub-assumptions all need to hit best-in-category at the same time, that's the finding.
The full frame on the broader unit-economics half:
Validation Unit Economics: How to Pressure-Test the Math Before You Build →
How CAC-payback compounds with the other three signals
CAC-payback isn't a standalone signal. It compounds with three sibling signals:
- Comp-set retention floor — if retention sits below category leaders', payback windows extend and the LTV assumption breaks. (Validation Repeat Rate →)
- Density math vs zip-code reality — if the density required for unit-economic break-even is above the category record, paid acquisition has to overpay for early customers and CAC stays elevated. (Validation Density Math →)
- Capex per geography — if per-unit capex outpaces the payback window per market, paid acquisition can't compound; each new market demands fresh capital. (Validation Capex per Geography →)
A clean CAC-payback model checks all four signals against comparable-company disclosures. If any of the four breaks, the model breaks.
A structured downloadable decision document
DimeADozen.AI was built for this specific job: a research-backed validation report that gives founders a build/don't-build read on whether their idea has legs — before they write a line of code or raise a dollar.
Not a chatbot to argue with. Not a course to work through. A structured downloadable decision document you take into a Saturday morning with coffee, and at the end of it you have a sharper sense of whether the math can work in your category, full stop.
If you have an idea where CAC-payback is a load-bearing assumption — most ideas — pressure-test the premise from public data before the market does it for you.
$59 once. No subscription. Credits don't expire. 1 credit = 1 full validation report.
Pressure-test the idea before the market does → dimeadozen.ai