Validation Capex-per-Geography: How to Pressure-Test Per-Location Investment Before You Build
Most founders test demand. Far fewer test whether the capex per new geographic unit their model assumes is achievable, and recoverable in the time the model assumes. Demand answers "do people want it." Capex-per-geography answers "can you stand up the next city, store, depot, route, or kitchen at the marginal cost your plan budgets for, and earn it back inside the break-even window your equity story depends on." The first is a survey question. The second is a research question with public-data inputs, and it quietly kills delivery-kitchen, dark-store, automated-warehouse, and franchise-style rollouts long after the seed round closes.
This post extends the broader frame on validation unit economics, where capex-per-geography sits as Signal #3 of three. It sits alongside the sister pieces on density math and repeat rate: same cluster, same teaching shape, different signal. Here we go deeper on capex-per-geography specifically: how to pressure-test the premise that your assumed marginal cost per new unit, and your assumed payback window per unit, will hold against named comps, before you write a line of code or raise a dollar.
What "validation capex-per-geography" means
Validation capex-per-geography is not the same exercise as building a CapEx spreadsheet. A CapEx model takes your assumptions as given — $400K to fit out the next kitchen, 9 months to break even — and projects forward. Validation capex-per-geography does the opposite: it takes your assumed marginal cost per new unit and stress-tests it against public comparables — incumbent S-1s, franchise disclosure documents, shut-down post-mortems — to ask whether any operator has ever stood up a comparable unit at that price, in a comparable market, inside the window your model assumes.
It is research, not forecasting. The deliverable is a build/don't-build read on whether your capex-per-unit floor is reachable and your payback window is precedented, supported by named comps.
Three public-data sources for capex-per-geography signals
You do not need proprietary data to do this work. Three sources cover most of the surface area.
S-1 filings and IPO prospectuses. Sweetgreen's S-1 disclosed average new-restaurant build costs and target store-level payback. Cava's S-1 broke out cash investment per new restaurant and the contribution-margin curve a new unit needs to clear. Chipotle's earlier filings and Domino's franchise disclosures both name a typical-new-unit investment band. If your concept assumes $600K to open a unit in a category where the closest comp set discloses $1.4–1.6M, your model has a capex gap flaggable from public data before you build.
Franchise Disclosure Documents. FDDs are the franchise world's analog to an S-1: the FTC's Franchise Rule requires them to disclose estimated initial-investment ranges, build-out costs, and royalty structure. They are filed publicly with state regulators in registration states and indexed by services that surface them for free. An FDD tells you, in named ranges, what it actually costs a third-party operator to open the next unit of an established brand in the same category. If your model sits below the lower bound of every comparable FDD, that gap is the read.
Shut-down post-mortems on capex-heavy failures. The most underused source. Munchery commissary kitchens, Webvan automated warehouses, Beepi inspection lots, Take Eat Easy depots — each had a usable autopsy that named the capex-per-geography assumption that broke. Free lessons paid for by other founders. Read three before you build.
The Munchery case, compressed
Munchery raised roughly $125M and stood up commissary kitchens in San Francisco, Seattle, Los Angeles, and New York. The model required roughly $1.5–2M of capex per city kitchen and a 6–9 month break-even window per unit. The kitchen had to be production-grade, food-safety-compliant, and fitted with cold-chain and routing infrastructure — none of which scaled down at the second, third, and fourth city. Capex held flat or rose with each new geography (real estate in NYC and LA was not SF), while order density structurally thinned outside the original SF zip codes. Same per-unit capex, weaker per-unit revenue, longer-than-modeled payback, 2019 wind-down. The longer worked example lives in the Munchery autopsy.
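The payback arithmetic that broke is simple enough to check before the second kitchen. A minimal sketch: the $1.5–2M capex band and 6–9 month window are from the public record above, while the monthly contribution figures are illustrative assumptions, not Munchery disclosures.

```python
def payback_months(capex: float, monthly_contribution: float) -> float:
    """Months of unit-level contribution needed to recover upfront capex."""
    return capex / monthly_contribution

# From the public record: $1.5-2M per city kitchen, 6-9 month break-even target.
capex_low, capex_high = 1_500_000, 2_000_000

# Monthly contribution each kitchen must throw off to hit the window:
required_low = capex_low / 9    # friendliest corner of the model
required_high = capex_high / 6  # strictest corner of the model
print(f"Required contribution: ${required_low:,.0f}-${required_high:,.0f}/month")

# Illustrative: a thinner second-city market producing $120K/month of
# contribution stretches the same $1.5M kitchen to:
print(f"Payback at $120K/month: {payback_months(capex_low, 120_000):.1f} months")
```

Same per-unit capex, weaker per-unit contribution, longer payback: the wind-down, in one division.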
The point is not that Munchery was a bad idea. The capex-per-geography floor and the per-city payback window were both knowable from comparable filings — before the second-, third-, and fourth-city kitchens were built.
Two more that tell the same story
Sweetgreen. Sweetgreen's S-1 disclosed average new-restaurant build costs in the roughly $1.5M range and a target store-level payback the equity story depended on. A founder modeling a 12-location urban rollout at $700K per build, in the same category, has a capex gap visible in a single public filing — and a payback assumption that has to be checked against the comp curve, not against a deck.
Webvan. Webvan stood up automated grocery warehouses at roughly $30M+ each in the late 1990s, anchored to a household-orders-per-week density the suburban markets never produced. The capex was real and irreversible; the density to amortize it was not. The capex-density mismatch was knowable from grocery-incumbent disclosures and census-tract data, a capex-per-geography read on the public record now for any founder considering an automated-warehouse rollout.
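The mismatch can be expressed as a density requirement: how many orders per week a single warehouse must capture just to amortize its own build cost. A sketch under stated assumptions — only the roughly $30M build cost is from the public record; the five-year straight-line horizon and per-order contribution figure are illustrative.

```python
def orders_per_week_to_amortize(capex: float, years: float,
                                contribution_per_order: float) -> float:
    """Weekly orders needed for per-order contribution to cover
    straight-line capex recovery over the given horizon."""
    weekly_capex_recovery = capex / (years * 52)
    return weekly_capex_recovery / contribution_per_order

# ~$30M per warehouse is public record; 5 years and $10/order are assumptions.
needed = orders_per_week_to_amortize(30_000_000, years=5,
                                     contribution_per_order=10)
print(f"{needed:,.0f} orders/week per warehouse, before any operating cost")
```

Under those assumptions the building alone demands on the order of eleven thousand weekly orders per metro, a density bar you can test against census-tract and grocery-incumbent data before pouring concrete.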
Common founder mistakes
Two patterns show up repeatedly when capex-per-geography assumptions go unexamined.
The first is assuming linear capex scaling. Founders model unit-1 capex and copy-paste it across units 2, 3, and 4. In practice, real estate in market 2 is not market 1, zoning costs in market 3 can be 2–3x market 1, and labor varies by metro. The right move is to name a capex band — low, mid, high — pulled from comparable FDDs and S-1s, and to fund only if even the high band clears the unit-economic floor.
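The band logic above can be made mechanical. A minimal sketch, with a hypothetical band standing in for the numbers you would pull from comparable FDDs and S-1s:

```python
from dataclasses import dataclass

@dataclass
class CapexBand:
    low: float   # friendliest comparable market
    mid: float   # comp-set median
    high: float  # most expensive comparable market (zoning, metro labor, leases)

def clears_floor(band: CapexBand, monthly_contribution: float,
                 max_payback_months: float) -> bool:
    """Fund only if even the HIGH end of the band pays back in the window."""
    return band.high / monthly_contribution <= max_payback_months

# Hypothetical band; replace with disclosed figures for your category.
band = CapexBand(low=900_000, mid=1_300_000, high=1_700_000)
print(clears_floor(band, monthly_contribution=150_000, max_payback_months=12))
```

The point of the shape: unit 1's number never appears alone. The decision keys off the high end of the comp band, not the copy-pasted low end.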
The second is treating real estate as a commodity input. Lease cost, build-out cost, and zoning friction are location-specific, not category-specific. A delivery-kitchen concept that pencils in a Phoenix industrial park does not pencil in a Brooklyn light-industrial zone, and the gap is calculable from public lease-comp data before you sign the first lease.
How DimeADozen surfaces this
A DimeADozen.AI research-backed validation report does the capex-per-geography work in two sections. The Operational and Scaling section pulls comp-set capex-per-unit disclosures and FDD investment ranges for the named category and geographies. The Risk Analysis section flags a capex-density mismatch when the assumed payback window sits below the comp-set median, and names the analog. The output is a structured, downloadable decision document a founder can hand to a co-founder or an investor and use to pressure-test the build/don't-build read together, not a chat session you re-create from scratch every time.
When to run this
Run validation capex-per-geography twice. Once before you write a line of code or raise a dollar — to confirm the per-unit capex and payback window your category supports can clear the rollout your model requires. And again before each new-market expansion, because capex does not generalize: the SF number is not the NYC number.
A DimeADozen.AI report is a different shape from a chatbot subscription: $59 once. No subscription. Credits don't expire. 1 credit = 1 full validation report. A structured, downloadable decision document, not a chat session. If your model depends on per-city capex, per-store build-out, per-warehouse investment, or per-route hardware, the capex-per-geography math belongs in the report you read before the wire, not in the lessons-learned deck after.
For the canonical frame on the question every founder gets wrong about validation, start with the JTBD anchor.