Companion to the paper

Product-Customer Coupling: Why Product-Market Fit Happens When It Happens

One mechanism — the customer’s regulator — explains switching cost, timing, iteration, and pivot survival. Four phenomena, one structure.

The customer is already running a regulator that keeps the business alive. Your product proposes to alter it. PMF selects on whether the alteration is worth its cost.

Tamas Babel · BME InnoLab · Wigner Research Centre for Physics
The mapping between Ashby’s regulator and PMF emerged from cofounding two deep-tech ventures and running into requisite variety the hard way. Ex-BCG. PhD, MBA.

Read on SSRN · DOI

The customer is already a regulator

Any customer still in business is already regulating itself. Regulator here is Ashby’s sense of the word (any mechanism that holds essential variables within bounds), not a government authority. The customer has essential variables that must stay within bounds: revenue, retention, compliance, uptime, whatever the business counts as survival. And it faces a landscape of disturbances that would push those variables outside bounds if nothing absorbed them. Pipeline variance. Regulatory change. Competitive moves. Technology shifts. The ordinary weather of its market.

Picture this as a coverage matrix. Each row is a kind of disturbance the customer must absorb. Each column is one of the regulators absorbing it. Each cell asks whether a given regulator covers a given disturbance. (The same coverage matrix illustrates every section that follows.)
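
A minimal sketch of that matrix as a data structure, assuming set-valued columns; every disturbance and regulator name below is illustrative, not from the paper:

```python
# Hypothetical names throughout; rows and regulators are illustrative,
# not from the paper.

disturbances = ["pipeline variance", "regulatory change",
                "competitive moves", "technology shifts"]

# Each column is a regulator; membership in its set marks the cells
# it covers.
coverage = {
    "crm + sales ops":      {"pipeline variance"},
    "compliance team":      {"regulatory change"},
    "institutional memory": {"competitive moves"},
    # nothing currently absorbs "technology shifts"
}

def uncovered(disturbances, coverage):
    """Rows no regulator covers: gaps in the incumbent system."""
    covered = set().union(*coverage.values())
    return [d for d in disturbances if d not in covered]

print(uncovered(disturbances, coverage))  # ['technology shifts']
```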

Against that landscape, the customer runs an incumbent regulator: the working combination of software, processes, staff roles, heuristics and institutional memory it uses to stay alive. That regulator is imperfect. It’s probably strained at the edges. But it exists, it works well enough for the business to still be here, and it has weight.

Your product doesn’t enter a vacancy. It enters alongside that machinery, displacing some of it, augmenting it where it has gaps, or both. What the customer actually couples with isn’t the artifact you ship. It’s the artifact plus the vendor apparatus around it: sales, onboarding, support, SLAs, parallel-run infrastructure. All of it is part of what has to fit into the customer’s running system.

The cost of that rewiring is what kills most deals.

Switching cost isn’t one thing

The adoption literature treats switching cost as a single number (“lock-in,” “risk aversion,” “inertia”). The conflation makes resistance look arbitrary: two similar customers display very different resistance, and nobody can say why.

It’s not arbitrary. Two components, scaling on different things.

switching cost = destruction cost + exposure cost

Destruction cost is the variety lost when your product displaces parts of the customer’s incumbent toolkit — staff retrained, workflows rewritten, decade-accumulated configuration dismantled. It scales with how much of the incumbent you displace.

Exposure cost is the harm incurred while essential variables sit transiently unregulated during the transition — old tools half gone, new product not fully ramped. It scales with the stakes of what’s exposed and how long the window stays open.
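
The decomposition lends itself to a hedged sketch. The additive structure is the paper’s; the linear scalings, parameter names, and numbers below are invented for illustration:

```python
# The additive structure is the paper's; the linear scalings and the
# numbers are invented for illustration.

def destruction_cost(displaced_fraction: float, incumbent_value: float) -> float:
    """Scales with how much of the incumbent toolkit is displaced."""
    return displaced_fraction * incumbent_value

def exposure_cost(stakes: float, window_days: int, daily_harm_rate: float) -> float:
    """Scales with the stakes of what sits unregulated and with how
    long the transition window stays open."""
    return stakes * window_days * daily_harm_rate

def switching_cost(displaced_fraction, incumbent_value,
                   stakes, window_days, daily_harm_rate):
    return (destruction_cost(displaced_fraction, incumbent_value)
            + exposure_cost(stakes, window_days, daily_harm_rate))

# Same displacement, shorter cutover window: exposure falls, destruction doesn't.
print(switching_cost(0.5, 100_000, 0.8, 90, 50))  # 53600.0
print(switching_cost(0.5, 100_000, 0.8, 30, 50))  # 51200.0
```

Holding displacement fixed and shortening the window moves only the exposure term, which is exactly the separation the budgeting argument below relies on.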

The two run on different clocks. The customer’s essential variables are buffered: cash, redundancy, and routine absorb shocks before anything reaches the viability boundary. The vendor’s are not. The customer can defer; the vendor’s runway keeps burning while it does. The venture runs out of cash while the customer is still deciding. The paper’s name for this failure mode is death by pilot, not death by rejection.
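
A toy rendering of the two clocks, with invented numbers; the point is the inequality, not the magnitudes:

```python
# Invented numbers; the point is the inequality, not the magnitudes.

customer_buffer = 120.0  # slack in the essential variables (arbitrary units)
monthly_exposure = 3.0   # harm leaking through during the pilot, per month
vendor_runway = 9        # months of vendor cash without the deal

months_until_customer_pain = customer_buffer / monthly_exposure  # 40 months
if vendor_runway < months_until_customer_pain:
    print("death by pilot: the vendor's clock expires first")
```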

Founders consistently price destruction — feature parity, data migration, integration. They under-price exposure. Pilots rarely die on feature gaps; they die when exposure outruns customer tolerance before the product ramps.

The operational consequence is to budget the two separately. Destruction is a roadmap problem — features, migration, integration. Exposure is a pilot-design problem — cutover length, parallel-run scope, which essential variables stay covered during the transition. Teams that conflate them ship features into a pilot that needed a structural redesign.

Falsifier: holding destruction constant, higher-stakes essential variables should produce stronger switching resistance. If they don’t, the decomposition fails.

Doors open and close

Practitioners know timing matters. Two identical products, months apart, can land very differently. The folklore answer — right place, right time, read the tea leaves — gestures at the phenomenon without naming the mechanism.

The mechanism is structural. A door opens when the customer’s disturbance landscape shifts and the incumbent can’t cover the newly load-bearing rows at viable cost. The opening is the gap between what the incumbent still handles and what the world now demands. If your product closes that gap, you have structural entry. Absolute quality on rows the incumbent already covers won’t carry you through.

Enterprise IT through the cloud transition is the cleanest case. On-premises fit a landscape of stable workloads and batch jobs. As spiky demand, geo-distributed access, and elastic analytics became load-bearing, the incumbent ran out of viable coverage. Public cloud won by closing the new rows, not by beating on-premises on what on-premises already did well.
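
In coverage-matrix terms, the rows of that story can be checked mechanically. The row names follow the example in the text; which rows count as load-bearing, and all three sets, are illustrative assumptions:

```python
# Rows follow the cloud example; the load-bearing set and both
# coverage sets are illustrative assumptions.

incumbent_covers = {"stable workloads", "batch jobs"}

# The disturbance landscape shifts: new rows become load-bearing.
load_bearing = {"stable workloads", "batch jobs", "spiky demand",
                "geo-distributed access", "elastic analytics"}

entrant_covers = {"spiky demand", "geo-distributed access",
                  "elastic analytics"}

opening = load_bearing - incumbent_covers    # the door: newly uncovered rows
structural_entry = opening <= entrant_covers  # entrant closes the gap

print(sorted(opening))
print(structural_entry)  # True: entry is structural, not quality-led
```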

The diagnostic is simple. Look where incumbents are straining at the edges of their coverage, not where they’re executing poorly. Execution noise is competitive weather. Strain at the edges is structural opening.

The framework doesn’t forecast which shifts will happen — that needs outside knowledge of technology, regulation, and markets. It gives you the condition an entrant has to satisfy once the shift is visible. Falsifier: once absolute quality is controlled, entrant success across cohorts should track whether the new product closes the cells the shift opened. If it doesn’t, the claim fails.

The operational consequence is concentration over speed. In an opening window, build narrowly into the row the incumbent now under-serves; don’t spread effort across rows the incumbent still covers. Speed compounds only after you’ve identified the right cell.

Doors are read, not waited for.

Iteration runs on a map

Iteration is the method; convergence is the question. Why does build-measure-learn tighten the loop in some cases and run forever in others?

The mechanism is model-based search. Teams that converge are refining an internal map of the customer’s disturbance landscape — which rows the incumbent regulator fails on. Each iteration is a probe with a hypothesis attached: this feature closes this row. The map updates; the next probe tightens. Iteration without a map is undirected probing, and undirected probing burns the exposure budget before it covers the space.
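
One probe loop makes the contrast concrete. Everything below is invented (rows, beliefs, ground truth); the map is just a probability per row that the incumbent fails there, and each probe spends one unit of exposure budget:

```python
# Invented rows, beliefs, and ground truth. The map is a probability
# per row that the incumbent fails there; each probe spends one unit
# of the exposure budget.

rows = ["pipeline variance", "regulatory change",
        "competitive moves", "technology shifts"]
truly_failing = {"technology shifts"}     # ground truth, unknown to the team

# The team's map, calibrated by prior probes and customer conversations.
belief = {"pipeline variance": 0.1, "regulatory change": 0.1,
          "competitive moves": 0.2, "technology shifts": 0.6}

exposure_budget = 2                       # probes the customer will tolerate

for _ in range(exposure_budget):
    probe = max(belief, key=belief.get)   # directed: probe where the map points
    if probe in truly_failing:            # build, measure
        print(f"converged: {probe} is the row the incumbent fails on")
        break
    belief[probe] = 0.02                  # learn: update the map, tighten the next probe
else:
    print("exposure budget exhausted before convergence")
```

With this map the first probe lands. A flat map, which is what undirected probing amounts to, would find the failing row within this two-probe budget only half the time.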

This is also why some pivots land and others don’t. A technology pivot — same customers, new architecture — preserves the team’s map. A segment pivot — same product, new customers — discards it, because the prior map was calibrated to a different landscape.

Slack is the canonical instance. Tiny Speck built real-time messaging to coordinate Glitch, a multiplayer game that failed. Artifact and segment both changed at the pivot; what carried across was a hard-won map of one narrow disturbance landscape — the cost of threading shared work through email — which generalised to engineering teams and then wider knowledge work.

What survives a pivot, or doesn’t, is the coverage-structure map: the team’s read on which rows the incumbent is failing in the first place. Falsifier: if model preservation, scored before the pivot, has no bearing on post-pivot convergence time, or bears on it in the opposite direction, the claim fails.

The paper

PMF, on this account, is the segment-level reproducibility of a viable product-customer coupling, sustained by enough revenue density to keep the vendor alive.

Four claims at the customer-fit level. The customer is an incumbent regulator with institutional mass. Switching cost decomposes into destruction and exposure. Entry windows open as structural deficits in the incumbent’s coverage. Convergence is model-based search under an exposure budget. With the urgency asymmetry between buffered customer essential variables and fragile vendor runway, the four yield five falsifiable predictions.

The same mechanics constrain design. The vendor apparatus is part of the product the customer actually couples with. Pricing absorbs destruction and exposure where the vendor can do so more cheaply than the customer. Onboarding, customer success, and iteration operations are structural components of regulation — not generic overhead.

PCC sits in the cybernetic-management tradition: Ashby’s regulator, Beer’s Viable System Model. VSM characterises intra-organisational viability; PCC characterises the inter-party equilibrium two viable parties have to reach for adoption to occur. Complementary, not competing.

The full argument — the apparatus, the predictions, the design moves — is in the paper.