After burying the "lines × day-rate with AI discount" method for my 91,000-line ERP, I found myself facing a harder question than the first one. What do I replace it with? I spent two days testing every single-dimension model available. None held. Here is the graveyard of my attempts, and what eventually worked: four dimensions, summed, not weighted, with a SQL deduplication rule that took me a week to find.
If you have 30 seconds. No single metric holds for valuing an internal ERP: ARR, SaaS replacement cost alone, usage value alone — all of them flatten reality. This article proposes a method of four summed dimensions (SaaS equivalent, usage, data, strategic), with the TypeScript consolidation function, the SQL deduplication query between dimensions, and the weekly cron that produces a defensible snapshot. Useful if you have to justify the value of software without a market price.
The graveyard of single-dimension models
First attempt, ARR. The preferred method of SaaS pitches, clean, universally understood. I opened a sheet, typed three formulas, and understood within fifteen minutes that I had no recurring revenue to put in it. Rembrandt is an internal tool, not a product I sell. ARR dead in fifteen minutes.
Second attempt, SaaS replacement cost. The idea: "what L'Atelier Palissy would pay in subscriptions if Rembrandt didn't exist." I listed Odoo, Brevo, Monday, Digiforma, Edusign. Monthly total around 425 euros, or roughly a hundred thousand over five years, discounted. Defensible in front of an accountant. But wrong in the other direction: no market SaaS covers the 10,000 lines of logic for four catch-up periods per year across six sites with Qualiopi rules. The counterfactual undervalues precisely what makes the tool specific. Model dead one Sunday night.
Third attempt, usage value alone. "How many hours Rembrandt saves Françoise, Hélène, Catherine, Claire." The measure most aligned with real economics, by far. But inapplicable without friction: timing someone at the office is socially costly, and quarterly self-reporting is biased by the mood of the day. I started drafting a Google form. I closed it. Model dead before birth.
The diagnosis fits in one sentence: Rembrandt is a composite asset that mixes code, data, avoided labor, and strategic optionality, so any single metric flattens at least three of those dimensions. All four have to be recognized.
The four dimensions
| Dimension | What it measures | Method | Fragility point |
|---|---|---|---|
| 1. SaaS replacement cost | The market counterfactual | Σ equivalent subscriptions × 12 × NPV 5 years at 8% | Undervalues the non-substitutable |
| 2. Usage value | Human productivity freed | Hours/quarter × loaded hourly cost × NPV 5 years | Not instrumentable without friction |
| 3. Data patrimony | The non-regeneratable intangible asset | Volumes × market unit price + ADR capital | Market unit price unstable |
| 4. Strategic value | Optionality and sovereignty | Scores 1–10 on 4 axes (velocity, lock-in, AI…) | Opinion-driven, assumed |
Each measures something real that the other three don't see. Dimension 1 sees the market but not the singular. Dimension 2 sees human time but not data. Dimension 3 sees patrimony but not usage. Dimension 4 sees possible futures but nothing of what exists today. None is reducible to another. That's precisely what forces having four.
The aggregation rule: sum rather than max
This is the central methodological point, and it's the one that gets debated the most.
Why not the maximum. Taking the max ("the dimension that values the most wins") is a rhetorical move, not a method. It hides three of the four dimensions and produces a number that doesn't say what it measures. You fall back into single-dimension through the back door.
Why not the average. The average dissolves the difference in nature between dimensions. To say that Rembrandt's valuation is on average SaaS, human time, data and strategic is to claim they are commensurable when they precisely are not.
Why no weighted coefficients. Weighting introduces an additional arbitrariness that is no more defensible than the initial choice of dimensions. You solve one problem by creating another, less visible one.
That leaves the sum. It is not a natural aggregate, it is an explicit choice. It asserts that each dimension exists independently, that the system is worth their cumulation, and that every displayed euro is traceable to a source and a method.
In code, the heart of the calculation fits in twelve lines:
```ts
// lib/valorisation/compute.ts
type DimId = 'saas' | 'usage' | 'donnees' | 'strategique' // the four dimensions
type Dim = { id: DimId; low: number | null; high: number | null }

export function consolidate(dims: Dim[]) {
  const present = dims.filter(d => d.low !== null && d.high !== null)
  return {
    value_low: present.reduce((a, d) => a + (d.low ?? 0), 0),
    value_high: present.reduce((a, d) => a + (d.high ?? 0), 0),
    dims_used: present.map(d => d.id),
  }
}
```
Three decisions are written there. We sum without weighting. A dimension not yet instrumented doesn't break the calculation — it steps aside and that's noted. The caller always knows which dimensions participated, which makes every snapshot auditable three months later.
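To see the "steps aside" behavior concretely, here is the function restated with an illustrative call — the euro amounts are invented, and `DimId` is narrowed to the four dimension names used elsewhere in the article:

```typescript
type DimId = 'saas' | 'usage' | 'donnees' | 'strategique'
type Dim = { id: DimId; low: number | null; high: number | null }

function consolidate(dims: Dim[]) {
  const present = dims.filter(d => d.low !== null && d.high !== null)
  return {
    value_low: present.reduce((a, d) => a + (d.low ?? 0), 0),
    value_high: present.reduce((a, d) => a + (d.high ?? 0), 0),
    dims_used: present.map(d => d.id),
  }
}

// Dimension 2 not yet instrumented: it steps aside instead of breaking the sum.
const snapshot = consolidate([
  { id: 'saas', low: 80_000, high: 120_000 },
  { id: 'usage', low: null, high: null },
  { id: 'donnees', low: 45_000, high: 75_000 },
  { id: 'strategique', low: 20_000, high: 40_000 },
])
// snapshot.value_low === 145000, snapshot.value_high === 235000
// snapshot.dims_used === ['saas', 'donnees', 'strategique']
```

Three months later, `dims_used` is what tells you that the April figure was a three-dimension sum, not a four-dimension one.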
The over-addition trap
This is the most delicate technical passage of the model, and I didn't see it at first. A historicized contact contributes both to the data patrimony (dimension 3) and to the replacement cost of a CRM SaaS (dimension 1). Without correction, the same asset gets counted twice. It took me a week to understand that my intervals were inflating because I was making exactly the error that a naive sum allows.
The fix sits in a `scope_couvert_pct` column on the SaaS-equivalents table: each SaaS only counts for the share of the scope it would actually cover, and the share already captured by dimension 3 is subtracted. The dimension-1 query the cron runs is this:
```sql
-- dim 1: SaaS replacement cost, with dim 3 deduplication
SELECT
  SUM(
    cout_mensuel_eur * 12
    * (scope_couvert_pct / 100.0)
    * (1 - 0.30)  -- share already captured by dim 3 (CRM data)
    * 3.9927      -- 5-year annuity NPV at 8%
  ) AS saas_centre
FROM valorisation_saas_equivalents
WHERE actif = true;
```
The `1 - 0.30` didn't come out of thin air. It's the estimated share of CRM scope that would already be paid once as part of the data patrimony (3,000 contacts × market unit price). It is documented, dated, revisable. Nothing is magical. Everything is justifiable.
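For readers who prefer the logic outside SQL, here is an illustrative TypeScript mirror of the dimension-1 computation. The row shape, field names, and the example CRM are mine; only the formula comes from the query above:

```typescript
// Hypothetical row shape mirroring valorisation_saas_equivalents.
type SaasRow = {
  coutMensuelEur: number   // public monthly subscription price
  scopeCouvertPct: number  // share of Rembrandt's scope this SaaS would cover
  dim3OverlapPct: number   // share already counted in the data patrimony (dim 3)
}

const ANNUITY_5Y_8PCT = 3.9927 // 5-year annuity NPV factor at 8%

function saasCentre(rows: SaasRow[]): number {
  return rows.reduce(
    (total, r) =>
      total +
      r.coutMensuelEur * 12 *
        (r.scopeCouvertPct / 100) *
        (1 - r.dim3OverlapPct / 100) *
        ANNUITY_5Y_8PCT,
    0,
  )
}

// Example: a CRM at 49 €/month covering 80% of the scope, 30% already in dim 3:
// 49 × 12 × 0.8 × 0.7 × 3.9927 ≈ 1315 €
const example = saasCentre([
  { coutMensuelEur: 49, scopeCouvertPct: 80, dim3OverlapPct: 30 },
])
```

The point of making the overlap an explicit field is the same as in the SQL: the deduplication is mechanical and auditable, not eyeballed once and forgotten.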
The weekly rerun
A valuation model that lives in a spreadsheet won't hold a quarter. So I put the calculation in a weekly cron that runs at 4 AM every Monday:
```ts
// app/api/cron/compute-valorisation-donnees/route.ts (excerpt)
const centre =
  (contacts + leads + inscriptions) * PRIX_UNITAIRE_CONTACT_EUR +
  ANNEES_HISTORIQUE_FINANCE * VALEUR_ANNEE_HISTORIQUE_EUR +
  adrsCount * VALEUR_ADR_EUR

const donnees_low = Math.round(centre * 0.75)
const donnees_high = Math.round(centre * 1.25)

await admin.from('valorisation_snapshots').upsert({
  snapshot_date: today,
  donnees_low, donnees_high,
  saas_low, saas_high,
  usage_low, usage_high,
  strategique_low, strategique_high,
  value_low: sumLow,
  value_high: sumHigh,
}, { onConflict: 'snapshot_date' })
```
The cron counts contacts, leads and enrollments live on Supabase, recomputes dim 3, re-reads dims 1, 2 and 4 (less volatile), sums the four, and writes a dated snapshot. Each snapshot is a defensible point. Rembrandt's value on April 14th is a row in my database; the one on April 21st will be another row, with its own explanation. You can go back, compare, explain breaks.
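Comparing two dated points then becomes a trivial diff. A hypothetical sketch of the row shape and a week-over-week helper — the field names follow the upsert above, the values are invented:

```typescript
// Hypothetical shape of one valorisation_snapshots row (money columns only).
type Snapshot = {
  snapshot_date: string
  value_low: number
  value_high: number
}

// Explain a break between two weekly snapshots.
function weekDelta(prev: Snapshot, curr: Snapshot) {
  return {
    from: prev.snapshot_date,
    to: curr.snapshot_date,
    delta_low: curr.value_low - prev.value_low,
    delta_high: curr.value_high - prev.value_high,
  }
}

const d = weekDelta(
  { snapshot_date: '2025-04-14', value_low: 145_000, value_high: 235_000 },
  { snapshot_date: '2025-04-21', value_low: 148_500, value_high: 240_000 },
)
// d.delta_low === 3500, d.delta_high === 5000
```

A jump in `delta_low` points you at whichever dimension moved that week — new contacts counted, a revised SaaS price, a rescored strategic axis.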
Four limits to accept
Dimension 4 is opinion-driven. Scored 1–10 on four axes, with written justification. Subjective, and displayed as such. The fix: frame the assumptions explicitly, date each score, force a quarterly review.
Dimension 2 has an optimistic bias. Quarterly self-reporting by Françoise, Hélène, Catherine, Claire remains imperfect. It's corrected by trend, not point by point.
The frame imposes a maintenance cost. Hour entries, revision of reference SaaS prices, update of strategic scoring, re-reading of the scope_couvert_pct of each SaaS. One to two hours per quarter, no more. That's the price of defensibility.
Dimension 1 can drift with the SaaS market. A competitor dropping its price, an open-source tool gaining maturity, and the valorisation_saas_equivalents table silently goes stale. The guardrail is human: annual review, dated notes in each row.
The conversation with Étienne
I needed a contradicting voice, and I had the right one on hand. Étienne Brossard is my majority partner — sixty percent of Palissy. During the week, he works at an investment fund that buys software publishers. His daily professional thesis is precisely to put a price on software assets whose sellers would prefer he put a different one. When he looks at a valuation table, he doesn't read it as an accountant — he reads it as an acquirer.
We met one Saturday morning at a café halfway between our two offices. I pulled up the four-dimension table on my laptop. He looked at it for thirty seconds without touching his coffee, then asked his first question, the one he always asks. « Sur quoi c'est calé, la dimension 1 ? » — What is dimension 1 grounded on? I replied with the source: the valorisation_saas_equivalents table, eleven rows, public prices refreshed in March, documented scope_couvert_pct column. Nod. He moved to dimension 3. I cited the Monday 4 AM cron, the direct Supabase counts, the constant PRIX_UNITAIRE_CONTACT_EUR. Nod.
Dimension 2 stopped him longer. « Comment tu chiffres les heures ? » — How do you count the hours? I explained the quarterly self-reporting, four people, loaded hourly cost reconstituted from the average payslip. « Défendable si tu affiches la méthode. Pas si tu affiches le chiffre seul. Une auto-déclaration sans cadre, c'est ce qui fait sauter une transaction en data room. » — Defensible if you show the method. Not if you only show the number. An ungated self-report is what kills a deal in a data room. On dimension 4, he was quicker. « Ça c'est de l'opinion, pas du chiffre. Affiche-le comme tel, et ça tient. Si tu déguises ça en multiple, on te démonte en quinze minutes. » — That's opinion, not a figure. Show it as such, and it holds. If you dress it up as a multiple, you get torn apart in fifteen minutes.
Then he closed his notebook and said the line I wanted to hear, without my reaching for it. « Il faut bien chorégraphier pour éviter les frictions financières. » — You have to choreograph well to avoid financial frictions. « What you're showing me here, I've seen at fifteen companies this quarter. Three survived their due diligence. You have one advantage over the others: you wrote your model before anyone asked you for it. »
What mattered in that conversation wasn't that Étienne validated the table. It was that he could ask the same question four times — the question he asks every week to founders trying to sell him their company — and receive four answers that held. He wasn't evaluating a module. He was evaluating, under his breath, what he had bought when he came onto my cap table.
What counts isn't the correctness of the model
A valuation model is not true or false. It is defensible or it is not. The four dimensions are defensible because each can stand independently before an accountant, a buyer, or an auditor. The sum is defensible because it assumes its status as an explicit convention rather than hiding it behind a supposedly objective figure.
The most important pivot, therefore, is not moving from one dimension to four. It is accepting that any valuation of a software asset is a construction, and that a well-documented construction, with its fragilities named and its code versioned, weighs more in a discussion than a mechanical-looking calculation that hides its assumptions.
For anyone coding with an AI and wondering how to talk about the value of their work, the real question isn't "how much is it worth". It's "across how many distinct regimes of value am I prepared to defend the number I put forward". With one, we get torn apart by the first serious critic. With four, independently argued and honestly summed, we stand a chance.
What you can copy into your project
Four directly reusable elements, in order of implementation ease:
- The `consolidate(dims)` function (twelve TS lines) — sums without weighting, null-safe, tracks which dimensions were used in each calculation
- The `scope_couvert_pct` pattern — each value source declares explicitly what percentage of the scope it covers, which makes deduplication mechanical rather than eyeballed
- The `valorisation_snapshots` schema with `snapshot_date UNIQUE` — one dated point per week, auditable later
- The weekly cron that aggregates contacts/leads/enrollments live and writes Monday morning's snapshot
And one rule: any aggregated valuation is a convention, never a fact. Own it in your dashboard (visible methodology note), document every assumption, date every score. An honest model with named fragilities weighs more than a precise number that hides its assumptions.
And you — which dimensions are missing from your current measure? I read the comments.
This article is part of a series on building a 91,000-line ERP in four weeks with Claude Code for L'Atelier Palissy, an art school. The previous article explained why I abandoned the "lines × day-rate" method. The next one will detail the instrumentation trajectory of the /admin/valorisation module in three waves (schema migration, UI overhaul, monthly cron).
Companion code: `rembrandt-samples/valorisation/` — the `consolidate(dims)` pattern, the schema, and the LOC guardrail. MIT, copy-pastable.
This article was originally published by DEV Community and written by Michel Faure.