A Conversation With Murray Cantor (Part 2)

In Part 1, Murray Cantor reframed technical debt as economic liability and argued that uncertainty is not an annoyance to be minimized, but the core feature of IT investment. Once you accept that premise, a deeper question emerges: how should organizations reason — and decide — under uncertainty?


“You Are Inexorably Drawn to Bayesian Thinking”

Murray Cantor: If you believe these three principles — think like an investor, embrace uncertainty, take a systems perspective — you are inexorably drawn to Bayesian thinking. Because what are you doing? Your initial estimates are priors, and then you update them with information. It’s exactly Bayesian thinking.

Betz: That’s been the missing link for me — how to handle the initialization parameters when I don’t have 30 clean data points.

Murray Cantor: The whole point of Bayesian reasoning is that it’s all about what you believe, and you use data to update your beliefs. Data alone, without beliefs, doesn’t let you learn anything.

Your model, your doom‑loop diagram [Charlie: see Is Your IT Organization A Ponzi Scheme?], your conjectures — don’t discount those. In a Bayesian framework, that’s your prior belief. And a prior is where all reasoning begins.

What’s changed is that the computation is now feasible. I can run beautiful Bayesian analysis on my MacBook Air in a couple of minutes. Monte Carlo in Python is a few lines of code. The math was never the barrier — the computing was. That barrier is gone.
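Cantor’s “few lines of code” claim is easy to make concrete. Below is a minimal sketch, with invented numbers, of updating a prior belief about a defect-escape rate and summarizing the posterior by Monte Carlo:

```python
# Bayesian update of a belief about a defect-escape rate, summarized by
# Monte Carlo. All numbers are invented for illustration.
import random

random.seed(42)

# Prior: Beta(2, 8) pseudo-counts, i.e. "we believe roughly 20% of
# changes let a defect escape."
prior_a, prior_b = 2, 8
# Observed data: 3 escapes in 10 changes.
escapes, clean = 3, 7
# Beta is conjugate to binomial data, so the posterior is simply the
# prior pseudo-counts plus the observations: Beta(5, 15).
post_a, post_b = prior_a + escapes, prior_b + clean

# Monte Carlo: draw from the posterior and read off summaries.
samples = sorted(random.betavariate(post_a, post_b) for _ in range(100_000))
mean = sum(samples) / len(samples)
lo, hi = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"posterior mean escape rate: {mean:.3f}")
print(f"90% credible interval: [{lo:.3f}, {hi:.3f}]")
```

Note that the prior carries the reasoning until data accumulates; ten observations already shift the belief, no 30-point sample required.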


Sparse Data Is Not a Showstopper

Betz: I’ve been collecting sparse data points from clients — one company says their technical debt is 12% of the IT budget, another says 17%. These aren’t going to add up to a frequentist sample.

Murray Cantor: Do you know what a Bayesian net is? This is what you need. It models the relationships between uncertain quantities using conditional probabilities. I have examples where a frequentist approach takes 100 observations and I can get the same answer in five.

You should look at AgenaRisk — it’s a tool built by colleagues of Judea Pearl, who won the Turing Award. Pearl’s book The Book of Why is the conceptual scaffolding. The tool ships with examples of exactly this kind of sparse‑observation reasoning.
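AgenaRisk is a full modeling tool, but the mechanics it automates can be sketched by hand. A toy two-node net, with conditional probabilities invented purely for illustration, shows how even a single observation updates a belief:

```python
# Toy two-node Bayesian net: TechDebtHigh -> OutageThisQuarter.
# All probabilities below are invented for illustration.
p_debt_high = 0.30                            # prior belief
p_outage_given = {True: 0.60, False: 0.10}    # P(outage | debt state)

# One sparse observation: an outage occurred. Invert with Bayes' rule:
#   P(debt | outage) = P(outage | debt) * P(debt) / P(outage)
p_outage = (p_outage_given[True] * p_debt_high
            + p_outage_given[False] * (1 - p_debt_high))
posterior = p_outage_given[True] * p_debt_high / p_outage
print(f"P(high debt | outage observed) = {posterior:.2f}")
```

One observed outage moves the belief from 30% to 72%; a handful of observations like this is what lets a Bayesian net converge where a frequentist sample would still be too small.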


Forecast It Like a Hurricane

I’ve often compared IT portfolio management to meteorology. Murray explained exactly why the analogy works.

Betz: I use the comparison to meteorology. Meteorologists constantly run forecasts based on models. This is how they see hurricanes approaching. Technical debt, in worst‑case scenarios, can be the economic equivalent of a hurricane.

Murray Cantor: Do you know how they actually run those models? The problem is that the models aren’t very stable with respect to initial conditions. Small changes in the initial condition can lead to very different outcomes two weeks later.

Betz: The butterfly effect.

Murray Cantor: Exactly the same math. So what they do is run the same model a whole bunch of times, varying the initial conditions just a little. These are called ensemble methods. And what they get is a set of outcomes, which they turn into a probability distribution. That’s how they build those hurricane forecast cones.

It’s the same principle. Every time you have uncertainty, you should think about probabilities. The question is how you derive the probability distribution.
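The ensemble procedure Cantor describes can be sketched in miniature. Here the logistic map stands in for a weather model, an illustrative assumption only: perturb the initial condition slightly, run the model many times, and summarize the spread of outcomes:

```python
# Ensemble forecasting in miniature: run one deterministic but chaotic
# model many times with slightly perturbed initial conditions, then turn
# the set of outcomes into a distribution. The logistic map stands in
# for a weather model here; it is illustrative, not meteorology.
import random

random.seed(7)

def forecast(x0: float, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1 - x)   # chaotic regime of the logistic map
    return x

base = 0.500
ensemble = sorted(forecast(base + random.gauss(0, 1e-4)) for _ in range(1000))
# The "forecast cone": the middle 80% of ensemble members.
lo, hi = ensemble[100], ensemble[899]
print(f"80% of ensemble members land in [{lo:.3f}, {hi:.3f}]")
```

Tiny perturbations in the starting point produce widely scattered outcomes, which is exactly why the honest output is an interval, not a point.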

And what this model should capture is scale. If you have 100 users, a defect may never show up. If you have 10,000 users, it’ll show up occasionally. If you have a million users, it becomes urgent. If a defect has a one‑in‑10,000 chance of triggering per execution and the system runs 10 million times, the unlikely defect becomes inevitable.
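The arithmetic behind that claim is one line: the probability of at least one occurrence in n independent runs is 1 − (1 − p)^n. A short sketch using Cantor’s one-in-10,000 figure:

```python
# Scale turns a rare defect into a certainty.
# P(at least one occurrence in n runs) = 1 - (1 - p)^n
p = 1e-4                       # one-in-10,000 chance per execution
results = {}
for n in (100, 10_000, 1_000_000, 10_000_000):
    results[n] = 1 - (1 - p) ** n
    print(f"{n:>10,} runs -> P(at least one occurrence) = {results[n]:.4f}")
```

At 100 runs the defect is about a 1% event; at 10,000 runs it is more likely than not; at 10 million runs it is effectively certain.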


Agile’s Unfinished Business

The agile movement correctly identified uncertainty as the core problem of software development. Murray’s critique is that it stopped just short of formalizing it.

Murray Cantor: The agile movement did respond to some really important issues. I get that. But then what’s funny is that they themselves stopped being agile. It became: follow the process.

Betz: They went down the same road as ITIL. SAFe and ITIL look exactly alike when viewed through the right lenses.

Murray Cantor: Exactly. And they didn’t follow their own philosophy. What is agility, really? It’s the ability to respond well to changes in your environment. And that still hasn’t changed.

The irony is that agile correctly identified uncertainty as the core challenge of software development — and then refused to formalize it. “No estimates” is a response to bad estimates, which I understand. But the answer isn’t no estimates. It’s better estimates.

Murray Cantor: The naive no‑estimates people say you can’t have point estimates five months out. And of course you can’t — that’s the whole point of working with uncertainties and variances. The answer is you stop doing point estimates, but you still estimate: a range or a distribution, not a single number.

Then there’s the flow camp — apparently the no‑estimates people are fans of Reinertsen and flow measures. And the reason is that they have data, so they can take a frequentist point of view. But if you’re starting a brand‑new program and there are no events yet, that perspective doesn’t work. So you need Bayesian priors.

Betz: In different environments, different levels of certainty are acceptable. I believe Marine Corps training teaches a ‘70% solution’ mindset: don’t wait for perfect certainty; decide and act quickly enough to keep the initiative, because if you try for more certainty, you’ll almost certainly lose the initiative to the enemy.

Murray Cantor: Exactly. Perfect example. And Eisenhower said it: planning is essential, but all plans are useless once the battle is started.


The Point of the Book — and the Moment We’re In

Murray Cantor: Look, only later did I realize what I’d been doing intuitively was Bayesian reasoning. At the beginning of a project, you’re uncertain. At the end, if you’re going to ship the next day, you’re entirely certain. So why aren’t we measuring uncertainty?

You can’t manage what you don’t measure. And how do you measure uncertainty? That’s probability theory. It’s variances, it’s distributions.

The whole point of the book is to get people more comfortable with these concepts. You treat development efforts as an investment. You look at net present value and return on investment using random variables, because the risk of an investment isn’t the discount rate — it’s the uncertainty of the future values. You use Monte Carlo to do the arithmetic, and then you do better, because once you get experience you can update with Bayesian models.
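That workflow, NPV as a random variable computed by Monte Carlo, can be sketched in a few lines. All figures below (costs, cash flows, variances) are invented for illustration:

```python
# NPV as a random variable: sample uncertain cash flows, discount them,
# and read off the probability the investment loses money.
# All figures (cost, flows, variances) are invented for illustration.
import random

random.seed(1)

RATE = 0.08   # discount rate; the risk lives in the cash flows, not here

def npv_draw() -> float:
    cost = random.gauss(1_000_000, 150_000)                      # build cost
    flows = [random.gauss(400_000, 120_000) for _ in range(4)]   # 4 years
    return -cost + sum(f / (1 + RATE) ** (t + 1) for t, f in enumerate(flows))

draws = sorted(npv_draw() for _ in range(50_000))
mean_npv = sum(draws) / len(draws)
p_loss = sum(d < 0 for d in draws) / len(draws)
print(f"expected NPV: {mean_npv:,.0f}")
print(f"P(NPV < 0): {p_loss:.1%}")
```

Reporting P(NPV < 0) alongside the expected value, rather than a single NPV figure, is the shift Cantor is arguing for: the uncertainty is carried in the cash-flow distributions, not hidden in the discount rate.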

Betz: Throughout these conversations, you and I kept returning to the same gap: the math has been available for decades, but the data to feed it was locked in silos or didn’t exist. That’s changing. Observability platforms, DevOps pipelines, code quality tools, and CMDB graph databases are generating the kind of continuous, structured telemetry that probabilistic models need. And GenAI gives all of us access to postdoc‑level research support (see AI As Tool Creator: The Next Frontier In Knowledge Work).

Murray Cantor: Right. The math is now trivial to run. The data is there. The question is: are we willing to think differently about these problems? I think we are. I think we have to be.

Once uncertainty is taken seriously — not as an annoyance, but as the defining feature of IT investment — probabilistic reasoning becomes unavoidable. Bayesian thinking is not an academic preference; it is the logical consequence of making high‑stakes decisions with incomplete information. Organizations already operate on priors, intuition, and partial evidence. The difference is whether those beliefs remain implicit and frozen, or explicit and continuously revised as reality unfolds.

What has changed is not the theory, but the feasibility: the data, compute, and tooling required to reason probabilistically are now commonplace inside large enterprises. The remaining barrier is cultural. Leaders who can reason in ranges rather than certainties, and who can update their convictions without treating it as failure, will make better decisions over time — not because they predict the future more accurately, but because they adapt faster when the future inevitably diverges from plan.

Murray Cantor is a mathematician, systems engineer, and author based in Sedona, Arizona. He previously served as IBM’s representative to the SysML Partners, led systems engineering methodology at Rational Software, and built defense and intelligence systems at TASC and Boeing. See more about him at murraycantor.com.

Charles Betz is a vice president/principal analyst at Forrester Research covering IT management, technical debt, and enterprise architecture.