A Conversation With Murray Cantor (Part 1)

 

Enterprise IT runs on billions of dollars of implicit risk, yet most organizations still manage it with deterministic spreadsheets and gut‑feel prioritization. Murray Cantor — PhD mathematician, defense systems architect, and veteran of IBM’s most successful high‑risk projects — thinks there’s a better way. Over three conversations in the fall of 2025, we talked about technical debt, uncertainty, and why IT investment decisions deserve better economics. What follows is drawn from those conversations, thematically arranged and lightly edited for clarity.

 


Managing Uncertainty, Not Optics

I first met Murray at a lean software gathering over a decade ago. At the time, I didn’t fully appreciate his background. Over our recent conversations, I got a fuller picture.

Murray Cantor: I have a PhD in mathematics, and I was somehow successful at managing novel, high‑risk projects. I had instincts for this. I would accept a risky project because my feeling was, if I took on those projects and failed, nobody would be surprised. And if I succeeded, I’d be a hero. There was no downside. Other people were afraid to take them.

Charles Betz: And the results spoke for themselves?

Cantor: I actually got audited at IBM because they didn’t believe my productivity numbers. And the same thing happened at Boeing. I was consulting with them on a satellite ground station for intelligence agencies. It was the most successful satellite ground station program they’d ever seen. Boeing sent me to talk to the colonels at NRO [National Reconnaissance Office] about how it was going. I told them what I was doing, and they said, “Oh, thank God — someone’s finally doing that.”

Betz: What were you doing, specifically?

Cantor: I would deliberately manage the things that would reduce uncertainty. I wouldn’t worry about earned value measures or trying to claim earned value early. I was focused on reducing uncertainty. And I organized teams so that their communications would emulate the system communications.

Betz: So you were doing the inverse Conway maneuver before it was named.

Cantor: Exactly. I structured teams around system architecture. And I insisted on continuous integration from day one. Boeing’s original plan was to build components for four years and then have integration day. I just thought: intuitively, that can’t possibly work.

Murray later became an IBM Distinguished Engineer, served as IBM’s representative to the SysML Partners, and built systems engineering methodology at Rational Software. He is currently writing a book on probabilistic methods for engineering investment decisions.


“Stop Calling It Debt — It’s A Liability”

When I reached out to Murray, I was (and remain) deep into technical debt modeling — building system dynamics simulations, collecting survey data, and talking to CIOs. Technical debt is, after generative AI, one of the most frequently encountered topics among Forrester’s senior leadership clients. Murray immediately had thoughts — starting with the term itself.

Cantor: “Technical debt” is not a well‑defined term, frankly. I’ve always felt it was too imprecise for people to find useful. So what I did a while back was start talking about the economic impact of having known issues with your code. I wrote about this for Cutter, on the notion of technical liability.

Betz: How is that different from debt?

Cantor: When you ship code, how much liability are you assuming in terms of future cost and the impact of bad outcomes? A good example is predicting warranty costs. The question is: If you ship something knowing there are issues, is shipping before you address them more economically advantageous than fixing them? Because fixing them takes time, takes money. You may miss a market window.

What Microsoft did with Windows 3.0 was intentionally release a product carrying significant technical debt in order to capture the market. The economics justified shipping it that way.

Betz: Because Apple was already out there.

Cantor: Right. And so what you’re essentially doing by shipping code is you’re self‑insuring. You can buy liability insurance for code you’re shipping — people do sell that, though it’s still pretty rare. The idea is: How would you price what an insurance policy should cost for the risk you’re assuming? Because that’s what you’re doing. You’re self‑insuring.

If the cost of that self‑insurance is more than the profit of the code, you shouldn’t ship. If it’s much less, go ahead. Treat it as an economics decision.
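The ship-versus-fix comparison Cantor describes can be sketched as a simple expected-cost calculation. This is an illustration of the self-insurance framing, not a method he prescribes; all the function names and dollar figures below are hypothetical.

```python
# Illustrative sketch of the self-insurance framing: all numbers
# are hypothetical. Real estimates would come from defect data,
# warranty history, and market forecasts.

def self_insurance_premium(p_incident, expected_loss):
    """Expected cost of the liability assumed by shipping:
    probability of a bad outcome times its expected loss."""
    return p_incident * expected_loss

def should_ship_now(profit_if_ship_now, premium,
                    profit_if_fix_first, fix_cost):
    """Ship now if expected profit net of the self-insurance
    premium beats the fix-first alternative."""
    ship_now = profit_if_ship_now - premium
    fix_first = profit_if_fix_first - fix_cost
    return ship_now > fix_first

# A 15% chance of a $2M bad outcome prices the "premium" at $300k.
premium = self_insurance_premium(p_incident=0.15, expected_loss=2_000_000)

# Fixing first costs $400k and shrinks the market window,
# cutting expected profit from $3M to $2.2M.
decision = should_ship_now(
    profit_if_ship_now=3_000_000,
    premium=premium,
    profit_if_fix_first=2_200_000,
    fix_cost=400_000,
)
print(decision)  # True: under these assumptions, ship with known issues
```

With these (invented) numbers, shipping nets $2.7M against $1.8M for fixing first, so carrying the liability is the better bet; flip the market-window assumption and the answer flips with it.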

Betz: I’ve been using a thought experiment with clients: Imagine that time stops; no new business activity. How much would you need to spend to bring every system up to date? A major insurance company told me the number was $550 million. A CTO at a major international bank told me that in some areas, non‑optional technical debt consumes upward of 60% of capacity. We’ve started calling that threshold technical bankruptcy.

Cantor: So what they’re really saying is it’s a future liability issue. The cost of addressing the problems they released is excessive by some measure. That makes sense. They have a real monetary value, and it’s a kind of liability — it fits into the framework I’m describing.

Betz: It does, but it’s hard to quantify.

Cantor: But they can quantify it. This is exactly what we could help them do — quantify it to the point where they understand that it’s a return-on-investment question. How much money should we spend on this?


Uncertainty Is Where Value Actually Comes From

Murray’s thinking rests on a simple but uncomfortable premise: IT investments are investments — and uncertainty is not a nuisance variable.

Cantor: My view is that when you spend money on any IT system or product, it’s an investment, and you should treat it like one. But not only that — it’s an investment fraught with uncertainty.

From a financial point of view, uncertainty provides the opportunity for value. The idea is you want to invest in innovative stuff. Well, by definition, that’s uncertain, but it gives you more opportunity to create more value.

Betz: Don Reinertsen has emphasized this, too — that uncertainty is actually economically valuable.

Cantor: Exactly.


Projects As Options, Not Binary Bets

Cantor: And the argument that a project is worth nothing until the day it’s delivered is ridiculous. Because the day before delivery, it clearly has value. Someone would buy it. How about two weeks before delivery? A month?

People actually reason about this when they do acquisitions. Frankly, an in‑flight project behaves more like a genuine financial option than most of the "real options" the literature describes. It's exactly like a call option: the right, not the obligation, to take on an existing project, continue development to delivery, and accrue the future benefits.
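Cantor's call-option framing can be made concrete with a toy valuation: the holder funds the remaining development only when the delivered benefit exceeds the remaining cost, and abandons otherwise. The scenarios and payoffs below are hypothetical illustrations, not a valuation model from the conversation.

```python
# Toy sketch: value an in-flight project as a call option.
# Scenario probabilities and payoffs are invented for illustration.

def option_value(scenarios, remaining_cost):
    """Expected value of holding the option, exercising (continuing
    to delivery) only when payoff exceeds remaining cost.
    scenarios: list of (probability, delivered_benefit) pairs."""
    value = 0.0
    for prob, benefit in scenarios:
        # max(..., 0) is the abandonment floor: never forced to finish
        value += prob * max(benefit - remaining_cost, 0.0)
    return value

# Two weeks before delivery: little cost left, outcomes well known.
near_done = option_value([(0.9, 5_000_000), (0.1, 1_000_000)],
                         remaining_cost=200_000)

# A year out: more cost remaining, wider spread of outcomes.
early = option_value([(0.4, 8_000_000), (0.6, 500_000)],
                     remaining_cost=2_000_000)

print(round(near_done), round(early))  # 4400000 2400000
```

Both values are well above zero before delivery day, which is the point: the project is never worth nothing the day before it ships, and the option framing prices exactly how much it is worth along the way.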


If technical debt is really technical liability, and if uncertainty is the source of both risk and value, then the uncomfortable implication is obvious: Most IT organizations are making multi‑million‑dollar investment decisions without the mathematical tools those decisions require. Many organizations are managing one of their largest economic exposures — technical fragility — with the wrong mental model. When technical debt is treated as a metaphor, it stays in the IT basement, argued over in moral terms and deferred until it becomes existential. When it is treated as liability, it moves into the realm of capital allocation, risk appetite, and executive accountability.

This reframing does not make the problem easier, but it makes it governable. It forces leaders to confront a hard truth: Some technical liabilities are rational to carry, even strategically advantageous, but only if the organization understands the risk it is assuming and why. In an environment defined by digital complexity and accelerating change, managing uncertainty is no longer a secondary concern. It is the work.

See more about Murray and his current work at murraycantor.com.

In Part 2, we turn to the missing math — and why, once you accept uncertainty, there’s no avoiding it.