We’re starting to get inquiries about complexity. The key questions are how to evaluate complexity in an IT organization and, consequently, how to evaluate its impact on the availability and performance of applications. Evaluating complexity wouldn’t be like evaluating the maturity of IT processes, which amounts to fixing what’s broken; it would be more like preventive maintenance: understanding what’s going to break soon and taking action to prevent the failure.

The volume of applications and services certainly has something to do with complexity. Watts Humphrey said that code size (in KLOC: thousands of lines of code) doubles every two years, largely driven by the increase in hardware capacity and speed, and this is easily validated by the evolution of operating systems over the past few years. It stands to reason that, if defect density stays roughly constant, the total number of errors in the code also doubles every two years.
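
To make that doubling arithmetic concrete, here is a minimal sketch. The starting code base size and the constant defect density are illustrative assumptions, not data:

```python
# Illustrative only: project code size and latent defects under
# Humphrey's "code size doubles every two years" observation.
# The starting size and defect density below are assumptions.

BASE_KLOC = 1_000        # assumed current code base size, in KLOC
DEFECTS_PER_KLOC = 5.0   # assumed constant defect density

def projected_kloc(years: float) -> float:
    """Code size after `years` years, doubling every two years."""
    return BASE_KLOC * 2 ** (years / 2)

def projected_defects(years: float) -> float:
    """Latent defects, assuming defect density stays constant."""
    return projected_kloc(years) * DEFECTS_PER_KLOC

for y in (0, 2, 4, 6, 8, 10):
    print(f"year {y:2d}: {projected_kloc(y):9,.0f} KLOC, "
          f"{projected_defects(y):11,.0f} latent defects")
```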

But code is not the only cause of error: change, configuration, and capacity are right there, too. Intuitively, the chance of an error in change and configuration would depend on the diversity of infrastructure components and on the volume of changes; capacity issues would depend on the same parameters.
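
As a rough illustration of that intuition, here is a toy model in which expected change-related incidents grow with both change volume and component diversity. The failure rates and the linear diversity penalty are assumptions made purely for illustration:

```python
# Toy model of the intuition above: expected change/configuration
# incidents grow with both change volume and component diversity.
# The base rate and diversity penalty are illustrative assumptions.

BASE_FAILURE_RATE = 0.02   # assumed failure probability per change
DIVERSITY_PENALTY = 0.005  # assumed added risk per distinct component type

def expected_incidents(changes_per_month: int, component_types: int) -> float:
    """Expected failed changes per month under the toy model."""
    per_change_risk = BASE_FAILURE_RATE + DIVERSITY_PENALTY * component_types
    return changes_per_month * min(per_change_risk, 1.0)

# Same change volume, homogeneous shop vs. highly diverse one:
print(expected_incidents(changes_per_month=200, component_types=5))   # ~9
print(expected_incidents(changes_per_month=200, component_types=40))  # ~44
```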

There is also a subjective aspect to complexity: I’m sure that my grandmother would have found an iPhone extremely complex, but my granddaughter finds it extremely simple. There are obviously human, cultural, and organizational factors in evaluating complexity.

Can we define a “complexity index,” should we turn to an evaluation model with all its subjectivity, or is the whole thing a wild goose chase?

One approach that I’m contemplating right now is to measure complexity not directly but through its consequences, much as you could evaluate the pressure of your foot on the accelerator by measuring the car’s speed. For example, we could use metrics like support budget spending; the ratio of support staff to servers and applications; code size; the frequency of change requests; the time to resolve a category 1 issue; the deployment rate of new services; or the time spent on unplanned work in infrastructure and operations (I&O). The list still needs to be drawn up and validated, but each of these metrics seems to capture the relative complexity of a given IT environment by measuring its effects. Since complexity is a relative notion, this would not be a measure of “absolute complexity” but a measure that is meaningful for a specific organization.
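
To make the idea concrete, here is a minimal sketch of how such a consequence-based index might be computed. The metric names, baselines, and weights are all hypothetical; the key point is that each metric is normalized against the organization’s own baseline, so the result is relative, not absolute:

```python
# Hypothetical sketch of a consequence-based "complexity index".
# Each metric is normalized against the organization's own baseline,
# so the index tracks relative drift, not absolute complexity.
# All metric names, baselines, and weights below are assumptions.

BASELINE = {  # values observed in a reference period (assumed)
    "support_spend_k_usd": 500,
    "staff_per_100_servers": 4.0,
    "kloc": 2_000,
    "change_requests_per_month": 120,
    "hours_to_resolve_cat1": 6.0,
    "unplanned_work_pct": 20.0,
}

WEIGHTS = {  # relative importance of each metric (assumed, sums to 1.0)
    "support_spend_k_usd": 0.20,
    "staff_per_100_servers": 0.15,
    "kloc": 0.15,
    "change_requests_per_month": 0.15,
    "hours_to_resolve_cat1": 0.20,
    "unplanned_work_pct": 0.15,
}

def complexity_index(current: dict[str, float]) -> float:
    """Weighted average of current/baseline ratios; 1.0 = no drift."""
    return sum(
        WEIGHTS[name] * current[name] / BASELINE[name]
        for name in BASELINE
    )

# Example: a year later, most consequence metrics have crept upward.
today = {
    "support_spend_k_usd": 620,
    "staff_per_100_servers": 4.6,
    "kloc": 2_600,
    "change_requests_per_month": 150,
    "hours_to_resolve_cat1": 8.0,
    "unplanned_work_pct": 28.0,
}
print(f"complexity index: {complexity_index(today):.2f}")  # ~1.28, i.e. drift
```

Tracking such an index over time within the same organization sidesteps the “absolute complexity” problem entirely: only the trend matters.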

Your input and comments on this will be greatly appreciated.