Technology growth is exponential. We all know Moore's Law, by which the density of transistors on a chip doubles every two years; but there is also Watts Humphrey's observation that the size of software doubles every two years, Nielsen's Law, by which the Internet bandwidth available to users doubles every two years, and many others concerning storage, computing speed, and data center power consumption. IT organizations, and especially IT operations, must cope with this influx of technology, which brings more and more services to the business on top of the legacy services and technology they already manage. I believe the two most important roadblocks preventing IT from optimizing its costs are in fact diversity and complexity. Cloud computing, whether SaaS or IaaS, is going to add diversity and complexity, as is virtualization in its current form. This is illustrated by the following chart, which compiles answers to the question: "Approximately how many physical servers with the following processor types does your firm operate that you know about?"
While virtualization could potentially reduce the number of servers in each category, it addresses neither the diversity of servers nor the complexity of the services running on these diverse technologies.
The diversity and complexity issue becomes even more problematic when we look at IT management software. In a majority of IT operations, the IT management software portfolio has been built up organically over time, addressing one issue at a time. Very rarely, except in the most mature organizations, do we find an overall IT management strategy. This creates a number of problems, but the most important is that there is no way to optimize IT operations without a global view of service and infrastructure effectiveness (costs vs. service levels); in addition, the diversity and complexity of the data collected, a direct result of tool diversity and specificity, prevent us from obtaining accurate global information. Throwing people at the problem is not a solution either: The environment has become far too complex for any individual to have a clear view of an entire IT organization.

In the past, each time we were confronted with the problem of managing an increasingly complex physical layer, we abstracted it into a "logical" one that presented a unified view of the technology; the first and best examples are operating systems and file management solutions. Instead of using automation piecemeal in workload automation, data center automation, or run book automation, we should use "IT automation" to abstract the physical layer of IT operations. If we can collect and normalize data from all sources, analyze them, and automate the response, we have already taken a step forward. If we then transform this information again to serve the higher levels of IT, we have a solution that provides the basic elements for IT optimization. A recent visit to Network Automation in Los Angeles with my friend and colleague Glenn O'Donnell reinforced my belief that this is the only mid-term hope if we want IT to become more efficient.
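To make the collect-normalize-analyze-automate idea concrete, here is a minimal sketch in Python. All of the names here (the Event record, the per-tool normalizers, the threshold, the remediation action) are hypothetical illustrations, not a reference to any real product: the point is only that once heterogeneous tool outputs are normalized into one logical schema, analysis and automated response can be written once, independent of the physical sources.

```python
from dataclasses import dataclass

# Hypothetical normalized record: one logical schema for all monitoring sources.
@dataclass
class Event:
    source: str   # which tool produced the raw data
    host: str
    metric: str
    value: float

# Each tool emits data in its own shape; a small adapter maps it to Event.
def normalize_snmp(raw: dict) -> Event:
    return Event("snmp", raw["device"], raw["oid_name"], float(raw["val"]))

def normalize_agent(raw: dict) -> Event:
    return Event("agent", raw["hostname"], raw["counter"], float(raw["reading"]))

# Analysis is written once, against the logical layer, not per tool.
def analyze(events: list[Event], threshold: float = 90.0) -> list[Event]:
    return [e for e in events if e.metric == "cpu_pct" and e.value > threshold]

# Placeholder automated response: emit remediation actions for flagged hosts.
def respond(alerts: list[Event]) -> list[str]:
    return [f"restart-workload on {a.host}" for a in alerts]

# Two sources, two formats, one pipeline.
raw_snmp = [{"device": "db01", "oid_name": "cpu_pct", "val": "95"}]
raw_agent = [{"hostname": "web02", "counter": "cpu_pct", "reading": 42.0}]

events = [normalize_snmp(r) for r in raw_snmp] + [normalize_agent(r) for r in raw_agent]
actions = respond(analyze(events))
print(actions)  # ['restart-workload on db01']
```

A real implementation would of course face far messier data and far more sources, but the shape is the same: the adapters absorb the diversity at the edge, so everything above them sees a single unified view.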
As usual, I would love to hear your comments about this.