Glenn O'Donnell This is one of the most frequently asked questions I get in my many interactions with people on the topic of CMDB. The short answer is, “A CMS is possible, but the common model of CMDB is not.” I have even been challenged on Twitter that CMDB is nothing more than an endless time sink (follow glennodonnell to see the threads). Sadly, this is a common perception that is fueled by the many failures resulting from an unrealistic view of CMDB as a monolithic database.

Herein lie the real gremlins of the CMDB and the primary reason Carlos Casanova and I set out to write The CMDB Imperative in late 2007. Yes, the book is now available from your favorite book retailer <wink!> (including Kindle <another wink!>)! The main problem with CMDB is the term itself. The right way to think about the configuration management database (CMDB) is not to think of it as a database at all. It is an organized collection of databases and other data sources that are very likely to be in multiple formats in multiple places in multiple tools from multiple vendors. Some may be in a home-grown Oracle database, some in an HP network management tool, some in an EMC storage tool, some in your HR database, etc.

A monolithic CMDB has several problems, most of them around the challenges of maintaining accurate contents. It might be good to highlight this issue with a little story.

My team and I built our first CMDB in 1986, although we didn’t call it a CMDB. We were too young and naïve and none of this had yet evolved to a reasonable point. It was more an asset database than a CMDB. It was a decent model of our newly complex environment of distributed Sun workstations and servers, but it was a grand failure. We didn’t capture the many relationships needed in a real CMDB, although that was not our fatal mistake. Our youthful naiveté led us to believe we could maintain its accuracy. It was all manual. We had no automated means to populate our CMDB to reconcile it with reality. We quickly abandoned it, but we did learn from that debacle.

Our “CMDB” failed because it was what many beleaguered CMDBs are today – what I like to call unified ambiguity. These CMDBs are consolidated repositories, but the contents are of dubious quality. If you can’t trust your CMDB, you are in trouble. Your decisions are driven by the CMDB. Good data yields good decisions. Bad data results in flawed decisions and sloppy execution. When you make the wrong decision, you need to go back and do it all over again, usually more slowly because you now also have to gather and verify the state of your relevant environment. In The CMDB Imperative, we point out other flaws of the monolithic model, but unified ambiguity is the greatest curse against the CMDB.

So, if the CMDB is no good, what is a CMS and how is it so much better? ITIL v3 introduced the concept of a configuration management system (CMS) that finally abandons the notion of a consolidated data repository. Instead, the CMS incorporates many CMDBs – what we should really call management data repositories (MDRs).

An MDR is a tightly focused information repository. It is not an attempt to build a model of the entire world. It maintains a view specific to its focused domain. You will have one for your network, one for your servers, maybe another for your virtual servers, one for your mainframe(s), one for your distributed applications, and so on. An MDR has two distinct benefits: it is fine-tuned for that domain and it is far easier to keep it accurate.

A network router has different attributes than a virtual server, and a J2EE application is very different from a SAN, so it is best to have MDRs dedicated to each and tailored to reflect the precise model of each. A "one size fits all" model is a compromise, the proverbial jack of all trades and master of none. An MDR must be a master of one.
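To make the contrast concrete, here is a minimal sketch of domain-tailored MDR records. The class names and attributes are purely illustrative assumptions, not any vendor's schema; the point is that each record type models only what matters in its own domain.

```python
from dataclasses import dataclass

# Hypothetical records: each MDR captures only the attributes
# relevant to its domain, instead of one generic CI table.

@dataclass
class RouterRecord:          # would live in the network MDR
    hostname: str
    os_version: str
    interface_count: int

@dataclass
class VirtualServerRecord:   # would live in the virtual-server MDR
    vm_name: str
    hypervisor_host: str
    vcpu_count: int
    memory_gb: int

# A router and a virtual server share almost nothing, so a single
# "one size fits all" schema would be a compromise for both.
router = RouterRecord("edge-rtr-01", "12.4", 24)
vm = VirtualServerRecord("app-vm-07", "esx-host-03", 4, 16)
print(router.hostname, vm.vm_name)
```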

By restricting an MDR to a specific domain, we can keep it more accurate. Most notably, we can implement discovery technology to “learn the truth” about the domain. This has quickly become a key focal point of the CMS, especially with regard to practical solutions you can buy and implement today. With discovery, you can ensure a high degree of accuracy in this particular domain. Each MDR can be its own pocket of the truth. Pockets of the truth are far superior to unified ambiguity!
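The reconciliation step that discovery enables can be sketched in a few lines. This is a toy illustration with hypothetical CI names and attributes: compare what discovery observed against what the MDR currently records, and flag the differences.

```python
# What the MDR believes vs. what discovery just observed.
# All names and values here are hypothetical.
mdr_records = {"srv-01": {"os": "RHEL 5"}, "srv-02": {"os": "RHEL 5"}}
discovered  = {"srv-01": {"os": "RHEL 5"}, "srv-02": {"os": "RHEL 6"},
               "srv-03": {"os": "Windows 2008"}}

# CIs whose recorded attributes no longer match reality.
drift = [ci for ci in mdr_records
         if ci in discovered and discovered[ci] != mdr_records[ci]]

# CIs that exist in reality but were never registered in the MDR.
unknown = [ci for ci in discovered if ci not in mdr_records]

print("drifted:", drift)
print("unknown:", unknown)
```

Running this flags `srv-02` as drifted and `srv-03` as unknown; without discovery feeding the comparison, both errors would sit silently in the MDR.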

Here we get to the meat of the "What's real?" element of the CMS versus what is yet to come. The whole vision of federating the MDRs into a cohesive CMS is still at least a year away, probably more for most of us. There are indeed actions we can take now and technologies we can buy now that will help us in our journey. My June 30, 2008 Forrester report, A Federated CMDB Remains Distant, But Start Now, addressed this very issue. At the top of the realistic steps we can take is automated discovery. Other steps, of course, involve planning. If you try to embark on a CMDB/CMS without the right planning, executive support, and staffing, I guarantee doom for your entire mission. Do not attempt this as a "lone wolf" or "skunk works" within your organization. Both are politically dangerous tactics, especially in economic times like these!

At nearly the same time ITIL v3 was published, a group of leading CMDB vendors collaborated on the release of the CMDBf specification. Its name, CMDB Federation Working Group, may not have been very original, but the work that came out of the CMDBf is profound. Some of the language is a bit different between CMDBf and ITIL v3, but the philosophies are perfectly aligned. CMDBf puts more meat on the bones of what ITIL v3 espouses, as it clearly defines the technical details of how tools will share their data.

ITIL v3 tells us that the data will exist in pockets. CMDBf tells us how to link these pockets. In the ultimate view of federation, we don't try to copy data from one MDR to another. Instead, we "point" to the data using metadata and web services. This is well defined in the CMDBf specification. As we navigate the data for the intended purpose, we jump from MDR to MDR and back as defined by abstraction models that represent the higher-level use case we are executing.
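The "point, don't copy" idea can be sketched as follows. In a real CMDBf implementation the pointers would resolve through web-service query calls; here each MDR is just a dictionary, and every name is a hypothetical stand-in.

```python
# Hypothetical MDRs keyed by domain; in practice each would be a
# separate tool exposing a CMDBf-style query service.
MDRS = {
    "network": {"rtr-01": {"type": "router", "ports": 24}},
    "servers": {"srv-07": {"type": "server", "os": "Linux"}},
}

def resolve(pointer):
    """Follow an (mdr, ci_id) pointer to the authoritative record.

    The data stays in its home MDR; we fetch it on demand rather
    than copying it into a central repository.
    """
    mdr_name, ci_id = pointer
    return MDRS[mdr_name][ci_id]

# An abstraction model stores only pointers, not the data itself.
app_model = [("network", "rtr-01"), ("servers", "srv-07")]
for ptr in app_model:
    print(ptr, "->", resolve(ptr))
```

The design choice this mimics is the key one in federation: each MDR remains the single source of truth for its domain, and the higher-level model only ever holds references.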

For example, an application will likely be an abstraction model that tells us how the various servers and other elements constitute that application. I show a highly simplified n-tier application example here.

[Figure: Application model]

The application model contains several infrastructure elements and their relationships. We need not pull all of the data about each of these into a single point. The application model will link to the appropriate MDR(s) for the network devices, which hold the truth about those network devices. It will also link to the relevant server MDR(s), which contain the server truth. Other MDRs are included, but the abstraction that represents the application is relatively simple and concise. The means by which these pieces are assembled gives us the full picture and, therefore, the full value of what we need to do.
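An abstraction model like the n-tier example above could be represented as nothing more than a list of relationships among CIs that live in different MDRs. The tier names, CI identifiers, and `@mdr` suffix notation below are all hypothetical; only the topology lives in the model, while the detailed attributes stay in the domain MDRs.

```python
# A hypothetical n-tier application model: just relationships.
# "ci@mdr" marks which MDR holds the truth about that CI.
relationships = [
    ("web-tier", "runs_on",     "srv-01@servers"),
    ("web-tier", "connects_to", "app-tier"),
    ("app-tier", "runs_on",     "srv-02@servers"),
    ("app-tier", "connects_to", "db-tier"),
    ("db-tier",  "runs_on",     "srv-03@servers"),
    ("srv-01@servers", "attached_to", "switch-01@network"),
]

# Walk the model to list every CI the application touches; note how
# small the abstraction is compared with the full MDR contents.
cis = sorted({side for a, _, b in relationships for side in (a, b)})
print(len(cis), "configuration items in the model")
```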

Is this really all possible? Absolutely! We need to wait for a few of the technologies to mature, however. Vendors are moving swiftly, but we will be well into 2010 by the time the various parts of the CMS will fit together. Even then, legacy tools will be a challenge, probably needing some custom federation adapters. Throughout this period, we will experience small steps forward, many of them already in progress. CMDBf support is already emerging from the major vendors. Others will follow.

We must also view these tools as merely enabling the processes we execute. Good process is still the number one missing link in the IT profession. The tools accelerate the process execution. You can do good things faster or do bad things faster. The choice is yours, but the answers are there if you play your cards right.

By Glenn O'Donnell

Check out Glenn's research.