I receive a lot of inquiries from clients about an EA maturity/assessment model. It’s proven to be a common and excellent way to track EA’s progress and influence plans — so common that we dedicated an entire report to it in our EA Practice Playbook, and we have an upcoming webinar for enterprise architects who want to build/customize their own model. The usual backstory is that an EA leader wants (or has been asked) to create a model from scratch or to customize an external model to fit the organization; the split between those two options is roughly 50/50.

And what starts as a simple over-the-weekend project quickly becomes a frustrating struggle. The criteria pile up fast — after all, EA does a lot of things. The granularity is inconsistent — you can measure a piece of a process or the larger process it belongs to. The scoring scale causes frustration — it could measure many different aspects of each criterion, and it ends up either too vague to guide answers or too specific to apply everywhere. And when compared to other models, yours inevitably looks vastly different from each of them. It isn’t long before other day-to-day priorities put the effort on the back burner.

As one who has gone through the exercise a few times, I’ve got five tips that can help you move along faster and complete your model before other priorities swallow it up:

  1. Start by creating criteria “areas.” These “areas” should match how your organization deconstructs EA into digestible components. For some organizations, that means architecture domains; for others, value areas; in Forrester’s case, it was the “EA Archetype.” Then think about the activities/outputs that EA owns in each of those “areas.”
  2. Define how your organization perceives progress. Progress is a vague term — and you must decide what it means. Is it more repeatability? Increased satisfaction? Flexibility? Some criteria “areas” might require a different definition of progress — or several at once. That definition is what shapes how you craft your scale.
  3. Use a few existing models to check for gaps. Once you have your criteria and your definition of progress, use existing models to see how complete yours is. You’ll usually find some criteria you didn’t think to include — maybe because your practice doesn’t do them yet. You want those criteria in there — it’s important to call out what you don’t do yet. Tweak the criteria you borrow to fit your language and how you’ve deconstructed EA. Example models include E2AMM, NASCIO, and Forrester’s, to name only a few. You’ll notice they’re all different.
  4. Don’t aim to create an individual scale for each criterion. It will take a long time, and you’ll probably never be happy with it. Instead, create a few generic scales based on how you’ve defined progress, varying only in their phrasing. That phrasing will depend on whether you’re measuring a process, a piece of tangible EA content, a role or board, an interaction, etc. Reusing a handful of scales also makes the assessment easier to take, because participants don’t have to reread a new scale for every criterion.
  5. Let a “rationale” box be your catch-all in place of a precise scale. It’s easy to go overboard and want the scale itself to describe the specific problems that need fixing. And if you take my advice in tip No. 4, you may feel the scale is too high-level. Add a rationale box that captures two to three sentences about why you and/or your participants in the exercise decided on that score, so that your scale doesn’t have to predict or influence their answer. You’ll find out what you don’t know that way — and that’s the point of the exercise. (The sketch after this list shows one way these pieces fit together.)
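To make the structure behind tips 1, 4, and 5 concrete, here is a minimal sketch of how the pieces could hang together, written as Python data structures. All of the names, areas, and scale wordings are hypothetical examples of mine, not a Forrester artifact; the point is only the shape: criteria grouped into “areas,” a few reusable generic scales keyed by what a criterion measures, and a rationale captured alongside every score.

```python
# Minimal sketch of an EA assessment model (hypothetical names and wording).
from dataclasses import dataclass, field

# A few generic 1-5 scales, keyed by what the criterion measures (tip 4).
# The wordings here are illustrative placeholders, not prescribed phrasing.
GENERIC_SCALES = {
    "process": "1 = ad hoc ... 5 = repeatable and continuously improved",
    "content": "1 = nonexistent ... 5 = current, governed, and widely used",
    "role_or_board": "1 = undefined ... 5 = established with clear decision rights",
    "interaction": "1 = reactive ... 5 = proactive and valued by stakeholders",
}

@dataclass
class Criterion:
    name: str
    measures: str            # one of the GENERIC_SCALES keys
    score: int | None = None # 1-5 against the matching generic scale
    rationale: str = ""      # two to three sentences on why this score (tip 5)

@dataclass
class CriteriaArea:
    name: str                                        # e.g., an architecture domain or value area (tip 1)
    criteria: list[Criterion] = field(default_factory=list)

# Example area, including a criterion the practice doesn't do yet (tip 3).
business_arch = CriteriaArea("Business architecture", [
    Criterion("Capability mapping", measures="process", score=2,
              rationale="Maps exist for two divisions but are updated ad hoc."),
    Criterion("Business capability model", measures="content", score=1,
              rationale="Not yet produced; planned for next fiscal year."),
])

for c in business_arch.criteria:
    print(f"{business_arch.name} / {c.name}: {c.score} "
          f"(scale: {GENERIC_SCALES[c.measures]})")
```

If you build the model in a spreadsheet instead, the same shape is simply one row per criterion with columns for area, what it measures, the score, and the rationale.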

What are your tips and experiences for a thorough (but pragmatic) approach to creating an assessment model?