In the early part of next quarter, I am entering a research phase on a topic I have alluded to many times: techniques for Process Architecture.

One of the key problems BPM initiatives suffer from is that, even with all the attention they receive, we end up with processes that still have significant issues: they are too inflexible and difficult to change. They become just another version of concrete poured in and around how people work, focusing on control rather than on enabling and empowering the people doing the work.

A phrase I picked up from a business architect put it fairly succinctly:

“People tend to work hard to improve what they have, rather than what they need.”

This was further reinforced, in an email, by a process architect in the government sector:

“The wall I keep hitting is how to think about breaking processes into bite-size chunks that can be automated.”

The problem is that we don’t have good techniques to design (derive) the right operational process architecture from the desired business vision (business capability). Of course, this assumes an effective business vision exists in the first place, but that’s a subject for another line of research.

I am talking about the operational chunks — the pieces of the jigsaw puzzle required to deliver a given outcome. Not how the puzzle pieces are modeled (BPMN, EPC, IDEF, or any other modeling technique), but how to chop up the scope of a business capability to end up with the right operational parts.
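
To illustrate what I mean by operational chunks, here is a minimal sketch in Python (the capability and chunk names are entirely invented; the point is the carving, not the notation):

    # Purely illustrative: the capability and chunk names are invented.
    # The research question is how to arrive at the right set of chunks,
    # not how each chunk is modeled (BPMN, EPC, IDEF, etc.).
    from dataclasses import dataclass

    @dataclass
    class ProcessChunk:
        name: str
        outcome: str  # the result this piece of the jigsaw delivers

    # One possible carving of an "order to cash" capability:
    order_to_cash = [
        ProcessChunk("capture_order", "validated customer order"),
        ProcessChunk("fulfil_order", "goods shipped"),
        ProcessChunk("invoice_order", "invoice issued"),
        ProcessChunk("collect_cash", "payment reconciled"),
    ]

    for chunk in order_to_cash:
        print(f"{chunk.name}: delivers '{chunk.outcome}'")

Whether those four chunks are the right four, with the right boundaries, is precisely the question the techniques under assessment need to answer.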

If they even recognize the problem upfront, what normally happens is that people apply functional decomposition to what they currently think of as their “processes,” which ties the operational activities more closely to the current org chart. Rather than breaking down the silos, this approach tends to reinforce the existing structures (Stafford Beer once described org charts as “mechanisms for apportioning blame,” which I think is about accurate). The resulting process implementation, while it may have been automated with a BPM suite, merely speeds up the existing processes, complete with all of their arcane exception handling and workarounds.

Put another way, you can often end up in a bigger mess, faster! There was no attempt to simplify, merely to automate the cow paths.

Now I have a couple of techniques in mind for further assessment, but I am interested in interviewing anyone who has been involved in major process initiatives where any of the following conditions were true:

  • The process that was under investigation turned out to be a series of processes.
  • The solution followed a dynamic case management approach, in which the implementation was composed of a number of processes.
  • The implemented processes changed significantly throughout the project.
  • The process structure of a BPM implementation changed significantly after the initial go-live.
  • There was a dynamic relationship between processes, i.e., one process instantiated, triggered, or chained to others.

I would stress that this research is technology-neutral: the techniques I am looking to identify sit at a higher level of abstraction than the technological implementation, so it shouldn’t matter what technology is used to implement them. Once you have an environment (a BPMS) where one process can trigger another and pass it some context, which describes just about every BPMS, you have the basis for inter-process communication. After that, it all comes down to how you design the processes.
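
To make the inter-process communication point concrete, here is a minimal sketch in Python (purely illustrative; the process names and the context payload are my own invention, and no particular BPMS API is implied) of one process instantiating another and handing it context:

    # Toy model of one process triggering another and passing it context.
    # In a real BPMS the hand-off would be a message, signal, or call
    # activity rather than a direct function call.

    def credit_check(context: dict) -> dict:
        """A downstream process, instantiated with context from its caller."""
        context["credit_ok"] = context["order_value"] < 10_000
        return context

    def order_capture(order_id: str, order_value: float) -> dict:
        """An upstream process that chains to another, handing over context."""
        context = {"order_id": order_id, "order_value": order_value}
        return credit_check(context)  # the trigger-and-pass-context step

    if __name__ == "__main__":
        print(order_capture("ORD-42", 7500.0))
        # {'order_id': 'ORD-42', 'order_value': 7500.0, 'credit_ok': True}

The mechanics of the hand-off are trivial; the hard part, and the focus of this research, is deciding where the boundary between order_capture and credit_check should sit in the first place.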

I would be happy to share the results with participants who contribute to the research. Interested parties should email me directly at dmiers@forrester.com.