The boundaries of what we mean by “application life-cycle management” continue to stretch and tear, like Arnold Schwarzenegger stuffed into a toddler’s jumper. While we still have to be careful about defining ALM so broadly that it’s no longer a meaningful category, it’s clear that the traditional list of functionality (task management, build management, requirements management, and so on) is at least a couple of sizes too small. In fact, the overlap with product life-cycle management (PLM) is so great that it may become increasingly hard to discuss the two separately. Practitioners may be surprised to find how closely related they are, like Schwarzenegger and Danny DeVito in Twins, but the connection is definitely there.
Even without PLM tugging at it, ALM is stretching to fit the real development processes it ostensibly manages. Because development teams are not indifferent to what happens after they hand their code off to the operations people, ALM has been expanding to include more elements of release and deployment. ALM can’t accommodate everything ops-related without ripping apart at the seams, but it does need some alterations.
PLM is a whole different consideration. Rather than expanding the definition of ALM, it adds another layer on top of it ― primarily to accommodate the realities of embedding software in other products (cars, refrigerators, medical devices, etc.). Because the number of these hybrid hardware/software products expands daily, the urgency of figuring out how ALM and PLM fit together as part of a common ensemble has been increasing.
Business processes, such as crafting requirements that encompass both the hardware and software components, are one reason why ALM and PLM need to be stitched together. While product teams already know how to do this (for example, by framing the requirements in terms of “systems of systems”), the tools they use don’t always share the same level of understanding. PLM tools that in theory should accommodate both hardware and software usually fall short when dealing with the digital part of the product. Some elements of ALM, such as source control management, don’t even exist in the PLM world.
ALM isn’t without its own shortcomings in these situations. Traceability often requires pulling new metadata along with a backlog item in each step of development. (Who approved the design? Who performed the test? How were any defects discovered prioritized?) If you think this additive nature of traceability is unimportant, you’ve probably never dealt with compliance. Auditors and regulators get very prickly about the absence of this kind of information, but many ALM vendors are still figuring out how they will address this need.
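To make the additive nature of traceability concrete, here is a minimal sketch in Python. Everything in it (the class names, the stages, the sample item) is hypothetical and not drawn from any vendor’s actual tool; the point is only that each hand-off appends audit metadata to the backlog item rather than replacing it.

```python
from dataclasses import dataclass, field

@dataclass
class TraceEvent:
    stage: str    # e.g. "design", "test", "defect-triage"
    actor: str    # who approved or performed the step
    outcome: str  # e.g. "approved", "passed", "priority-2"

@dataclass
class BacklogItem:
    item_id: str
    title: str
    trail: list = field(default_factory=list)

    def record(self, stage: str, actor: str, outcome: str) -> None:
        # Each step appends to the trail instead of overwriting it,
        # so the full chain of custody survives to audit time.
        self.trail.append(TraceEvent(stage, actor, outcome))

# Hypothetical work item moving through development
item = BacklogItem("REQ-214", "Brake controller firmware update")
item.record("design", "j.doe", "approved")
item.record("test", "qa-team", "passed")
item.record("defect-triage", "triage-board", "priority-2")

for e in item.trail:
    print(f"{e.stage}: {e.actor} -> {e.outcome}")
```

An auditor’s questions (“Who approved the design? Who performed the test?”) then become simple lookups over the accumulated trail, which is exactly the information that goes missing when a tool keeps only the item’s current state.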
It’s unlikely that ALM and PLM will merge into some ugly, one-size-fits-all equivalent of stretch pants or the muumuu. Still, the market has suddenly realized that these tools need to complement each other better. PTC’s acquisition of MKS triggered a lot of speculation about the joint future of ALM and PLM, but it’s not yet clear what sort of ensemble will result. Vendors like IBM are, for the time being, playing up systems engineering use cases and trumpeting success stories like General Motors (based on a partnership between IBM and BigLever). Smaller vendors like SmartBear are testing the waters with point capabilities that bridge the hardware and software worlds, such as collaboration between the two groups during document review.
SmartBear’s approach seems pretty, er, smart considering how many aspects of the PLM/ALM relationship are still unknown. Integration is already a key requirement of ALM, so it’s not as though ALM vendors are completely unprepared. However, the use cases are still fairly murky, so we won’t be seeing any grand integrations any time soon. Connections between PLM and ALM will happen faster at the process level. Expect to hear terms like “product line engineering” and “systems of systems” a lot more frequently.