“We can get you a new virtual machine in about 6 weeks.”
“Approval of a new data attribute is a standard 100 hours.”
“One year and a million dollars.”
How often do you hear answers like these? Why does coordinating resources between development and operations teams so often resemble rush hour in Dhaka, Bangladesh? The reasons are many and varied, from bad incentives to heavy bureaucracy, but one simple cause is too much Work in Process.
Poorly managed Work in Process, or WIP, is a recurring theme in DevOps literature such as The Phoenix Project. Seemingly hyperbolic complaints about hundreds of stalled change requests and tickets that languish in queue for years are (sadly) all too true.
What Changes Do We Need to Make?
The Lean movement, which originated in manufacturing, offers insights that are increasingly applied to IT systems development. It’s all about flow. As in all forms of production, one of the most powerful ways to get control over IT gridlock and WIP is to improve flow by reducing batch sizes.
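To make the batch-size point concrete, here is a minimal sketch (my own illustration, with made-up numbers, not drawn from any source cited here) of why large transfer batches delay feedback:

```python
# Toy illustration (hypothetical numbers): when work travels in large
# batches, nothing reaches the next stage -- or the customer -- until
# the entire batch is done, so feedback arrives late.

def first_feedback_time(batch_size, days_per_item=1.0):
    """Days until the first completed item moves downstream,
    assuming the whole batch travels together."""
    return batch_size * days_per_item

for batch in (1, 5, 20):
    print(f"batch of {batch:2d} -> first feedback after "
          f"{first_feedback_time(batch):4.1f} days")
```

Single-piece flow (batch of 1) surfaces defects and wrong assumptions in a day; a batch of 20 hides them for nearly a month.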
We’ve seen the success of this in the DevOps movement. Even otherwise conservative organizations can benefit by reducing batch sizes and increasing release frequency. But is that sufficient? Large organizations are made up of many interacting delivery pipelines; how they work together as a system gives rise to its own complexity and constraints.
Therefore we need to dig deeper. What is the current state of the practice?
Let’s Look a Little Closer
In most enterprises, the current IT delivery and operating model is a combination of project and service management:
- Project management is tightly coupled to cap-ex and a perception of "innovation":
  - Project orientation encourages large batches of work in process and tends to contribute to technical debt, as it discounts the long term.
  - Project execution competes for resources with other projects and with service execution (e.g., fractional allocation of personnel).
  - Work involving multiple service processes is coordinated through project management expediting, with varying effectiveness.
- Non-project IT work is tightly coupled to op-ex and a perception of the need to "keep the lights on":
  - Defined "services," perhaps in a "service catalog"
  - Simple, sequential "ticketing" processes supporting each service
  - A default assumption that the enterprise can always accommodate another service queue, and that encouraging their formation is optimal
  - No governance of how many such processes exist
  - No overall prioritization or view of work across service queues
Readers will recognize in the above the influence of frameworks such as PMBOK, ITIL, and COBIT, and the movement towards IT “service catalogs.”
This is all coming to a head. Project management has been in conflict with Agile for some time now. More recently there is much disillusionment with ITIL, and we increasingly see public statements against ticketing and even process orientation. I have been told by distinguished Agile thought leaders that they think little, if anything, can be salvaged from the current enterprise model.
Is It Time to Start Anew?
In my opinion, we’re better off starting from the perspective that what we have is evolutionary and hard-won. Those who seek to scale Agile by discarding all practices they deem “legacy” are not likely to succeed, for reasons well understood at the core of Agile theory. As Mike Burrows says in his excellent book, Kanban from the Inside:
“…some will tell you that when things are this bad, you throw it all away and start again. It’s ironic: The same people who would champion incremental and evolutionary approaches to product development seem only too eager to recommend disruptive and revolutionary changes in people-based systems—in which the outcomes are so much less certain.”
We can’t just do away with process. That is a cure worse than the disease.
But the current model definitely needs to evolve and, fortunately, some serious thinkers are exploring how. I think the leader of them all is Don Reinertsen, author of the book, The Principles of Product Development Flow. Reinertsen articulates with compelling clarity the emerging consensus that Agile IT delivery needs to be based on flow and throughput—and that those are enabled by small batch sizes, fast feedback, limiting work in process, managing queues, fostering collaboration, and correctly understanding the variability inherent in knowledge work.
He is especially concerned with queues, stating the following:
“Queues matter because they are economically important, they are poorly managed, and they have the potential to be much better managed. Queues profoundly affect the economics of product development. They cause valuable work products to sit idle, waiting to access busy resources. This idle time increases inventory, which is the root cause of many other economic problems.
Queues hurt cycle time, quality, and efficiency. Despite their economic importance, queues are not managed in today’s development processes. Few product developers are aware of the causal links between high-capacity utilization, queues, and poor economic performance.”
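Reinertsen's link between high capacity utilization and queues can be illustrated with the standard single-server (M/M/1) queueing result, in which expected wait grows as ρ/(1 − ρ). The sketch below is my own illustration of that formula, not code from the book:

```python
# Illustrative sketch: how average wait time explodes as utilization
# approaches 100%, per the basic M/M/1 queueing model.

def mm1_wait_time(utilization, service_time=1.0):
    """Average time a work item spends waiting in an M/M/1 queue:
    Wq = rho / (1 - rho) * service_time."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1 - utilization) * service_time

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{rho:.0%} busy -> average wait = "
          f"{mm1_wait_time(rho):6.1f}x service time")
```

At 50% utilization an item waits one service time on average; at 95% it waits nineteen. This is why running every team and server "fully loaded" quietly destroys cycle time.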
I would suggest that, in addition, we ought not only to limit the work in the queues, but also the establishment of queues themselves. This is a significant challenge to the PMBOK/ITIL model, which tends to assume the organization can always handle another process. The good news is that this can be done incrementally, through the consolidation of many fragmented queues into fewer, more manageable and visible ones. (Note to IT management tools vendors: if this isn't in your product roadmaps, you aren't paying attention.)
Through such consolidation, we can enable better throughput, collaboration, and flow of IT functionality for business value. And isn’t that what it’s all about?
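The payoff from consolidating fragmented queues can be sketched with a toy simulation (my own, with assumed arrival and service rates, and round-robin assignment standing in for "everyone has their own ticket queue"): the same total capacity serves work far faster from one shared queue than from several dedicated ones.

```python
# Toy model (assumed rates, not real data): five workers, same demand,
# compared as five dedicated queues vs. one consolidated queue.

import random

def average_wait(arrival_times, service_times, n_servers, pooled):
    """Mean wait when jobs share one queue over all servers
    (pooled=True) or are pre-assigned round-robin to a dedicated
    server each (pooled=False, a stand-in for fragmented queues)."""
    free_at = [0.0] * n_servers            # when each server next frees up
    total_wait = 0.0
    for i, (arrive, service) in enumerate(zip(arrival_times, service_times)):
        if pooled:
            s = min(range(n_servers), key=free_at.__getitem__)  # next free server
        else:
            s = i % n_servers              # stuck with "your" service queue
        start = max(arrive, free_at[s])
        total_wait += start - arrive
        free_at[s] = start + service
    return total_wait / len(arrival_times)

random.seed(42)
n, servers, load = 5000, 5, 0.9
arrivals, t = [], 0.0
for _ in range(n):
    t += random.expovariate(servers * load)   # Poisson arrivals
    arrivals.append(t)
services = [random.expovariate(1.0) for _ in range(n)]

print("fragmented:", round(average_wait(arrivals, services, servers, False), 2))
print("pooled:    ", round(average_wait(arrivals, services, servers, True), 2))
```

With identical demand and identical capacity, the pooled queue produces a fraction of the waiting time, because no item ever sits idle while a worker elsewhere is free. That is the economic argument for consolidation in miniature.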
To explore some of these ideas further and to look at how best practices can be supported by effective process automation, take a look at our white paper "DevOps Brings Agile and Lean Methodologies to IT Operations."
For more on how to navigate the difficult transition from traditional models to DevOps in your organization, you can read the words of someone who's been there, done it, and written the eBook!