Understanding Bimodal IT

Gartner’s model for bimodal IT has both its zealots and its detractors. However, as a CIO, how do you cut through the noise and use an understanding of the model to optimize IT operations in your organization? At this Indy CIO event, I shared a table with some CIOs and explored the concepts in bimodal IT while listening to our host’s perspective and checking in with the rest of the room periodically.

Join me for a quick synopsis of what we came to know about how IT has two modes, how to harness that, and where things can go awry.

A Tale of Two Modes

The two modes in the Gartner model are:

  • Mode 1 – “Optimized for areas that are more predictable and well understood.”
    • Repeatability over Agility
    • Low Risk Tolerance
  • Mode 2 – “Exploratory, experimenting to solve new problems and optimized for areas of uncertainty.”
    • Agility over Repeatability
    • High Risk Tolerance

The most common misconception was that everyone should want to move from Mode 1 to Mode 2 IT. Inherent in the Gartner model is that you should be using both modes of operation. That is, some of the functions inside of IT should be operating in Mode 1. Other functions inside of IT should be operating in Mode 2.

You can’t treat an exploratory area, like telemedicine, the same way you treat the core electronic medical record (EMR) system. In telemedicine, the need for rapid adaptation and velocity of change exceeds the need to not fail. For an EMR, you need repeatability and a low probability of failure more than you need adaptability and velocity of change.

Managing the Mixture and Match

Managing effectively in a bimodal IT paradigm isn’t about which mode you’re operating in, but rather about assessing the mixture of areas where Mode 1 is optimal and those areas where Mode 2 is optimal – and aligning the way that you address them to the way that they are best handled. As one participant noted, it’s not that the bimodal model is really all that different from what we’ve done in IT for a long time; rather, it gives a language to the differences so that we can clearly articulate what we’re doing.

It gives us a shared language to speak about the fact that, in some areas, we’re going to tolerate failure, because the impact of failure is low and the need for adaptability or velocity of change is high. We aren’t going to “throw out the baby with the bathwater” and pick only one way of operating, we’re going to develop a ratio of delivery in our organization that matches the needs.

Beyond matching the mode to the area, there’s the need to match the mode to the person, so that their natural talents, behaviors, and dispositions align. We spoke of the challenge of small IT shops where individual contributors and managers may need to work on both Mode 1 and Mode 2 areas. We acknowledged that some behavioral/psychological assessment models like DISC can be effective at helping us identify which mode team members might be better at. Those with a D or I focus are more action-oriented and better suited for Mode 2-type areas, while professionals who are more S- or C-focused have the diligence necessary to continue advancing Mode 1 areas.

Iterative and Agile

An area of confusion in our discussion was exactly what characterized Mode 2 activities and what characterized Mode 1. Despite Gartner’s definitions, it wasn’t always clear what Mode 2 was, though it was clear that it didn’t simply mean agile development, DevOps, or any of the newer methodologies for development and continuous improvement.

In fact, we discovered that either mode could be delivered with either agile or traditional waterfall development. The secret seems to live in the iteration cycles. That is, the cycles of development, integration, testing, and deployment are happening faster in Mode 2 – where the cycle costs are lower. Mode 1 cycle costs are much higher due to the much more extensive testing cycles.

So it’s not that you have to pick a delivery approach based on Mode 1 or Mode 2 – it’s that you have to calibrate the cycle times based on the cost per cycle.
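To make the cost-per-cycle point concrete, here’s a back-of-the-envelope sketch. All of the figures are invented for illustration – they aren’t from Gartner or from any organization discussed at the event:

```python
# Hypothetical numbers: the cost of each test/deploy cycle caps how
# often you can afford to ship.
annual_test_budget = 120_000        # dollars available for release testing

mode1_cost_per_cycle = 30_000       # heavy regression testing each release
mode2_cost_per_cycle = 1_000        # mostly automated checks

mode1_releases_per_year = annual_test_budget // mode1_cost_per_cycle
mode2_releases_per_year = annual_test_budget // mode2_cost_per_cycle

assert mode1_releases_per_year == 4      # roughly quarterly
assert mode2_releases_per_year == 120    # better than twice a week
```

With identical budgets, the cheap-cycle system can iterate thirty times as often – which is the whole argument for driving down per-cycle cost where velocity matters.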

Preplanning, Waterfalls, and Staying Agile

Mode 1 is the hallmark of the traditional IT department, where the risks are well-known and relatively large, and the area itself is well-known. Operating a phone system, managing connectivity to the Internet, and managing mission-critical systems fit into this category. There’s the need to have a governance process that reduces the frequency of changes and improves the opportunity to catch errors before they reach the consumer.

In Mode 1 systems, there are many knowns, and so the relative degree of predictability is higher than in new and uncharted areas, where there aren’t established patterns for service delivery. Because of the greater degree of predictability, it’s possible to do better planning and structuring of Mode 1 systems. Mode 2 systems, by contrast, are generally chaotic and don’t follow established rules of how things should be done. Because of these Mode 2 characteristics, planning work and rules are generally less effective.

The velocity of iterations – whether you’re in a waterfall methodology or an agile methodology – is driven by the factors of ability to preplan, tolerance for risk and impact, and urgency of need. A low tolerance for risk and impact slows cycle times and places a greater burden on each cycle to be “right.” This is convenient when it’s possible to predict and plan the operations – as in a well-established system. A low ability to preplan combined with a low tolerance for risk and impact means that the costs will be high. This is particularly the case if there’s also an urgency of need.

Scenarios with a low (true) need for urgency, low risk and impact tolerance, and a high ability to preplan settle systems into Mode 1 operation. Increasing the tolerance for risk and impact – making failure OK – can move a system from a more Mode 1-like operation to a more Mode 2-like operation. Even the “distinct” modes in the model aren’t distinct – they’re points on a continuum.
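The continuum view can be sketched numerically. The following is purely illustrative – the `mode_score` function and its equal weighting of the three factors are my own invention, not part of the Gartner model:

```python
def mode_score(preplan_ability: float, risk_tolerance: float,
               urgency: float) -> float:
    """Illustrative score on a 0-1 continuum: higher leans Mode 2.

    Each input is a 0.0-1.0 rating. High risk tolerance and urgency
    push toward Mode 2; a high ability to preplan pushes toward Mode 1.
    """
    return (risk_tolerance + urgency + (1.0 - preplan_ability)) / 3.0

# A well-understood EMR-style system: easy to preplan, failure not tolerated.
emr = mode_score(preplan_ability=0.9, risk_tolerance=0.1, urgency=0.2)

# A telemedicine pilot: hard to preplan, failure tolerated, urgent.
telemed = mode_score(preplan_ability=0.2, risk_tolerance=0.8, urgency=0.9)

assert emr < 0.5 < telemed  # the EMR leans Mode 1, the pilot Mode 2
```

Raising `risk_tolerance` alone moves a system up the continuum – the numeric version of “making failure OK.”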

Our goal in IT is to continue to support responsiveness to the organization while balancing the needs of risk tolerance and impact avoidance. We classically have done that through breaking dependencies, minimizing coordination, and reducing batch sizes.

Breaking Dependencies

When you have complicated systems, you have complicated interactions between them. Systems with well-defined boundaries and contracts look like Lego building blocks. One system can be swapped out with another with minimal – if any – impact on the other systems in the organization. Unfortunately, this is rarely the case in practice, as organizations have connected systems in ad hoc ways. The need for standardization gave rise to the enterprise application integration (EAI) platforms in the 1990s. By defining the EAI platform or the even more grandiose enterprise service bus, the relationships between systems were supposed to be well-known.

Few organizations completed the massive work of deploying an EAI solution or a service bus before they ran out of energy. The work to plan for systems to be changed later and to optimize the interfaces between the systems was crushed by the realities of needing to deliver something to the organization today.

One of the CIOs I was talking to at the time told me that my project – a SharePoint Intranet project – was the only way that he could demonstrate any tangible value from his efforts on a service bus. For all the work he was doing breaking up dependencies, there was very little to show for it.

When the dependencies are reduced, it becomes possible to reduce the testing scope when you make changes to a system, and this substantially reduces the cost of delivering an update. The heart of reducing dependencies is defining the contracts between systems – whether you implement an EAI tool or not.

Minimizing Coordination

The three-legged race is a famous coordination problem. Friends, classmates, teammates, or members of the same group are paired. Two people each have one leg bound to the other’s. The result is a three-legged competitor. It’s amazingly hard to race in this configuration, as the small differences between the way that you run and the way that the other person runs often lead to falling and tumbling over one another – rather than racing to the finish. This is the essence of the coordination problem.

In IT, we seek to minimize the coordination between systems so that we don’t have to take the cost of coordinating with other systems. Here, too, contracts are the answer. By contracting how the dependency and coordination should happen, you can identify those times when coordination will – and won’t – be necessary. This results in a lower cost of coordination and higher velocity.

Reducing Batch Sizes

Left to the pressures of low risk and impact tolerance, the natural bias of IT is to reduce the frequency at which you cycle. After all, you can absorb the extra testing costs if you only have to do it once or maybe twice a year. However, this forgets that the business needs its changes now. Being responsive to the needs of the organization requires delivering more frequently. However, this is in conflict with the need to minimize risk and impact from changes.

Ultimately, this is pressure the CIO must apply to continue to maintain velocity while managing the risk.

Technical Debt

Sometimes the best way to improve velocity is to “buy down” debt. That is, to reduce the number of friction points which make it difficult to operate in shorter cycles. This might be improving the automated or unit testing coverage of an application to reduce the need for manual testing, or it can be retiring old systems which have high maintenance costs and are unnecessarily coupled to other systems.
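As a small illustration of buying down debt with automated coverage, consider a legacy routine that previously had to be checked by hand before every release. The `copay_due` function and its figures below are hypothetical, invented only to show the pattern:

```python
# Hypothetical legacy routine that previously relied on manual testing.
def copay_due(total_cents: int, coverage_percent: int) -> int:
    """Patient's share of a bill in cents, rounded down to whole cents."""
    return total_cents * (100 - coverage_percent) // 100


# Each automated check below replaces a manual test step, lowering
# the per-cycle cost of shipping any future change to this code.
assert copay_due(10_000, 100) == 0       # full coverage, no copay
assert copay_due(10_000, 80) == 2_000    # 20% patient share
assert copay_due(9_999, 50) == 4_999     # rounds down on odd amounts
```

Once checks like these run on every build, the manual testing step disappears from the cycle – which is exactly the friction point the debt payment was meant to remove.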

“Technical debt”, the term often used to describe shortcuts which were taken but never properly addressed, can have a substantial impact on the velocity of the IT department.

Like Clockwork

Looking at bimodal IT – with slower-moving, more risk-sensitive projects alongside smaller, faster, less risk-sensitive projects – is like looking at clockwork. The pieces fit together and work together to provide a time-keeping instrument, even though not all the pieces move at the same speed. Using different pieces for different needs, knowing where the gears will mesh, and accepting that some pieces must move fast and some slow if things are to work properly: that is how bimodal IT works. Some things should be Mode 1 and some things should be Mode 2.
