Mistaking Technical Strategies for Adaptive Execution Challenges Often Leads to a Poor Culture of Performance

Author: Bryan Ritchie


The argument has been made that the single biggest leadership failure is to treat adaptive challenges like technical problems. Adaptive challenges are difficult to identify; require changes in values, beliefs, and relationships; demand the involvement of the people who have the problem; are often cross-functional or cross-disciplinary in nature; are frequently met with resistance; and require experiments, new discoveries, and significant time to implement and test.

Technical problems, on the other hand, are almost exactly the opposite. They are easy to identify; have clear, tested solutions; can often be dictated by an authority or expert; are often contained within organizational boundaries (change happens in only one or a few places); are met with compliance or even enthusiasm; and require only an edict and a short period of time to solve. So why, if the difference between these two kinds of work is well known, do leaders struggle to apply the right approach at the right time? The answer lies in an underappreciated fact: technical and adaptive issues can be found in BOTH an organization’s strategy AND its execution of that strategy. Put simply, leaders make assumptions, often wrong, about where the technical and adaptive problems are and what must be done to solve them.

An example may help clarify. Is losing weight an adaptive or technical problem? On the one hand, losing weight can be reduced to a simple equation: weight change over time is driven by calories consumed minus calories burned (ΔW ∝ CC − CB), so a sustained deficit means less weight. If this is a technical problem, we should all be able to simply eat less and exercise more. But the evidence suggests something else is going on: 68.8% of all American adults are overweight and 35.8% are obese! If losing weight is simply a technical problem, then the only logical conclusion is that American adults want to be overweight.

But what if the technical component of this problem is only in the strategy? That is, all else equal, eating less and exercising more, as a strategy, will result in losing weight. Most would agree this is a sound strategy. And it is technical in nature according to the description above. Then why is the strategy so hard to implement?
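The technical side of the strategy really is just arithmetic. As a minimal sketch, assuming the common rule of thumb that roughly 3,500 kcal of cumulative deficit corresponds to about one pound of body weight (a simplification, and exactly the kind of assumption the adaptive work below will test):

```python
# Hypothetical illustration of the "technical" weight-loss equation.
# The 3,500 kcal-per-pound conversion is a rule of thumb, not a law;
# individual results vary, which is the article's point.

KCAL_PER_POUND = 3500

def projected_weight_change(consumed_per_day, burned_per_day, days):
    """Projected weight change in pounds over `days` (negative = loss)."""
    daily_deficit = burned_per_day - consumed_per_day
    return -daily_deficit * days / KCAL_PER_POUND

# A 500 kcal/day deficit sustained for one week:
change = projected_weight_change(consumed_per_day=2000,
                                 burned_per_day=2500,
                                 days=7)
print(round(change, 1))  # -1.0 (about one pound lost)
```

The model is trivially easy to compute, which is what makes the strategy technical; what it cannot tell you is whether your body actually behaves like the model.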

The answer is that all else is not equal. In fact, the execution of weight loss is an adaptive challenge. No one knows, ex ante, the precise mechanisms of their individual body. How much should we eat, when, in what quantities, of what types of food, how fast, and so forth? Credible research has told us that cholesterol is bad and, later, that it is not. It has suggested that fat, carbohydrates, and proteins are alternately the cause of and the remedy for weight gain. Diets from “South Beach” to “Paleo” and everything in between show little convergence on how to implement this technical strategy.

What is the solution? The trick to addressing adaptive challenges lies in testing different approaches in the execution. It is making a “bet” and then testing that bet rigorously, persistently, and transparently. This kind of execution requires relentless data collection, visible scoring, and adaptation as results dictate. Let’s revisit our weight loss example. To attack the adaptive problem, I first need to put a stake in the ground and decide on a reasonable course of action dictated by the strategy. Let’s assume I decide to eat 2,000 calories or less per day and work out for 30 minutes four times per week. The next steps are, first, to do what I said I would, and then to score it religiously. Then, give this bet an appropriate amount of time based on expectations of cause and effect. Finally, after that time has passed, review the results. If the results are positive, continue the activities; if not, make changes.
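The steps above form a simple loop: commit to a bet, execute and score it for a fixed period, then review and decide. A minimal sketch, in which the bet, the weekly scoring, and the success test are all hypothetical stand-ins for real data collection:

```python
# A minimal sketch of the bet-score-review cycle described above.
# The bet, weekly scores, and success criterion are hypothetical.

def run_bet(bet, execute_week, succeeded, weeks):
    """Hold a bet fixed for `weeks`, score every week, then review."""
    scores = [execute_week(bet, week) for week in range(1, weeks + 1)]
    verdict = "continue" if succeeded(scores) else "revise"
    return verdict, scores

# Hypothetical weight-loss bet: 2,000 kcal/day, four workouts per week.
bet = {"kcal_per_day": 2000, "workouts_per_week": 4}

def execute_week(bet, week):
    # Stand-in for real measurement: weekly weight change in pounds.
    return -0.5

def succeeded(scores):
    # Positive result here means weight trending down overall.
    return sum(scores) < 0

verdict, scores = run_bet(bet, execute_week, succeeded, weeks=8)
print(verdict)  # continue
```

The discipline lives in `execute_week` and `succeeded`: the bet is held fixed for the full review period, every week is scored, and only the review, not day-to-day impatience, decides whether to continue or revise.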

Think of the power of this approach. If the initial bet was wrong, change the behavior. Perhaps working out 30 minutes four times a week needs an intensity measure, like burning at least 500 calories per session. Or maybe there are health issues, such as a thyroid problem, that need addressing. The key to revealing the adaptive problem is disciplined application of key performance activities (KPAs) over time, eliminating the reasons for failure one variable at a time. (As an aside, this is why fast, documented failure is such an important, positive component of innovation.)

Just as with weight loss, this approach works for adaptive and technical problems within an organization. Whether the strategic or implementation challenges are technical or adaptive, they can be resolved by systematic application of strategic bets that are closely scored and reviewed. It is the scoring of, and accountability to, KPAs that drive this process. But when applied, the results are powerful.

I once led a team that believed it was addressing a technical challenge with a known remedy. We applied the remedy, scored our results over time, and then reviewed the outcomes. We were surprised to find that the causal relationship we believed existed actually did not. Interestingly, our competitors were all doing the same thing we were. Everyone believed the inputs would lead to the desired outcomes. But after careful scoring (which our competitors did not do), we learned that the relationship did not exist. So we stopped the old behavior and started something new, something none of our competitors were doing. Within a short time we recognized that this new behavior was not only accomplishing our objective, but doing so more effectively and efficiently than anyone had done before. The result was that our organization rocketed to the top of our industry on several key performance metrics. In short, we found competitive advantage through the systematic execution of what turned out to be an adaptive challenge.

The key insight is that difficult adaptive challenges can be unraveled through disciplined application of strategic “bets” that are systematically scored, reviewed, and then altered as necessary. As Einstein reportedly observed, doing the same thing over and over again while expecting different outcomes is the definition of insanity. Instead, we should test different behaviors and their effects as a way to find the activities that truly produce our desired outcomes. The important final observation, then, is that the trick lies in running the test with discipline.