Pick mine, pick theirs, but pick a tool and stick with it!

by Rita McGrath

The use of management tools, ideas and frameworks has a strange cyclicality to it. The boss reads another book and suddenly we're all required to calculate Net Promoter Scores! It's worth casting a critical eye on the tools we use to increase performance.

The development of new management practices

My former advisor, Ned Bowman, often said that theories move through four stages. The first is the description – when something is very new (hello, ChatGPT), the best we can do is try to describe it. That’s why so many genuinely new ideas are articulated in the form of narratives and stories – we don’t have hard data about new and emergent things.  

Next, once we understand a phenomenon a little better, comes the explanation. For instance, when Katherine Phillips did her groundbreaking work on the performance differences between diverse and homogeneous teams, it took research into neuroscience to explain why homogeneous teams perform worse than diverse ones on tasks involving creative problem-solving. It turns out that when we are among familiar people, our brains just coast along; when we are confronted with diversity, they have to wake up! The dilemma is that while performance is objectively better with a diverse team, the work is harder, so, ironically, we perform better but feel worse about it.

Once we think we know what causes what, we can begin to hazard guesses about which actions will cause subsequent things to occur. Prediction markets, which claim to harness the wisdom of crowds, benefit from the reality that people will make different predictions about future events. And when groups of people make predictions, their collective judgment has been shown to be superior to that of experts.

And finally, when we have the causal relationships nailed down, we can control them. The triumph of Taylorism in creating algorithmic controls for a huge number of applications is a case in point.

Academia versus practice – something of a chasm

Academics tend to focus on description and explanation. We are rewarded for building on existing theories and developing new ones. That means that solutions to thorny problems are often created, but not disseminated beyond the walls of academia, as academics are not necessarily rewarded when anyone actually uses the ideas they create.

Consultants and executives, in contrast, focus on prediction and control. They are rewarded for making outcomes happen, not for parsing academic research. This often leads to major disconnects and to perfectly usable knowledge not being put into practice.  

As Luke Williams observed, "I entered academia with the arrogance of a practitioner. I had the sort of attitude that proclaims, 'I'm going to show these academics in their ivory towers how innovation is really done!' It didn't take long for humility to kick in once I realized:

Many of the problems 'we practitioners' were working hard to solve had ALREADY been solved in a research paper... written when I was in high school. Actually, some of the solutions were conceived in papers... written before I was born. It makes sense that practitioners wouldn't know about these insights... they're too busy 'getting stuff done' to have time to comb through research papers, let alone read a book. The problem is that these two worlds exist in parallel universes and rarely come in contact with one another. As a result, both practitioners and pundits suffer."

Another issue is the sheer proliferation of potentially useful management tools. In some companies, every quarter or every new management conference brings another would-be tool adoption. The result can be a sprawl of systems, a lack of integration and, eventually, management logjams.

Let's say you want to sort through the tools currently in place in your organization and figure out which ones are working and where you might be able to retire a tool or two.

What makes a given tool a good one?

Given that so many tools point to basically the same result, it's worth talking about how to tell a solid one from one that isn't.

The first thing I look for is whether a tool, concept or framework acknowledges the boundary conditions within which it operates and, as importantly, those within which it doesn't. For instance, Porter's five-forces model works really well in stable environments where the industry structure is established, where firm boundaries are well defined and where competition between players follows something of a pattern. It has very little to say about nascent industries, ecosystems of co-opetition, or fast-moving shifts in competitive position (as the recent exchanges between Google and Microsoft illustrate).

Theories about the strategic benefits of learning curves (the thinking behind the famous BCG portfolio matrix) work well in sectors with steep learning curves, but not at all well in sectors where learning curves are shallow or where scale proves elusive.

A second thing that I look for is whether a tool, framework or theory clearly spells out the dependent and independent variables it claims to operate on. In other words, what is the presumed causality we think is at work? My friend Phil Rosenzweig wrote about exactly this problem in his wonderful book The Halo Effect. In it, he points out any number of examples in which pundits identify a high-performing firm or set of firms and then attribute whatever those firms happen to be doing at the time as the source of the high performance. He rips right into In Search of Excellence as an example of selecting a research sample on the dependent variable (excellence) and then assuming that everything else the firms did led to the outcome. Similar criticisms have been made of other books that employ these methods, such as the wildly popular Good to Great.

Finally, before adopting a tool, it's important to specify the causal mechanism that makes it work, or at least to articulate your assumptions about that mechanism. The widespread adoption of Taylorism was possible because it progressed through a series of experiments in which different mechanisms for doing tasks could be compared and the most efficient of them adopted. Measurement, feedback and subsequently improved performance could all be observed.

The dilemma with most management tools is that the relationship between cause and effect is hard to pin down. For instance, we think that low employee turnover and high engagement are likely to cause high performance; but what if, instead, a firm growing quickly through sheer luck can offer interesting work that people enjoy and pay them well, so they stay?

For a tool to be effective, it needs to be put into use

We know that many tools tap into the same fundamental human social processes. What makes the difference is whether they are used consistently. I therefore often say to my clients, "pick my tool, pick their tool, but pick one and stick with it!"

Or as Curt Carlson, the former CEO of SRI International, puts it: "I played the violin professionally, and the reason you have to practice six hours a day is that you need to build up a family of mental models to play different kinds of music. It takes about 10 years, and you break things down: theory, scales, artistic forms and more. There is no alternative — at least for us humans."

Speaking of tools…

At Valize, we've been working on all kinds of tools to help you compete in rapidly changing and disruptive markets. Send us your problem statement and we'll see if we can offer some ideas and maybe even solutions. Check them out!

This article originally appeared on thoughtsparks.substack.com. You can read the original at: https://thoughtsparks.substack.com/p/pick-mine-pick-theirs-but-pick-a