Problem Solving: Rigidity in Thinking

Every so often, we are presented with analysis that doesn't tell us anything we didn't already know, or worse, is simply superficial. We often put that down to intellectual laziness – but I wonder, is there more to it? Most organizations have one problem-solving methodology or another, and some teams follow the methodology to a fault, yet they still end up with a poor solution. Today, I want to focus on a major (and often under-appreciated) obstacle to problem solving: rigidity in thinking, which sends teams down narrow solution paths. What are some of the common biases that lead us down these paths?

[Image: a Calvin and Hobbes strip. The world is a better place for Bill Watterson!]

Of cognitive biases

Functional fixedness[1] or the Einstellung effect: Functional fixedness is defined as a 'mental block against using an object in a new way that is required to solve a problem'. In other words, most of us are predisposed to look at objects (or, for that matter, information) in a certain way, and that in turn makes it difficult for us to move beyond 'tried and tested' methods to explore different solutions. A closely related phenomenon is the 'Einstellung effect'[2], which literally means a 'setting' as well as a person's 'attitude' in German. This effect occurs when a person is presented with a problem or situation similar to ones they have worked through in the past, and their solution tends to be a repeat of a previous solution to a similar problem. Several experiments demonstrate this, the most well-known being the candle problem.

This is far more common in organizations than we think. Even in today's fast-moving, evolving world, people tend to repeat the same analyses: for instance, they continue to use the same models to predict collections default risk, or the same set of metrics to monitor business performance.

Inductive inferences: This one is related to functional fixedness. We are all inductive learners; we manage to acquire general concepts, categories and so on from what seems like a hopelessly inadequate number of specific examples. While that serves us well as we go about navigating the world, it creates an obvious problem when applied in organizational problem-solving situations.

Analysis driven by inductive thinking may have the advantage of speed, but it runs the risk of jumping to false or ineffective conclusions. It is therefore important to ask two questions of all teams:

  1. What are the assumptions (or prior knowledge) on which a given instance of induction is based? In other words, what is the set of initial hypotheses? Has the team put enough effort into assembling a good set of hypotheses? The textbooks teach us to make it MECE (Mutually Exclusive and Collectively Exhaustive), but anyone who has worked with data knows that this is an ideal (and probably impossible) goal. What we are really after is a good enough set of constraints on the hypothesis space to get us going.
  2. How does that knowledge support generalization beyond the specific data observed? In other words, how do we judge the strength of an inductive argument from a given set of premises to new cases? In data-science speak: how rigorous is your team in building a test dataset for scoring the model, and how often do you validate the model against a new, updated test dataset? (See the sketch just below.)
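To make question 2 concrete, here is a minimal sketch of that discipline in Python, assuming a scikit-learn-style workflow; the collections-default framing and the column names are hypothetical, not taken from any actual setup:

```python
# Minimal sketch of question 2: hold out a test set when first scoring a
# model, then re-validate the same model as fresh data arrives.
# The collections-default framing and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def fit_and_score(df: pd.DataFrame, features: list[str], target: str):
    """Train on one slice of history, score on a held-out slice."""
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df[target], test_size=0.3, random_state=42
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return model, test_auc

def revalidate(model, fresh_df: pd.DataFrame,
               features: list[str], target: str) -> float:
    """Re-score the same model on newer data: if the AUC drifts, the
    original inductive leap no longer generalizes and the hypotheses
    need revisiting."""
    return roc_auc_score(
        fresh_df[target], model.predict_proba(fresh_df[features])[:, 1]
    )
```

The point of `revalidate` is simply that scoring is not a one-time event: as fresh data accumulates, the same model should be re-scored to check whether the original inductive leap still holds.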

How can we avoid these biases?

  1. The '5-Why method': This was originally developed at Toyota[3] to get to the root cause of manufacturing problems. It is surprisingly simple but can be highly effective, especially when the problem at hand is a diagnostic one. Too often, the first (or even second) level of hypotheses tends to be superficial, focusing on the proximate cause rather than the underlying factors. Getting your team to formally look at a problem through the '5-why' framework is a good place to start (a minimal sketch follows this list).
  2. Cross-functional teams: One of the most effective ways of breaking the functional fixedness problem is to get cross-functional teams to solve it. There is plenty of research to back this up, especially when the teams are pulled together from adjacent functional areas. This is also why it is so important to build diverse teams – a topic that is even more relevant in the current environment. There is a good podcast on this in the Further Reading list below.
  3. Ask the machines: As we continue to farm out more activities to algorithms, they present a very useful way to sidestep our biases. Statistical ML algorithms adopt relatively weak inductive biases, which is why they require much more data for successful generalization than humans do; at times, it may even be desirable that ML algorithms lack ways to represent and exploit the rich forms of prior knowledge that guide people's inductive biases. This is particularly useful in text analysis, where you need a lot of iterative Exploratory Data Analysis before you can narrow down a set of hypotheses (see the second sketch below).
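Since the '5-why' method in point 1 is essentially a protocol, here is a minimal sketch of it as an interactive script; the starting problem statement is made up purely for illustration:

```python
# Minimal sketch of the 5-why protocol as an interactive script.
# The example problem statement is purely illustrative.
def five_whys(problem: str, n: int = 5) -> list[str]:
    """Ask 'why?' up to n times, recording each answer as the next level of cause."""
    chain = [problem]
    for i in range(n):
        answer = input(f"Why ({i + 1}/{n})? '{chain[-1]}' because... ").strip()
        if not answer:
            break  # stop early if the team runs out of causes
        chain.append(answer)
    return chain  # the last entry is the candidate root cause

if __name__ == "__main__":
    chain = five_whys("Collections default rate spiked last quarter")
    for depth, cause in enumerate(chain):
        prefix = "Problem: " if depth == 0 else "Why: "
        print("  " * depth + prefix + cause)
```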
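And for point 3, here is a minimal sketch of letting a weakly biased algorithm propose hypotheses during exploratory text analysis, using TF-IDF and k-means from scikit-learn; the example documents are invented:

```python
# Minimal sketch of 'asking the machines' during exploratory text analysis:
# TF-IDF + k-means impose only weak priors, so the clusters they surface
# can suggest hypotheses we would not reach through our usual framings.
# The example documents are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "customer missed payment after job loss",
    "payment portal was down for two days",
    "customer disputes the late fee",
    "reminder emails bounced for a month",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Print the top terms per cluster as candidate hypotheses to investigate.
terms = vectorizer.get_feature_names_out()
for c in range(km.n_clusters):
    top = km.cluster_centers_[c].argsort()[::-1][:3]
    print(f"cluster {c}:", ", ".join(terms[i] for i in top))
```

Nothing about TF-IDF or k-means 'knows' our usual framings, which is exactly why the groupings they surface can point at hypotheses we would otherwise have missed.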

Further Reading:

  1. https://en.wikipedia.org/wiki/Functional_fixedness
  2. https://en.wikipedia.org/wiki/Einstellung_effect
  3. https://en.wikipedia.org/wiki/Five_whys
  4. If you still need evidence for the power of diversity, here’s a very nice podcast (which actually triggered this entire post): https://www.npr.org/2020/07/27/895858974/creativity-and-diversity-how-exposure-to-different-people-affects-our-thinking
