Problem solving: Avoiding narrative fallacies

Last week, I talked about ‘perfection being the enemy of good’ in problem-solving. However, what if we swing the pendulum to the other extreme and draw conclusions from anecdotal evidence? In other words, are we susceptible to ‘narrative fallacies’?

In ‘Thinking, Fast and Slow’, Daniel Kahneman describes a (now famous) thought experiment that he and Amos Tversky once ran – they called it the ‘Linda experiment’. Here is how it goes:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

An overwhelming majority of respondents chose Option 2. Even though we are all taught that the probability of two events occurring together is always less than or equal to the probability of either occurring alone, we still fall into this trap. Tversky and Kahneman argue that most people get this problem wrong because they use a heuristic called representativeness to make this kind of judgment: Option 2 seems more “representative” of Linda given the description of her, even though it is clearly mathematically less likely.
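
To see the arithmetic in action, here is a minimal simulation sketch – the base rates are invented purely for illustration – showing that the conjunction can never be more probable than either event alone:

```python
import random

random.seed(42)

# Hypothetical, made-up base rates purely for illustration.
P_TELLER = 0.05                  # probability a person is a bank teller
P_FEMINIST_GIVEN_TELLER = 0.30   # probability a teller is also a feminist activist

trials = 1_000_000
teller = conjunction = 0
for _ in range(trials):
    if random.random() < P_TELLER:
        teller += 1
        if random.random() < P_FEMINIST_GIVEN_TELLER:
            conjunction += 1

print(f"P(teller)              ≈ {teller / trials:.4f}")
print(f"P(teller AND feminist) ≈ {conjunction / trials:.4f}")
# Whatever base rates you plug in, the conjunction can never
# exceed the probability of the single event.
```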

Nassim Nicholas Taleb calls out that these mental shortcuts form the basis for ‘the narrative fallacy’: “The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.”

Why do we fall for stories?

We all love a good story – even more so when it is presented with a strong narrative arc. We are inspired by how Steve Jobs returned to Apple, rescued it, and built it into one of the most successful companies in history; how Elon Musk built Tesla and inspired a cult following; and so on. The reality would have been far more complex – hundreds of engineers toiling away, multiple failed attempts, organizational politics – and in the end, I suspect, a combination of serendipity and plain luck.

The question then is: why do we regularly fall for these stories? Behavioral scientists have called out two major reasons:

  1. Cognitive overload: We simply do not have the capacity to process all the information available out there – and so we regularly take shortcuts.
  2. Sharing with stories: We are compulsively social animals and have built our shared consciousness on stories that we tell each other all the time. And stories need to be simple (to the point of being simplistic) to stick.

Should we care?

It happens all the time: we duck the time and effort it would take to understand multi-factor correlations and reach for simple causality to explain the events around us. And for the most part, it works – I will admit to regularly falling into this trap: for instance, I am a huge fan of biographies and enjoy reading the (simplified) heroics of individuals who have apparently shaped history in almost all areas – politics, sports and business.

However, narrative fallacies do have implications – especially for decision making in organizations. Decisions made and budgets allocated based on spurious causation lead to wasteful and, worse, negative outcomes. You don’t have to think too hard to come up with instances in your organization where the ‘Linda problem’ played itself out, leading to sub-optimal outcomes. And so it becomes imperative that problem-solving processes keep an eye out for overly simplistic narratives.

What can we do about it?

As you farm problem-solving teams within your organizations, here are a few guidelines that might be helpful:

  1. Treat hypotheses with skepticism: You would want the problem discovery process to start with hypothesis generation. And herein lies the first trap: hypotheses reflect the current understanding and are most likely biased by the prevailing narratives around the problem space. Try to pull the team out of that mindset with the following nudges:
    a. Watch out for definitive hypotheses: A definitive, unidirectional hypothesis – ‘X causes Y’ – deserves a yellow flag. Challenge it with other possibilities: is it possible that X and Y are just correlated, or that other factors (Z1, Z2, ..) caused both X and Y?
    b. Treat hypotheses as localized outcomes: All too often, teams confuse predictions with hypotheses. A hypothesis that asserts ‘X is true, therefore Y will happen’ deserves a red flag because it fails to establish where and how often X holds beyond the specific observed conditions.
    c. Beware of bivariate hypotheses: Bivariate hypotheses are often the first step to overly simplistic conclusions (e.g. ‘a digital campaign educating loan customers about Covid-19 relief payment options correlated with an increase in collection rates’ – a reading that ignores the multiple other factors at play). Push for multivariate hypotheses before you draw inferences; the first sketch after this list simulates exactly this trap.
    d. Develop alternative hypotheses: The basic tenet (and the hardest part) of a rigorous problem-solving method is to develop alternative hypotheses. In their absence, the team ends up biasing the analysis to suit the prediction. Without a doubt, a red flag.
  2. Experiments are almost always better than models: Data scientists look at the world through the lens of models – and often forget that models are, well, just models: approximations of reality and hence susceptible to simplistic narrative fallacies. Push for experiments (randomized controlled trials) with a rigorous measurement mechanism. There is an art element here: you can break a larger solution approach down into a series of smaller, controlled experiments and iteratively seek the optimal solution. Approach all experimentation with the following:
    a. All experiments are fungible: It is important to treat every experiment as replaceable – be decisive in the face of data.
    b. More experiments are not the same as useful experiments: Poorly designed experiments are no replacement for well-designed ones. More on this in a later post.
    c. Discovery is a part of efficiency: In the absence of a lot of data (as is usually the case), farm multiple experiments – e.g. frame the choice between candidate treatments as a multi-armed bandit problem, as in the second sketch after this list.
  3. Watch out for the ‘Texas sharpshooter fallacy’: This happens when differences in the data are ignored and similarities overemphasized, leading to false conclusions – typically when a hypothesis is formulated after the data gathering and analysis are done. The term comes from a joke about a Texan who fires gunshots at the side of a barn, paints a target centered on the tightest cluster of hits, and claims to be a sharpshooter. The third sketch after this list reproduces the trap on random data.
  4. Favor empirical evidence over generalized assertions: In the rush to make decisions, problem solvers often end up with generalized assertions – treat them with suspicion. It is almost always better to work with empirical evidence – error measures (e.g. model accuracy) and confidence intervals – accepting their limitations with humility, than to opt for sweeping assertions.
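
First, the bivariate trap from 1(c). A minimal sketch – all numbers are invented for illustration – in which a hidden factor Z drives both X and Y, so a naive bivariate reading would conclude that X causes Y:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(0)

# Synthetic data: a hidden confounder z drives both x and y;
# x has no causal effect on y at all.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]          # hidden confounder
x = [zi + random.gauss(0, 0.5) for zi in z]         # x depends on z
y = [2 * zi + random.gauss(0, 0.5) for zi in z]     # y depends on z, not on x

print(f"corr(x, y)     = {statistics.correlation(x, y):.2f}")  # strong, ~0.87

# Control for z by correlating the residuals after removing z's contribution.
# (We can subtract the true coefficients because we generated the data;
# in practice you would regress x and y on z.)
x_resid = [xi - zi for xi, zi in zip(x, z)]
y_resid = [yi - 2 * zi for yi, zi in zip(y, z)]
print(f"corr(x, y | z) = {statistics.correlation(x_resid, y_resid):.2f}")  # ~0
```

The strong raw correlation vanishes once the confounder is accounted for – which is exactly what a multivariate hypothesis forces you to check.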
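
Second, the multi-armed bandit framing from 2(c). A minimal epsilon-greedy sketch – the conversion rates are made up – that spreads trials across several candidate experiments and gradually concentrates on the best performer:

```python
import random

random.seed(1)

# Hypothetical true conversion rates for three candidate treatments.
TRUE_RATES = [0.04, 0.06, 0.09]
EPSILON = 0.1   # fraction of trials spent exploring at random

pulls = [0] * len(TRUE_RATES)
wins = [0] * len(TRUE_RATES)

def estimate(arm: int) -> float:
    return wins[arm] / pulls[arm] if pulls[arm] else 0.0

for _ in range(20_000):
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_RATES))          # explore
    else:
        arm = max(range(len(TRUE_RATES)), key=estimate)  # exploit best estimate
    pulls[arm] += 1
    wins[arm] += random.random() < TRUE_RATES[arm]

for arm, rate in enumerate(TRUE_RATES):
    print(f"arm {arm}: true={rate:.2f} est={estimate(arm):.3f} pulls={pulls[arm]}")
```

Most of the trials end up on the best arm, while the exploration budget keeps the weaker arms honestly measured rather than narratively dismissed.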
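
And third, the sharpshooter trap from item 3 is easy to reproduce: scan enough random slices of noise and some slice will always look like a bullseye. A minimal sketch on purely synthetic data:

```python
import random

random.seed(7)

# Purely synthetic: 1,000 customers assigned uniformly at random to 20 segments,
# with a coin-flip "conversion" that has nothing to do with the segment.
SEGMENTS = 20
counts = [0] * SEGMENTS
conversions = [0] * SEGMENTS

for _ in range(1000):
    seg = random.randrange(SEGMENTS)
    counts[seg] += 1
    conversions[seg] += random.random() < 0.10   # true rate is 10% everywhere

rates = [conversions[s] / counts[s] for s in range(SEGMENTS)]
best = max(range(SEGMENTS), key=lambda s: rates[s])
print(f"segment {best}: observed rate {rates[best]:.1%} vs true rate 10.0%")
# Painting the target after the fact: the "winning" segment looks special,
# but it is pure sampling noise.
```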

And finally, respect the role of luck and randomness. Counter-intuitive as this may sound to a problem solver, it is important to approach problems with an element of humility. There are far too many examples of organizations and teams stumbling upon the ‘right answer’ through serendipity and then conveniently fitting the data to justify the outcome (aka retrospective distortion).
