Last week, I wrote about problem-solving mindsets: the foxes, problem solvers who can tackle a wide range of problems, and the hedgehogs, problem solvers who are very good at specific problems. If you think about it, most systems in organizations have been tailored to reward hedgehog behaviors, i.e. expertise built over time; organizations have not invested nearly as much in building foxes. The natural question, then, is: what are some of the characteristics of foxes, the uber-problem solvers? By now, many of the organizations I work with have internalized structured problem decomposition, hypothesis-based data analysis and the like. What I want to think about is how to push beyond that: what is it that the true foxes, the uber-problem solvers, bring that gives them the edge? Here’s my list, by no means comprehensive, and by all means laden with my personal experiences and hence my biases (!):
- Daniel Kahneman has a very elegant construct: the two Systems of thinking¹. System-1 is reflexive and instinct-based, while System-2 is deliberate, reason- and data-based. We all operate with these two Systems all the time and keep updating System-1 with our System-2 experiences. However, I think the best problem solvers stand out in two ways:
  - They are watchful of their System-2 thinking as well, since, as Kahneman has pointed out, individual biases creep in all the time, even when we believe a decision was backed by reason and data. The best problem solvers are able to adopt a mindset of self-scrutiny and belief updating (more on this below).
  - And here’s the tougher one: they are aware that continuous self-scrutiny can be exhausting and that the illusion of knowing is seductive. In other words, they stay watchful that they are only ever one problem away from falling into the expertise trap.
- Resist the bias of scope insensitivity. Behavioral economists have demonstrated how even the best of us, time and again, make decisions without really thinking through the scope of the decision. Here’s an example: say your team is tasked with developing a customer default-risk model. A scope-insensitive approach would try to create a single risk metric, while in reality the risk of default can be very different over the next month vs. the next six months. Even more so in the current Covid-19 situation: a lot of consumer behavior is being driven by the current slowdown in economic activity and the fear around the pandemic, while longer-term decisions will surely be driven by more structural factors like employment risk. The right thing to do, then, would be to build two models. In other words, make sure the problem definition is scope sensitive.
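To make the two-horizon point concrete, here is a minimal sketch with invented segment names and default counts (all numbers are illustrative assumptions, not real data). It shows how a single blended risk score can rank the same customers in opposite orders depending on the horizon:

```python
# Hypothetical illustration: why one blended "risk score" can mislead
# across horizons. Segments and counts are invented for illustration.
portfolio = {
    # segment: (defaults_in_1_month, defaults_in_6_months, customers)
    "gig_workers":      (30, 45, 1000),   # hit hard right now
    "salaried_at_risk": (5, 120, 1000),   # fine today, structural risk later
}

for segment, (d1m, d6m, n) in portfolio.items():
    print(f"{segment}: 1-month PD = {d1m/n:.1%}, 6-month PD = {d6m/n:.1%}")

# gig_workers look riskier on the 1-month horizon, salaried_at_risk on the
# 6-month horizon. A single score has to pick one label horizon and quietly
# gets the other one wrong; two horizon-specific models avoid the ambiguity.
```

The rank reversal between the two horizons is exactly what a scope-insensitive, single-metric model would paper over.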
- Belief updating: self-scrutiny is all about the willingness to update beliefs in the face of evidence. I have talked long and often about the value of the Bayesian belief system: the best problem solvers start with an initial set of assumptions built through a combination of inside-out tacit knowledge (being around the problem space long enough) and an outside-in view (triangulating with what is happening around them). At the same time, they are always willing to continuously update those assumptions as they learn more. It is this kind of active open-mindedness that sets them apart.
- That we will have rare events with severe impact is more or less a given. Much as we are tempted to predict such events, it is futile to do so. As the uber problem solver will tell you, the goal is not to prognosticate but to create mechanisms that mitigate the risks if and when such rare events do occur. To use Nassim Taleb’s phrase, the idea is to be ‘antifragile’²:
- Create optionality by building redundancy into the data points that aid decision making. Most marketing organizations want to compute Customer Lifetime Value (CLTV) as a key metric. My advice has always been: never rely on CLTV as the sole metric, because you want to approach any metric that claims to capture ‘consumer value into perpetuity’ with skepticism. In any case, in the VUCA world we live in, basing decisions on such a long-range metric seems out of place. The right thing to do is to combine CLTV with a near-term metric, such as a Next Best Action propensity score, to guide decisions in the immediate term, and then use the feedback loop from the customer response to define/refine the subsequent action and the overall CLTV. There is an old Chinese proverb: ‘Cross the river by feeling the stones.’
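One way to picture the pairing is a tiny decision loop: the long-range CLTV only sizes the investment, the near-term propensity decides whether to act at all, and each observed response nudges the propensity. The blending rule, the threshold, and the learning rate below are all my own illustrative assumptions, not a prescribed method:

```python
# Hypothetical sketch: pair a long-range CLTV estimate with a near-term
# propensity score, and let observed responses refine the propensity.
def next_action(cltv, propensity, threshold=0.5):
    """Act on near-term propensity; use CLTV only to size the investment."""
    if propensity < threshold:
        return "nurture"               # don't spend on an unlikely responder
    return "invest_high" if cltv > 1000 else "invest_low"

def update_propensity(propensity, responded, rate=0.2):
    """Feedback loop: nudge the score toward the observed response."""
    return propensity + rate * ((1.0 if responded else 0.0) - propensity)

p = 0.6
for responded in [False, False, True]:
    print(next_action(cltv=1500, propensity=p), f"(propensity={p:.2f})")
    p = update_propensity(p, responded)
```

Crossing the river by feeling the stones, in code: each short step (action, response, update) corrects the course instead of trusting one perpetual forecast.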
- Avoid predicting remote payoffs: most businesses have lived through the crushing 2008 recession and are staring at another one right now. It becomes tempting to try and forecast outlier events, e.g. ‘when will the recession end’. It is not just futile but downright foolish to model such events. The best problem solvers are always aware that it is possible to characterize the distribution (fat-tailed or otherwise) only after the events have occurred. Anyone claiming otherwise should be treated with a healthy dose of skepticism.
- Don’t settle for the normal distribution: as I have said before, life is anything but normal. This sounds obvious, but analysts continue to miss the point consistently. The underlying flaw is a reductionist over-reliance on the Central Limit Theorem and the normal distribution it promises. Non-normal distributions (e.g. power-law distributions) can need much larger sample sizes before their sample statistics start converging to the population values, if they converge at all. In other words, be hungry about gathering observations and keep refining your assumptions as your sample size grows (belief updating).
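A quick simulation makes the convergence gap visible. Here I compare the running mean of normal draws against draws from a Pareto distribution with tail index 1.5 (infinite variance, so the usual CLT guarantees do not apply); the specific parameters are illustrative assumptions:

```python
# Sketch: sample means of normal vs. heavy-tailed (Pareto, alpha=1.5) data.
import random

random.seed(0)

def pareto_sample(alpha=1.5):
    """Heavy-tailed draw via inverse transform; alpha < 2 => infinite variance."""
    return (1.0 - random.random()) ** (-1.0 / alpha)

# Both populations have true mean 3.0 (Pareto mean = alpha / (alpha - 1)).
for n in (100, 10_000, 1_000_000):
    normal_mean = sum(random.gauss(3.0, 1.0) for _ in range(n)) / n
    pareto_mean = sum(pareto_sample() for _ in range(n)) / n
    print(f"n={n:>9}: normal mean={normal_mean:.2f}, pareto mean={pareto_mean:.2f}")
# The normal mean settles almost immediately; the Pareto mean keeps lurching
# as occasional huge draws arrive -- more data, and more humility, required.
```

The asymmetry is the practical lesson: with heavy tails, a stable-looking sample mean can simply mean the big draw has not arrived yet.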
- Focus on the right metrics to truly understand the data. As any data scientist will tell you, in a non-normal world the standard metrics of mean (μ) and standard deviation (σ) are just not good enough. You need to look at metrics that better describe the underlying data, e.g. tail heaviness and outliers (kurtosis) and asymmetry (skewness).
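Both metrics are one-liners over standardized moments, so there is little excuse for stopping at μ and σ. A minimal sketch, using an invented customer-spend sample with one outsized "whale" to show how the higher moments flag what the mean hides:

```python
# Skewness and excess kurtosis from standardized moments (population form).
import statistics

def skewness(xs):
    """Third standardized moment: asymmetry of the data (0 for symmetric)."""
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3: tail weight relative to normal."""
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3.0

spend = [20, 22, 19, 21, 23, 20, 18, 500]   # one whale distorts everything
print(f"mean={statistics.fmean(spend):.1f}, stdev={statistics.pstdev(spend):.1f}")
print(f"skewness={skewness(spend):.2f}, excess kurtosis={excess_kurtosis(spend):.2f}")
```

The mean of ~80 describes no actual customer in the sample; the strongly positive skewness is what tells you the "typical" customer and the average customer are different animals.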
Again, this list is by no means exhaustive. If you have seen some other behaviors that have worked, do share. If you believe some of these don’t make sense – feel free to challenge. I am always open to updating my beliefs!
1. *Thinking, Fast and Slow* by Daniel Kahneman. A fantastic book from one of the most influential economists of our time.
2. *Antifragile* by Nassim Taleb. Discursive and not an easy read, but generally he is very entertaining.