It must be budgeting season, or maybe folks are getting their “Naughty” and “Nice” lists ready early, but the practice of decision making is getting publicity. Yet in all the literature about different approaches and rationales, one key component remains missing.
The new Harvard Business Review features five (count ‘em) models for decision making, based on the predictability of various known outcomes. But one challenge of selecting an alternative based on a probabilistic determination of future “success” – and one omitted by the authors – is this: how is success defined? Will we (our customers, etc.) know what success looks like if we reach it?
That’s why we counsel our clients, before making any decision or recommendation, to develop specific, measurable objectives. Defining the important criteria ahead of time – and agreeing on the relative importance of each – goes a long way toward making a better decision.
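This “agree on criteria and weights first, score alternatives second” approach can be sketched as a simple weighted-scoring matrix. The criteria, weights, and scores below are invented purely for illustration:

```python
# Illustrative weighted-scoring matrix. The weights are agreed on
# BEFORE any alternative is scored, so the definition of "success"
# is fixed in advance. All names and numbers here are hypothetical.

criteria = {                 # criterion -> relative importance (weight)
    "projected revenue": 5,
    "upfront cost":      3,  # higher score = lower cost
    "time to launch":    2,
}

# Each alternative is scored 1-10 against every agreed criterion.
alternatives = {
    "Location A": {"projected revenue": 8, "upfront cost": 4, "time to launch": 6},
    "Location B": {"projected revenue": 6, "upfront cost": 9, "time to launch": 7},
}

def weighted_score(scores, weights):
    """Sum of (weight * score) across all agreed criteria."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in alternatives.items():
    print(f"{name}: {weighted_score(scores, criteria)}")
```

The point of the exercise isn’t the arithmetic; it’s that the weights are negotiated and locked in before anyone sees how their preferred option scores.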
Several of these models include “causal” analysis, reducing some of the uncertainty by focusing on picking from known alternatives (the examples include “selecting a new retail location” and “choosing a rollout for a new product”). In each of these instances, more far-reaching decisions need to have already been made: before selecting where to locate a new restaurant, a decision on expansion in general must be reached; deciding how to pilot a new product isn’t helpful if the decision to introduce new products, or an understanding of how new products fit the general strategy, hasn’t been resolved.
In these instances, again, having a clear understanding of the results and benefits desired, and the constraints faced, ensures that management isn’t making decisions just for the sake of it.
In a recent op-ed in The New York Times, “Why We Make Bad Decisions,” Noreena Hertz recounted a tough medical decision she faced when confronted with conflicting diagnoses. In part, this underscored a common (but flawed) perception that problem-solving (e.g., “troubleshooting,” “diagnosing”) and decision making are one and the same (more on that some other time).
But, as she discusses (and as the HBR authors acknowledge), the influence of our individual biases is remarkable. She cites one study from Emory University (my alma mater) in which MRIs show that the critical-thinking part of our brain actually shuts down when we receive advice from perceived “experts.” She also cites the dopamine rush we feel when we discover data that backs up our preconceived notions.
Why is that important? It comes back to setting clear, rational objectives in advance. Without that framework, structured specifically to manage both “good” data (facts we like) and “bad” data (facts that don’t fit our desired outcome), our decision making will rest on luck: the luck of which experts we pick, which probabilities we assign to outcomes, which outcomes we can conceive, and so on.
Preparing and prioritizing clear objectives won’t guarantee a good decision – it just ups your chances.