Intuition Is Just Internalized Data

How good is your gut? We make decisions without complete information every waking minute of our lives. When you roll out of bed in the morning and plant your feet beside your bed to stand up, do you first look down at the floor to verify it’s still where you think it is, or that there’s nothing there that could hurt you? If you do, seek professional help immediately — that’s no way to live. Our amazing brains synthesize patterns in our own perceptions of reality so that we aren’t overloaded with purposeful thought about the mundane. Our muscle memory kicks in, allowing us to put our minds to use solving higher-order abstract problems while our bodies are on autopilot. We don’t call these things “intuition,” but I believe the same principle applies.

Instincts and Data Analysis Aren’t Mutually Exclusive

Over the course of our studies and careers, we internalize knowledge about our respective disciplines and the industries in which we serve. We gain an understanding of the principles behind the technologies and sciences we use. We learn about the psychology of human behavior in those domains. Those experiences inform our ability to evaluate the problems we are tasked with solving. The more experiences we have, and the better our ability to absorb the lessons they teach, the better the foundation we have for making correct decisions in the absence of a complete set of pertinent data. We tend to trust the instincts of experts because we believe that the sum of their past experiences qualifies them to speak to the best solutions for current problems. In other words, it’s not that intuition leads us to good decisions in the absence of data; it’s just that the data set being relied upon may not be directly pertinent to the problem at hand.

We rightly praise those who are able to come to the correct decision without a detailed analysis of reams of data. After all, if the same conclusion can be reached with minimal effort, then time (and hence money) will have been saved. It’s the reason why we tend not to want to waste time proving what has already been proven, or analyzing what has already been established and accepted. And yet therein lies the rub. Often — whether out of a good-natured will to focus on what we think matters most, laziness, bias, or even the sheer hubris of our own foregone conclusions — we get it wrong. And when the assumptions upon which we base meaningful decisions are wrong, there can be substantial consequences. There is immense value in intuiting which of our assumptions would be a waste of time and resources to validate. Perhaps somewhat ironically, this instinct too is forged by experience.

Show Your Work

Think back to your education in mathematics. The instructor would show a problem to the class and ask for its solution. Some who had an aptitude for the material could work out the solution in their heads without having to laboriously break down the problem using memorized formulas and proofs. But for most instructors, that wasn’t good enough. The instructor would insist that the problem solver show their work, in part so that the rest of the class could benefit, but also — and perhaps more importantly — so that the instructor could verify that the solver had a firm grip on why the proposed solution was correct. If one rushes to a solution without understanding and demonstrating the principles behind what makes it correct, it will be more apt to be questioned, misinterpreted, and misapplied to other situations in the future.

Even if on the face of it everyone agrees with your conclusions, there may yet be lingering unspoken doubts. Or, worse still: there may be tiers of management above you who disagree with or misunderstand those conclusions. And if in the future you move on to bigger and better things, leaving solid documentation of why you chose to do what you did will be greatly appreciated by your replacement and the rest of the team. That’s why it’s usually best to “show your work” for the meaningful decisions you make, so that you and your team can quickly refer back to the data in support of your conclusions.

Quantify What Can Be Quantified

It’s one thing to plan a feature or roadmap around anecdotal evidence or subjective intuition; it’s another to validate hypotheses by measuring hard data. It’s not always easy or straightforward, but it can usually be done — and it’s usually worth doing. What is the relative value of doing alternative A over alternative B? Build a cost-to-value matrix and do the math. Weight the factors according to the goals and business objectives of the company. The results sometimes surprise. But even if they only confirm your instincts, you’ll have created an artifact that documents the reasons behind your decision, and ideally you will have created a framework by which to evaluate similar alternatives in the future.
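To make that concrete, here is a minimal sketch of what such a weighted cost-to-value matrix might look like in code. The criteria, weights, scores, and option names are all hypothetical placeholders; in practice they would come from your own business objectives and estimates.

```python
# A minimal sketch of a weighted cost-to-value matrix. Weights reflect
# (assumed) business priorities and should sum to 1; scores are on a
# 1-5 scale. All criteria, weights, and scores here are hypothetical.
criteria_weights = {
    "customer_value": 0.4,
    "revenue_impact": 0.3,
    "implementation_cost": 0.2,  # scored inversely: 5 = cheapest to build
    "strategic_fit": 0.1,
}

# Each alternative's score on each criterion (hypothetical numbers).
alternatives = {
    "A": {"customer_value": 4, "revenue_impact": 3,
          "implementation_cost": 2, "strategic_fit": 5},
    "B": {"customer_value": 3, "revenue_impact": 4,
          "implementation_cost": 4, "strategic_fit": 3},
}

def weighted_score(scores: dict) -> float:
    """Sum of each criterion's score multiplied by its weight."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

# Rank the alternatives from highest to lowest weighted score.
for name, scores in sorted(alternatives.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"Option {name}: {weighted_score(scores):.2f}")
```

Even a crude framework like this forces you to write your weights down, and the weights themselves become part of the artifact documenting how the decision was made.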

The other benefit to measurement is understanding how much better one alternative is than another. You may correctly reason that option A is better than option B for some anecdotal reason, but you’re never on as solid a footing as when you’ve quantified the disparity. And once option C comes along, how will that factor in? The more alternatives being judged relative to one another, the more difficult it becomes to choose between them without some kind of objective way of measuring them. Quantify what can be quantified. Most things can be measured — even if only indirectly. Of course there will be things that either cannot be quantified, or that would not be feasible or reasonable to quantify. But that should not be an excuse not to measure what can be measured.

Be Objective and Root Out Your Biases

Above all, data exposes bias. We’re not as smart as we think we are. It takes courage to be willing to prove yourself wrong, to go where the data leads. In the context of product management specifically, we sometimes have a tendency to become passionate about a solution for any number of reasons, chief among them being that we came up with it ourselves. But it’s only through careful analysis that we can properly evaluate our pet solutions against solutions others have proposed. There is certainly a place for passion. As product managers, we need to believe in our products. But that passion should be tempered by the greater good of the company. Resources should be allocated according to an objective evaluation of cost versus value, not as a competition between the political influence and passion of individual product managers.

The only scenario in which no data is better than at least some data is when there is a reasonably high risk that the limited data set that does exist is irredeemably skewed in a direction that is not representative of the whole — that is, when you have a sampling bias. In these cases, common sense and the best practices of statistical analysis should help us avoid the fallacious reasoning that leads to invalid conclusions. Does what the data is telling you pass the “smell test”? If it doesn’t, figure out why rather than throwing your hands in the air and concluding that the merit of proposed solutions can’t be quantified. If the raw data is erroneous, fix it. If the formula doesn’t suit you due to factors you hadn’t originally considered, change the formula. If you can’t figure out why the conclusion doesn’t seem right to you, you should strongly consider the possibility that your intuition is wrong in this case. Perhaps the experience of proving yourself wrong with data will give you better instincts the next time you see a similar problem.
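As one concrete example of that kind of sanity check, here is a minimal sketch that tests whether a survey sample’s mix of customer segments matches the known mix of the whole customer base, using a chi-square goodness-of-fit test. The segment names and every number below are hypothetical, and the 0.05 threshold is just a conventional default.

```python
# A minimal sketch of a sampling-bias sanity check: does the segment mix
# of a survey sample match the known mix of the full customer base?
# All segment names and counts below are hypothetical placeholders.
from scipy.stats import chisquare

# Assumed share of each customer segment in the full population.
population_share = {"enterprise": 0.2, "mid-market": 0.3, "smb": 0.5}

# Survey responses received from each segment (hypothetical counts).
sample_counts = {"enterprise": 120, "mid-market": 90, "smb": 90}

total = sum(sample_counts.values())
observed = [sample_counts[seg] for seg in population_share]
expected = [share * total for share in population_share.values()]

# Chi-square goodness-of-fit: a small p-value means the sample mix is
# unlikely to have arisen from the population mix by chance alone.
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.05:
    print(f"Sample mix differs from the population (p={p_value:.4g}); "
          "conclusions drawn from it may be skewed.")
else:
    print(f"No evidence the sample is skewed (p={p_value:.4g}).")
```

If a check like this fails, the remedy is the one described above: fix or reweight the data rather than abandoning measurement altogether.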
