It is often assumed that making decisions based on data — in other words, being “data driven” — leads to more objective decisions, and therefore better ones.

But that belief rests on three flawed assumptions:

  • numbers are objective
  • what is measured quantitatively matters more than what is measured qualitatively
  • numbers offer a complete and objective representation of the thing we are trying to understand

Problem #1 – The illusion of objective data

The problem is that data is not objective. A number may be a number, yes, but it is almost always shaped by earlier decisions: what gets measured, how it gets measured, how a metric is defined, what time period is used, which segment is included, and what gets left out. In other words, human interpretation enters the process long before the numbers appear.

Let’s start with a general example, then move to one that is closer to the day-to-day reality of digital products.

CPI (Consumer Price Index)

The Consumer Price Index (CPI) measures the average change in the price of a basket of goods and services intended to reflect household consumption, and it is widely used to track inflation.

But CPI does not measure “inflation” in the abstract. It measures price changes across a defined basket of goods and services, constructed to represent the average consumption patterns of certain households in a given place and time. That basket is weighted: food does not count the same as transportation, healthcare, rent, or leisure.

So while the index may look objective, it still reflects subjective decisions: which items are included, which are excluded, and how each one is weighted. If certain items are left out, or if the weight of items that rose sharply is reduced, the resulting inflation figure will be lower than what many people actually experience in everyday life.

So yes, CPI is a number produced through a mathematical calculation. But that does not make it free from interpretation. It still carries assumptions about what was counted and how.
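To make the weighting point concrete, here is a minimal sketch with entirely made-up categories, weights, and price changes (not real CPI data). The same set of price changes produces two different "inflation" figures depending only on how the basket is weighted:

```python
# Hypothetical year-over-year price changes per category. Illustrative only.
price_changes = {"food": 0.12, "transport": 0.08, "rent": 0.10, "leisure": 0.02}

def cpi_inflation(weights, changes):
    """Weighted average of price changes; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * changes[k] for k in changes)

# Basket A weights food heavily; basket B shifts weight toward leisure,
# whose price barely moved.
basket_a = {"food": 0.40, "transport": 0.20, "rent": 0.30, "leisure": 0.10}
basket_b = {"food": 0.20, "transport": 0.20, "rent": 0.30, "leisure": 0.30}

# Roughly 9.6% vs 7.6%: same prices, different "inflation".
print(round(cpi_inflation(basket_a, price_changes), 4))
print(round(cpi_inflation(basket_b, price_changes), 4))
```

Both figures are "correct" arithmetic. The gap between them comes entirely from a human decision about weights.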

Active users, heavy users

Now let’s move to an example that is much more familiar to designers and product managers: metrics like “active users,” “engagement,” or “heavy users.”

  • If we define active users as “users who opened the app” rather than “users who completed at least one transaction,” the number will obviously be higher in the first case, because not everyone who opens the app actually does something meaningful. The moment we define how something is measured, we are already making interpretive choices.
  • The same goes for heavy users. Imagine a ride-hailing app like Uber. The number of heavy users you report will vary depending on whether you define a heavy user as someone who takes one ride per week or someone who takes two rides per month.
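The first bullet can be sketched in a few lines. The event log below is invented for illustration; the point is only that the same raw data yields two different "active user" counts depending on which definition we pick:

```python
from datetime import date

# Hypothetical event log: (user_id, action, day). Illustrative data only.
events = [
    (1, "open", date(2024, 5, 1)),
    (2, "open", date(2024, 5, 1)),
    (2, "transaction", date(2024, 5, 1)),
    (3, "open", date(2024, 5, 2)),
    (4, "open", date(2024, 5, 3)),
    (4, "transaction", date(2024, 5, 3)),
]

# Definition A: an active user is anyone who opened the app.
active_by_open = {user for user, action, _ in events if action == "open"}

# Definition B: an active user completed at least one transaction.
active_by_transaction = {user for user, action, _ in events if action == "transaction"}

print(len(active_by_open))         # 4 "active users"
print(len(active_by_transaction))  # 2 "active users"
```

Neither number is wrong. Each simply encodes a different interpretive choice about what "active" means.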

In other words, metrics are numbers, but that does not make them neutral. They are not objective simply because they are numerical.

This is why the promise of being “data driven” is often misleading. The idea that relying on data makes a decision objective usually just replaces one visible bias with another that is more sophisticated and less obvious.

Problem #2 – What is not measured quantitatively is treated as less important

There is a second trap: the assumption that whatever can be measured quantitatively, at scale, matters more than what cannot be, or than what can only be captured through qualitative methods.

Clicks, time on screen, scroll depth, or number of features used are usually easy to track. Trust, understanding, perceived value, reduced friction, or quality, on the other hand, are much harder to measure, and when they are measured, it is often through qualitative methods.

More time, more pages = more engagement

Imagine a redesign after which users spend more time in the app, generate more clicks, and view more pages. It is easy to conclude that engagement has increased and that the redesign was a success.

But that may not be what is happening at all.

It could just as easily mean that completing a task now takes more effort. Maybe the redesign introduced friction instead of reducing it. Maybe navigation became less clear. If we do not understand what is behind the numbers, we cannot interpret them properly.
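A small sketch with invented session data shows how this can happen. After the hypothetical redesign, total time and page views both go up, while the share of sessions where the user actually completed their task goes down:

```python
# Hypothetical sessions: (seconds_in_app, pages_viewed, task_completed).
# Numbers are invented to illustrate the point, not real product data.
before = [(60, 3, True), (45, 2, True), (50, 3, True), (70, 4, False)]
after = [(110, 6, True), (95, 5, False), (120, 7, True), (130, 8, False)]

def summarize(sessions):
    """Return (total seconds, total pages, task completion rate)."""
    total_time = sum(s for s, _, _ in sessions)
    total_pages = sum(p for _, p, _ in sessions)
    completion = sum(c for _, _, c in sessions) / len(sessions)
    return total_time, total_pages, completion

print(summarize(before))  # (225, 12, 0.75)
print(summarize(after))   # (455, 26, 0.5)
```

A dashboard tracking only time and pages would report the "after" state as a win. Only the completion rate, plus qualitative research into why sessions got longer, reveals the friction.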

This is exactly why qualitative information matters. A number is still a number, but numbers need context if they are going to mean anything useful. And that is why qualitative input is often essential to interpreting quantitative data well.

Just as a number is not objective simply because it is a number, a metric is not self-explanatory simply because it is measurable. Even when a number looks objective, its interpretation can still be highly subjective.

If we do not understand what sits behind a metric, we can interpret it in almost any way we want — and that means we can misinterpret it. Once that happens, we can end up making the wrong decisions even when those decisions were, technically speaking, “based on data.”

Once again, the problem is not the number itself. The problem is how we interpret it.

And that leads to the third issue: selection bias.

Problem #3 – Bias in variable selection

Bias in the selection of the variables or metrics we choose to observe is another problem that reinforces the illusion of objectivity often associated with data-driven thinking.

We tend to assume that the data we analyze represents “reality,” when in fact it only represents the slice of reality we decided to measure. And that slice can give us a very different picture — not because the data is wrong, but because the variables we chose are not enough to capture the full phenomenon.

So even when an analysis is rigorous and data-based, it can still produce a partial or distorted view of what is happening. And that can lead to decisions that are, at best, suboptimal.

Take a simple example. Imagine that after redesigning the quote flow for travel insurance, you remove steps and see that users now make fewer clicks and complete the flow faster than before. It would be tempting to conclude that the experience improved, declare the project a success, and move on.

After all, the decision seems to be backed by data: “objectively,” there are fewer steps, fewer clicks, and less time spent on the task.

But there is a more important question: did sales increase?

When we measure usage without measuring outcomes, we are being data driven on top of an incomplete representation of reality. And that can lead to suboptimal decisions. In other words, objectivity gets confused with relevance: measuring something well is not the same as measuring what matters.
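The travel-insurance example can be reduced to a few lines with invented funnel numbers. Every usage metric improves, yet the outcome metric, conversion to purchase, gets worse:

```python
# Hypothetical funnel numbers for the quote flow, before and after the
# redesign. Illustrative only.
before = {"visitors": 1000, "avg_clicks": 14, "avg_seconds": 180, "purchases": 80}
after = {"visitors": 1000, "avg_clicks": 9, "avg_seconds": 120, "purchases": 60}

def conversion(funnel):
    """Share of visitors who completed a purchase."""
    return funnel["purchases"] / funnel["visitors"]

# Usage metrics improved: fewer clicks, less time on task...
assert after["avg_clicks"] < before["avg_clicks"]
assert after["avg_seconds"] < before["avg_seconds"]

# ...but the outcome metric got worse.
print(conversion(before))  # 0.08
print(conversion(after))   # 0.06
```

A report built only on clicks and time would call this redesign a success. Adding the outcome variable to the analysis changes the conclusion entirely, which is exactly the cost of selection bias.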

Conclusion

Numbers are not objective simply because they are numbers. They are shaped by decisions about what gets measured, how it gets measured, what gets left out, and how the result is interpreted. That is why using data to make decisions does not, by itself, guarantee better decisions.

The fallacy of the data-driven approach appears when we confuse data with objectivity, objectivity with relevance, and measurement with understanding. We may be looking at accurate numbers, correctly calculated and rigorously analyzed, and still end up making suboptimal decisions.

That is why, rather than being data driven, it is often better to be data informed. That means using quantitative data as an input, but not as the only one. Good decisions combine quantitative evidence with qualitative information, a clear understanding of the problem, critical thinking, product judgment, user insight, business strategy, and contextual awareness.

In the end, the goal is not simply to make decisions with data. It is to make better decisions — and that requires more than numbers alone.
