This is the fourth in a series of posts examining some of the most common and most problematic problems we need to consider when looking to scale research in organisations. You can start with the first post in this series here.
Here are five common dysfunctions that we are contending with.
- Teams are incentivised to move quickly and ship, and to care less about reliable and valid research
- Researching within our silos leads to false positives
- Research as a weapon (validate or die)
- Quantitative fallacies
- Stunted capability
In this post, we’re looking at quantitative fallacies – what happens when teams put too much faith in numbers.
Dysfunction #4 – Quantitative fallacies
I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be. – Lord Kelvin
I fear many people feel like Lord Kelvin.
There seems to be an intuition that knowledge which cannot be expressed numerically is less satisfactory than knowledge that fits into a graph. For an assertion to be taken seriously, it must have a number attached to it. Everything else is anecdote.
Perhaps we have inherited this from finance. Finance, after all, are the masters of presenting future fictions with bold numbers and graphs. Finance, whose authority rarely appears to be challenged.
Organisations love quantitative research because it is fast and feels definitive.
Smash out a survey, launch an experiment, categorise customer feedback by keyword, look at the product analytics. Somehow, numbers just feel more reliable. More trustworthy.
The McNamara fallacy (also known as the quantitative fallacy), named for Robert McNamara, the US Secretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations (or metrics) and ignoring all others. The reason given is often that these other observations cannot be proven.
– Daniel Yankelovich “Corporate Priorities: A continuing study of the new demands on business.” (1972)
In my experience, presenting a number boldly is much less likely to be challenged than any assertion backed up by more qualitative evidence. Yet surprisingly few people seem to be inclined (or able) to ensure that the work done to establish that number has any rigour.
Take surveys. How many organisations take the time to do cognitive interviewing to ensure that the data collected in the survey is valid and reliable? Very few. Most don’t know it is even something you should do, and the others don’t want to spend the time.
Do we just have blind faith that our survey respondents will make sense of the questions the same way we do? Or do we actually not care that much about validity? We just want an answer. A definitive-sounding answer. Some data to show that we are evidence based.
How many teams, when A/B testing two versions of a design using unmoderated research, watch the videos to make sure that people really did complete the task in a way that could be considered an adequate user experience? To check that the people who undertook the research bear any resemblance to who they said they were in the screener? To ensure that the things they say and the scores they give make sense when compared to the experience they actually had?
All sounds a bit time consuming, doesn’t it, when all you really want is data to tell you what to do. To take the decision out of your hands.
We’ve managed to convince ourselves that with a large enough volume of respondents, these problems go away. But the fact is, these numbers can easily be completely misleading. People don’t understand the survey question and answer anyway. To get the incentive. To find out what other questions you’re asking, because some of us are completists.
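To make the point concrete, here is a minimal, entirely hypothetical sketch (the rating scale, the “true” score and the 20% misread rate are my assumptions, not figures from any study). It simulates a rating question where a fixed share of respondents misread it and answer at random. A bigger sample makes the misleading average more precise, not more correct – the systematic error never goes away.

```python
import random

# Hypothetical illustration: a 1-5 rating question where 20% of
# respondents misread it and answer at random. The bias this introduces
# does not shrink as the sample grows; it just gets measured more precisely.
random.seed(42)

TRUE_MEAN = 2.0      # what careful readers would actually report (assumed)
MISREAD_RATE = 0.2   # assumed share of respondents who misunderstand

def simulate_survey(n):
    scores = []
    for _ in range(n):
        if random.random() < MISREAD_RATE:
            scores.append(random.randint(1, 5))          # misreader: random answer
        else:
            scores.append(random.gauss(TRUE_MEAN, 0.5))  # careful reader
    return sum(scores) / len(scores)

for n in (100, 10_000, 1_000_000):
    print(n, round(simulate_survey(n), 2))
# The average settles around ~2.2, not the "true" 2.0:
# more respondents only make a biased number look more trustworthy.
```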
Recently my team did some survey testing – we were testing a feature prioritisation survey (not my favourite). We observed people who told us they didn’t understand a feature as it was described in the survey. It sounded cool, though, so they went on to prioritise it highly against the other features regardless.
How often does this happen? No one knows.
The first step is to measure whatever can be easily measured. This is OK as far as it goes.
The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading.
The third step is to presume that what can’t be measured easily really isn’t important. This is blindness.
The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.
– Daniel Yankelovich “Corporate Priorities: A continuing study of the new demands on business.” (1972)
There are multiple, related quantitative fallacies.
Some, like McNamara and Lord Kelvin, believe that quantitative data is simply superior. Others are more subtle – they trust that the trade-off for speed and convenience does not have a dangerous impact on validity and reliability. Still others result from an absence of experience and ability in defending qualitative data and critiquing quantitative methods.
The fastest and most ‘definitive’ sounding methodologies (and the tools that enable them) have never been more popular. While it is encouraging that more and more people are keen to take a more human centred approach to product design, experienced researchers need to intervene to make sure that these methods are being used, and critiqued, appropriately.
We need to ensure that our organisations don’t over-index on rapid, quantitative methods because they play well with senior leadership. And when we do use these methods, we need to maintain a high enough quality standard that we can genuinely stand behind the numbers and believe they have some reliability and validity.
You can read about the next dysfunction here.