
Are you in London? Want to be part of a research project? (There’s £40 in it for you!)

I’m doing some guerrilla research for a new web service this Friday and Saturday in London and I’m looking for participants. The specs are pretty loose – you need to use internet banking and be responsible for managing your own and/or your household’s finances. (You’re not allowed to be a financial expert, though, or extraordinarily rich.)

I’m happy to come to you and will take up only about 30–45 minutes of your time – and you get £40, as well as the fun of participating (it will be kind of fun, I promise – definitely not difficult!). No, you don’t need to be a finance whiz, have a current budget, or be showing a profit.

Drop me an email at: [email protected] if you’re keen and we’ll work out a time and a place!

UPDATE: I’m particularly interested in finding a couple of stay-at-home mums who might fit this bill… is this you? Do you know anyone like this? Please send them (or yourself) my way!

Embracing the Un-Science of Qualitative Research Part Three – Improvising is Excellent

So, recently we’ve been talking about Qualitative Research and how it’s not so scientific, but that ain’t bad.

We identified three ways that you *might* make Qualitative Research more scientific and have been pulling those approaches apart. They are to:

  1. Use a relatively large sample size (which we destroyed here)
  2. Ensure that your test environment doesn’t change (which was shown to be foolish here)
  3. Ensure that your test approach doesn’t change (which we’ll take down now).

So, one of the first things you learn when you come to qualitative research, particularly usability testing, is to write a test script. A test script is good because you’ll be spending an hour or so with each person and you need something to prompt you – to make sure you cover what you need to cover, and to give your session a good structure.

But scripts are supposed to be used as a guide – you shouldn’t follow them word for word! You should feel confident and comfortable deviating from the script at the right times.

When are the right times to deviate from the script? I think there are two key times.

The first is when you already know what the answer to your question will be – then there is very little reason to ask it. Sometimes it is helpful to have an array of people responding in the same way to the same task or question, especially if your client is attached to a bad idea for some reason; repetition can help bring them around. Most of the time, though, you’re just wasting valuable research time covering old ground when you could be covering new.

Very often it’s not until the first one or two research sessions that some issues become glaringly obvious. You wonder why you didn’t pick them up before – but that’s why we do this kind of testing/research. If you’re not updating your prototype (as recommended in Part Two), then you should update your script. Don’t cover old ground for no good reason; research time is too valuable for that.

The other main reason for deviating from the script is when the person you’re interviewing says or does something really interesting. Your script tries to anticipate people’s reactions, up to a point – but the whole point of doing this research is to learn things you didn’t know before, and sometimes what you thought you’d find and what you actually find are very distant from one another. This is great! It means you’re doing good research. (It’s alarmingly easy to find the answers you want to find by researching poorly.)

If you’re interviewing someone and they say something interesting and perhaps unexpected – follow it! This is potentially research gold, and sticking to a script shouldn’t stop you from chasing it. You may, in fact, want to alter your script for future interviews depending on what you discover here.

Of course, this means that when it comes time to write your report you won’t be able to say things like ‘80% of people said this’ or ‘only 10% of people did that’. People do like to say those kinds of things in their reports and, of course, clients tend to like to hear them. People like numbers. (Just think of how they latch on to stupid concepts like the ‘3-click rule’.) But you shouldn’t really be using numbers like this in your reporting anyway. After all – as we talked about in Part One – your sample isn’t statistically significant; you’re probably talking about eight, maybe twelve people. Your percentages, no matter how popular, are not particularly meaningful, AND you are helping to fuel the perception that research is about numbers like this when, as we agreed earlier, qualitative research is really all about depth of insight – quantitative research is what you do if you want to pull out fancy percentages.
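
If you want to see just how hollow those percentages are, here’s a minimal sketch of the uncertainty around a finding like ‘8 of 10 participants’. (The Python and the choice of the Wilson score interval are my illustration, not anything from the study itself.)

```python
import math

# Illustrative only: 95% Wilson score interval for a proportion
# observed in a small-sample study.
def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

low, high = wilson_interval(8, 10)  # '80% of people said this', n = 10
print(f"8 of 10 = 80%, but the 95% interval runs {low:.0%} to {high:.0%}")
# -> roughly 49% to 94% - far too wide to report as a hard number
```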

So, write yourself a script and use it for inspiration, reminders and structure – but don’t be constrained by it, and do let the content of your interviews guide the questions you ask and what you report.

Which makes me think… perhaps we need to talk some about how to ask good questions whilst interviewing… soon, I think.

(Brief apologies for the delay between parts 2 and 3 – I had to do some holidaying in Italy. Briefly back in London before flying out to UX Week tomorrow morning. Are you having a ridiculously busy August too?!)

Embracing the Un-Science of Qualitative Research Part Two – Ever-Evolving Prototypes are Ace

So, earlier we were talking about whether you can or should attempt to make qualitative research more scientific, and the three ways you might go about doing this, which are to:

  1. Use a relatively large sample size (deconstructed in Part One)
  2. Ensure that your test environment doesn’t change (which we’ll talk about now)
  3. Ensure that your test approach doesn’t change

One of the fundamentals of quantitative research is its systematic nature. It’s about measuring stuff. And you don’t want that stuff to change while you’re measuring it, for a number of reasons – not least that it makes it very difficult to plot on a graph :)

Qualitative research, on the other hand, is not so much about numbers. It is about the depth of insight you can gain from having much greater and more flexible access to your research subjects. As you are seeking insight, not statistics, it matters far less whether whatever you are testing – say, a prototype – changes a bit throughout the course of the study.

In my experience, some of the most fruitful research has occurred when the prototype has changed quite a bit from interview to interview – and sometimes even within an interview.

Here’s how it works (again, using the example study I described in part one: a lab-based combination of interview and a wee bit of usability testing, intended to ensure that my client’s proposition is sound, that it is being well communicated, that users understand what the service is and how it works, and to weed out any critical usability issues).

On one side of the Big Brother mirror you have the researcher and the participants (sometimes known as ‘users’. Urgh). On the other, secret side of the mirror you have your client, including a member of their technical team (or, perhaps, a gun Visio or OmniGraffle driver, depending on what stage your prototype is at), laptop at the ready.

As you proceed through the first couple of interviews, some really obvious stuff emerges. These are the things that you don’t really notice because you get too close to the project and you develop a kind of ‘design blindness’. Or they’re things that you never really thought about because you were concentrating on other more important or complex aspects of the design.

These findings are almost always crystal clear – the change required is obvious and rarely time-consuming. What are your options? You can:

  1. spend time on, and note, the problem as it occurs in every single interview you perform in that study; or
  2. fix the problem and spend your valuable research time learning about things you don’t already know.

OK, so I might have biased that survey just a little, but the answer seems obvious to me. You get so little time to spend with actual end-users – why spend it learning the same thing five times when you could be getting more and richer insight, and exploring the more complex problems?

And you can use this same technique to explore the more complex problems that emerge from qualitative research.

With complex problems the solution is not so clear cut, so often you end up doing some improvised A/B testing, where you explore a number of potential solutions with participants to see which is most effective – or, at least, which seems the right solution to explore further.

(Interestingly that Wikipedia entry I linked to there suggests that you need an audience of statistical significance for A/B testing to be effective… of course, I disagree).
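
To put a hypothetical number on that disagreement: with a handful of participants per variant, even a lopsided result is nowhere near significant – which is exactly why improvised A/B at this scale is about finding a direction to explore, not proof. A wee sketch with made-up counts:

```python
from scipy.stats import fisher_exact

# Hypothetical improvised A/B: 4 participants saw version A, 4 saw
# version B, and A 'won' on a task 3 times to B's 1.
_, p_value = fisher_exact([[3, 1], [1, 3]])
print(f"p = {p_value:.2f}")  # ~0.49 - nowhere near significance
# ...and that's fine: the 3-1 split is a lead to chase, not a statistic.
```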

This approach to research is more demanding on the researcher than a typical ‘fixed script, fixed environment’ approach to testing. Using it, I can never be quite sure what will be on a page when it loads, whether the navigation items have changed a bit and I need to explore that, or whether I need to completely change the flow of my script because the part of the prototype we were going to explore next is a bit broken and we’ll have to come back to it later.

These extra demands are well repaid, though, by the forward leaps you will see your client take even before the research is complete – and well before your analysis is done and presented. Not only that, but the value of undertaking the research is well and truly demonstrated even as it is being performed – which is gratifying for you and often very exciting (or a relief) for your client.

So, again I say: if it’s numbers you need, go do a survey. Or use one of the great online remote usability testing tools, or any number of excellent quantitative methods. In their own ways, they are fantastically valuable.

But if you want to quickly weed out problems with your site/application/prototype, then I recommend you consider this technique of the ever-evolving prototype. It will certainly keep you awake as you’re researching, and you’ll get rapid return on investment and excellent bang for buck as far as research techniques go.

What say you?

Embracing the Un-Science of Qualitative Research Part One – Small Sample Sizes are Super

If you’re into qualitative research at all, it won’t have taken long before someone asked you about the statistical significance of your research and how you can back your findings with such a small sample size – or before you found others out there trying to make qualitative research look more scientific by extracting hard data from it.

There are three main ways you can try to make qualitative research look more scientific:

  1. Use a relatively large sample size
  2. Ensure that your test environment doesn’t change
  3. Ensure that your test approach doesn’t change (don’t change the script, and stick to it)

Now, there are times when one or more of these tactics is appropriate, but in many instances it has been my experience that by breaking these rules you are able to get much greater insight into the research question(s) you have set yourself.

There are many different kinds of qualitative research study, so in the interests of clarity let’s pick one just like the study I’ve been working on this week: a lab-based combination of interview and a wee bit of usability testing, intended to ensure that my client’s proposition is sound, that it is being well communicated, that users understand what the service is and how it works, and to weed out any critical usability issues.

In the interest of not making you read an enormous post, I’ve divided this into three parts. So, let’s start with part one – the large sample size. To the best of my knowledge there is no scientific way to determine the correct number of participants in a qualitative research study. I’m no statistician (if you are, please feel free to weigh in here), but it is my understanding that the likelihood of reaching a statistically significant result using the methodology I’ve described above is pretty much nil. Not that it’s impossible – but you’d have to do a heck of a lot of interviewing.
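
If you want a rough sense of ‘a heck of a lot’, the standard back-of-envelope formula for estimating a proportion to within a given margin of error makes the point (the calculation is my illustration, not something any client has ever paid for):

```python
import math

# n = z^2 * p(1 - p) / e^2, using the worst case p = 0.5 and z = 1.96
# for 95% confidence.
def sample_size(margin, z=1.96, p=0.5):
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size(0.10))  # 97 interviews just for a +/-10% margin of error
print(sample_size(0.05))  # 385 for +/-5% - a heck of a lot of interviewing
```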

And here’s one golden rule of qualitative research that always holds true – if the research is going to take too long or be too expensive, it will not happen. You can count on that one.

As a result, sample size for qualitative research is often driven by the time and budget available – and that’s not necessarily a bad thing. In fact, this is one subject on which Jakob Nielsen and I actually quite agree. Jakob says that, most of the time, elaborate usability testing is a waste, and that you should test with no more than five users. He has a natty little graph that illustrates why this is so:

[Figure: Jakob Nielsen’s problem-finding curve]

As you can see, by the time you’re up to five or six users you’ve gotten to the bottom of most of the usability issues; from then on you spend more and more time seeing what you’ve already seen, and uncovering very few new findings. In my experience, this is as true for other aspects of research as it is for usability.
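
If you’re curious about the maths behind the graph, it’s usually modelled with Nielsen and Landauer’s problem-discovery formula. A quick sketch – the 31% per-participant discovery rate is Nielsen’s published estimate, and your own projects may well differ:

```python
# problems_found(n) = 1 - (1 - L)^n, where L is the share of problems a
# single participant uncovers. Nielsen's estimate is L ~= 0.31.
L = 0.31

for n in range(1, 9):
    found = 1 - (1 - L) ** n
    print(f"{n} participant(s) -> ~{found:.0%} of problems found")
# -> by five participants you've surfaced ~84%, and each extra one adds little
```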

I would add a caveat: if you have user groups that are quite divergent in their attitudes, experience, or requirements/goals etc., you will want to apply this rule to each of those groups. So, for example, if you have an audience of ‘buyers’ and an audience of ‘sellers’, you’ll want no more than five participants from each key audience. One final caveat – when I say no more than five, I also say no fewer than three (and, what do you know, so does Jakob). You need at least three people to separate actual patterns from personal quirks – because that’s what you’re looking for here: the patterns.

Is it scientific to use such a small user group? If you want to make it look that way, you can look to Jakob for some formulas and graphs. In my experience, it doesn’t matter whether it is scientific or not. The richness of the information and insight you receive even from this small sample size makes the return on investment enormous – and the small sample size makes it an activity that almost any project can incorporate into its timeline and budget. At the end of the day, those things are far more important than scientific validity.

Is it worth doing qualitative testing with only a small sample size? Absolutely yes. In fact, in many ways this is the best way to do this research. Qualitative research is not about numbers; it is about the richness of the information and insight you can access by spending time with the people who form your audience (or potential audience), and looking for patterns in their reactions and responses.

In many cases, increasing the size of your sample so that it seems more ‘valid’ is a waste of time and money, as the later interviews become more and more a repetition of findings you’ve already identified and confirmed. That time and money would be much better spent improving your product and conducting another round of research.

If it’s numbers you’re after – go do a survey. I say embrace and defend the small sample size of qualitative research.

What say you?

(Coming soon: Part Two – Ever-Evolving Prototypes are Ace)
