Embracing the Un-Science of Qualitative Research Part Two – Ever-Evolving Prototypes are Ace

So, earlier we were talking about whether you can or should attempt to make qualitative research more scientific, and the three ways you might go about doing so:

  1. Use a relatively large sample size (deconstructed in Part One)
  2. Ensure that your test environment doesn’t change (which we’ll talk about now)
  3. Ensure that your test approach doesn’t change

One of the fundamentals of quantitative research is its systematic nature. It’s about measuring stuff. And, you don’t want that stuff to change as you’re measuring it for a number of reasons – not the least of which being that it makes it very difficult to plot on a graph :)

Qualitative research, on the other hand, is not about numbers so much. It is about the depth of insight that you can gain from having much greater and more flexible access to your research subjects. As you are seeking insight, not statistics, it matters far less whether whatever you are testing, say a prototype, changes a bit throughout the course of the study.

In my experience, some of the most fruitful research has occurred when the prototype has changed quite a bit from interview to interview – and sometimes even within an interview.

Here’s how it works (again, using the example study I described in Part One: a lab-based combination of interview and a wee bit of usability, intended to ensure that my client’s proposition is sound, that it is being well communicated, that users understand what the service is and how it works, and to weed out any critical usability issues).

On one side of the big brother mirror you have the researcher and the participants (sometimes known as ‘users’. Urgh). On the other secret side of the mirror you have your client including a member of their technical team (or, perhaps, a gun Visio or Omnigraffle driver, depending on what stage your prototype is at) with laptop at the ready.

As you proceed through the first couple of interviews, some really obvious stuff emerges. These are the things that you don’t really notice because you get too close to the project and you develop a kind of ‘design blindness’. Or they’re things that you never really thought about because you were concentrating on other more important or complex aspects of the design.

These findings are almost always crystal clear – the change required is obvious and rarely time consuming. What are your options? You can:

  1. observe and note the problem as it recurs in every single interview you perform in that study, or:
  2. fix the problem and spend your valuable research time learning about things you don’t already know about.

OK, so I might have biased that survey just a little, but the answer seems obvious to me. You get so little time to spend with actual end users – why spend it learning the same thing five times when you could spend it getting more and richer insight, and exploring the more complex problems?

Because you can use this technique to explore the more complex problems that emerge from qualitative research.

With complex problems, the solution is not so clear cut, so you often end up doing some improvised A/B testing – exploring a number of potential solutions with participants to see which is most effective, or at least which seems the right solution to explore further.

(Interestingly, the Wikipedia entry on A/B testing suggests that you need an audience of statistical significance for it to be effective… of course, I disagree.)

This approach to research is more demanding on the researcher than a typical ‘fixed script, fixed environment’ approach to testing. Using this approach, I can never be quite sure what will be on a page when it loads, whether the navigation items might have changed a bit and I need to explore that, or whether I’ll need to completely change the flow of my script because the part of the prototype we were going to explore next is a bit broken and we’ll need to come back to it later.

These extra demands are well repaid, though, by the forward leaps you will see your client take even before the research is complete – and well before your analysis is done and presented. Not only this, but the value of undertaking the research is well and truly demonstrated even as it is being performed – which is gratifying to you and often very exciting (or a relief) to your client.

So, again I say – if it’s numbers you need – go do a survey. Or use one of the great online remote usability testing tools, or any number of excellent quantitative methods. In their own ways, they are fantastically valuable.

But if you want to quickly weed out problems with your site/application/prototype – then I recommend that you consider using this technique of the ever-evolving prototype. It will certainly keep you awake as you’re researching, and you’ll get a rapid return on investment and excellent bang for buck as far as research techniques go.

What say you?

Embracing the Un-Science of Qualitative Research Part One – Small Sample Sizes are Super

If you’re into qualitative research at all, it probably didn’t take long before someone asked you about the statistical significance of your research and how you could back your findings with such a small sample size, or before you found others out there trying to make qualitative research look more scientific by extracting hard data from it.

There are three main ways that you can try to make qualitative research look more scientific, being:

  1. Use a relatively large sample size
  2. Ensure that your test environment doesn’t change
  3. Ensure that your test approach doesn’t change (don’t change the script, and stick to it)

Now, there are times when one or more of these tactics is appropriate, but in my experience, in many instances breaking these rules gives you much greater insight into the research question(s) you have set yourself.

There are many different kinds of qualitative research study, so in the interests of clarity, let’s pick one just like I’ve been working on this week – a lab-based combination of interview and a wee bit of usability, intended to ensure that my client’s proposition is sound, that it is being well communicated, that users understand what the service is and how it works, and to weed out any critical usability issues.

In the interest of not making you read an enormous post, I’ve divided this into three parts. So, let’s start with part one – large sample sizes. To the best of my knowledge, there is no scientific way to determine the correct number of participants in a qualitative research study. I’m no statistician (if you are, please feel free to weigh in here), but it is my understanding that the likelihood of reaching a statistically significant result using the methodology I’ve described above is pretty much nil. Not that it’s impossible, but you’d have to do a heck of a lot of interviewing.
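To put a rough number on that ‘heck of a lot’, here’s a back-of-the-envelope sketch using the standard two-proportion sample-size formula. The 50% vs 70% task-success figures, and the conventional 5% significance / 80% power levels, are my own illustrative assumptions, not numbers from the study described here:

```python
import math

def sample_size_two_proportions(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Participants needed *per group* to detect the difference between
    proportions p1 and p2 (defaults: two-sided alpha = 0.05, power = 0.80)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# To demonstrate that, say, 70% of users succeed at a task with design B
# versus 50% with design A, you'd need roughly this many per group:
print(sample_size_two_proportions(0.5, 0.7))  # 93
```

Ninety-odd interviews per group, for a single comparison – which is exactly why significance testing and lab-based qualitative work don’t mix.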

And here’s one golden rule of qualitative research that always holds true – if the research is going to take too long or be too expensive, it will not happen. You can count on that one.

As a result, sample size for qualitative research is often driven by the time and budget available – and that’s not necessarily a bad thing. In fact, this is one subject upon which Jakob Nielsen and I actually quite agree. Jakob says that most of the time elaborate usability testing is a waste of time and that you should test with no more than five users. He has a natty little graph that illustrates why this is so:

Problem Finding Curve

As you can see – by the time you’re up to five or six users, you’ve gotten to the bottom of most of the usability issues, and from then on you spend more time repeatedly seeing what you’ve already seen before and uncovering very few new findings. In my experience – this is as true for other aspects of research as it is for usability.
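For the curious, that curve comes from the Nielsen/Landauer model, in which any single test user uncovers, on average, a fixed share of the usability problems present. A quick sketch of the model (the 0.31 figure is the average they published; the actual proportion will vary from study to study):

```python
def share_of_problems_found(n_users, problem_visibility=0.31):
    """Cumulative share of usability problems found after n_users sessions,
    per the Nielsen/Landauer model: 1 - (1 - L)^n, with L ~= 0.31 on average."""
    return 1 - (1 - problem_visibility) ** n_users

for n in (1, 3, 5, 15):
    print(n, round(share_of_problems_found(n), 2))
# 1 user finds ~31%, 3 find ~67%, 5 find ~84% - and the next ten users
# mostly re-confirm what you've already seen.
```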

I would add a caveat: if you have user groups that are quite divergent in their attitudes, experience, or requirements/goals etc., you will want to apply this rule to each of those groups. So, for example, if you have an audience of ‘buyers’ and an audience of ‘sellers’, you’ll want no more than five from each key audience. One final caveat – when I say no more than five, I also say no fewer than three (and, what do you know, so does Jakob). You need at least three to distinguish actual patterns from things that are just personal quirks – because that’s what you’re looking for here: the patterns.

Is it scientific to use such a small user group? If you want to make it look that way, you can look to Jakob for some algorithms and graphs. In my experience – it doesn’t matter whether it is scientific or not. The richness of the information and insight you receive even from this small sample size makes the return on investment enormous – and the small sample size makes it an activity that almost any project can incorporate into their timeline and budget. At the end of the day – those things are far more important than scientific validity.

Is it worth doing qualitative testing with only a small sample size? Absolutely yes. In fact, in many ways, this is the best way to do this research. Qualitative research is not about numbers, it is about the richness of the information and insight you can get access to by spending time with the people who form your audience (or potential audience), and looking for patterns in their reactions and responses.

In many cases, increasing the size of your sample so that it seems more ‘valid’ is a waste of time and money, as the later interviews become more and more a repetition of findings you’ve already identified and confirmed. This time and money could be much better spent improving your product and conducting another round of research.

If it’s numbers you’re after – go do a survey. I say embrace and defend the small sample size of qualitative research.

What say you?

(Coming soon: Part Two – Ever-Evolving Prototypes are Ace)

Innies and Outies


I’ve been thinking on this a little bit lately. I’m just about to make some more changes at work, and I have to admit, for a while there I was toying with becoming an ‘innie’.

There is something quite seductive about the access to resources that innies have that outies never really get. Especially if you work somewhere really big. You might also get to work on stuff that *really* matters, that makes a difference to lots of people’s lives every day. This is pretty powerful stuff.

But… at the end of the day… you only get to work on more or less the same stuff, month after month. You can look forward six months into the future and have a fairly good idea of what you’ll be doing. This, for me, is the downside.

That, and I was never really sure that I wanted to back one company hard enough to commit to them with full time employment. (And they say men are the ones with commitment issues!)

Yes, I’ve been an outie almost all my career. I started off as an innie, but that job ended with a retrenchment (which I think was a good thing and I’m pretty sure didn’t scare me off the innie thing). And I’ve just chosen, again, to be as outie as you possibly can be – to freelance.

(For those of you *completely* lost by this innie/outie business – an ‘innie’ is someone who works within a company as an Information Architect, Interaction Designer, User Researcher, etc. An ‘outie’, on the other hand, consults or works for a consultancy, and does IA, IxD etc. on a project by project basis usually as contracted by the large company.)

Are innies and outies different kinds of people? I think, perhaps, they are.

Innies must surely have a lot of patience for internal politics. In many cases, they have a slow moving ship they need to gradually turn around. They need to work very hard, often, to make the business aware of their presence and their importance, and to be able to get involved with the decision making early enough to do their job. They need persistence. They need to be happy to work with the same people for long stretches, even if those people cause them great frustration. They need to be able to deal with bureaucracy that often flies in the face of what they are trying to achieve.

Where innies must manage lack of change, many outies have the opposite problem – never-ending flux. They not only have to adapt to new projects but often whole new industries every few months. New teams of people to work with, new sets of politics, new priorities, new objectives, new obstacles, new challenges. Oftentimes they have the same challenges as innies, but they know that these will go away as this project ends and the new one commences. And they can always just fire the client if it gets extreme. Outies, as a rule, require more developed ‘consultancy’ skills – the ability to ‘manage’ clients, to sell ideas, and to win clients’ confidence in their abilities and their approach.

I was recently at the Enterprise 2.0 conference in Boston where I’d been invited to speak. There were a lot of ‘innies’ at that conference. A lot of big company innies. I have to say, by and large, it made me happy to be an outie. I felt that being an outie makes me more agile, more connected, more responsive – I feel a drive to keep in touch with what the rest of the world is doing, whereas I got the distinct feeling that there was a lot more navel gazing (sorry) going on amongst the innies, and that when they did look outwards, they never looked too far.

Sure, their projects may be the really large, important ones. They might be building a space shuttle. But perhaps some part of the innovative work that I was able to do on a much less ‘important’ project will one day feed into the design of a very important project. Who knows… maybe one day they’ll outsource the space shuttle? :)

It is my suspicion that, even if you have worked as both an innie and an outie, you know which one of these you *really* are. That you’re more one than the other.

I suspect that having experience on both sides is a very valuable thing. Perhaps I should do more ‘innie’ work. But, at the end of the day… I’m most definitely an outie, and that’s how I do my best work.

How about you?

Image Credit: Mr Truffle @ Flickr

Remind me … what’s so great about Omnigraffle?

For many years, as the groundswell towards Mac has gathered pace, I’ve had to endure many of my colleagues scoffing at the fact that I continue to use Visio when they’ve seen the light and made the move to Omnigraffle.

I got my first Mac in more than a decade last week, so I’ve left behind all my Visio skills for the time being and am trying to ‘level up’ in Omnigraffle as quickly as possible!

But I don’t get what’s so special about it. Can someone remind me?

Making the switch to Mac has been a fascinating experience. I’ve had so little experience with OS X and Mac applications that I really feel like a beginner. And, no, it’s not as easy and idiot-proof as those of you who’ve been using Macs for a while seem to think. Sometimes, really basic tasks, like trying to save a document into a particular folder, seem completely impossible to me (there is a lot of functionality hidden behind little black triangles, I’ve come to discover).

I miss knowing all the shortcuts desperately. And knowing how to diagnose problems. I have to learn entirely new patterns and ways of interacting.

I’m a beginner. And it’s really frustrating, and disempowering. It makes me feel pretty dumb.

It also makes me think that I wish I could have this experience about once a year, to REALLY bring home what the experience of using the interfaces I design must be like for very many people. It lifts the ‘Curse of Knowledge’, or ‘The Curse of Expert Ennui’ as Anne Zelenka might describe it.

We know so much about making our computers work and so much about how they are designed… it’s impossible for us to forget enough to really empathise with novice users.

Which, of course, is why it’s so important to regularly, carefully and empathetically observe users of all levels of expertise and familiarity using your product. You might *think* you know what they understand, but you’re probably wrong. Design expertise is incredibly important, but it only goes so far. Regular observation of real people interacting with technology is a really important input to good design, and becoming a good designer.

Meanwhile, I’m taking any tips on how to become an expert in all things Mac. Let me have ’em.