Did I mention I’m freelancing? (or, coping strategies from the dining room desk)

So, I don’t remember whether I specifically told you or not, but I’ve just gone out to work on my own.

A freelance what, I’m not exactly sure… I’m hoping to continue doing what I’ve been doing for a while now – design/user research and user-centred design (including information architecture and interaction design) – but some other interesting opportunities are out there too… All good fun.

What this means is that I get to work from home quite a bit and am more or less entirely responsible for getting stuff done, or not. Both of these present great challenges for someone who is – I’ll admit it – a bit of a procrastinator.

Fortunately this is not my first stint as a freelancer, and I’ve developed some tactics over the years that have proved a godsend in getting work done and not letting it drag out forever.

My number one favourite technique is called ‘structured procrastination’ and here’s how it works. You’ve got a to-do list. It’s reasonably long. Make sure it’s got ALL the things you should be doing or should have done on it. Then, attempt to tackle the task you think you *should* be doing. You may have some success, but if you’re like me, this is a task you’re probably doing ahead of time, and the lack of adrenaline makes it less compelling than it could be. Rather than just surfing the internet or doing something even less constructive, go back to your list and pick something else on it to do.

The strange thing is that when you feel you *should* be doing something else (let’s call it your primary task), all of the other tasks on your list suddenly start looking sooooo much more appealing.

I find that whilst procrastinating about my primary task, I manage to plough through a pile of things that I didn’t think I’d get to for quite a while.

Sooner or later, my primary task re-engages me, or it moves from being the primary task to a secondary task which – you guessed it – makes it more appealing all of a sudden.

I know I’m not the only person who uses this technique, and it’s not one I made up myself. I’m not sure how universal it is – but if you’re getting desperate, give it a try and see how you go.

Do you have any foolproof procrastination busting techniques?

(Oh, and yes. I’m interested in hearing from you if you’ve got an interesting project – email me at leisa(dot)reichelt(at)gmail(dot)com)

Embracing the Un-Science of Qualitative Research Part Three – Improvising is Excellent

So, recently we’ve been talking about Qualitative Research and how it’s not so scientific, but that ain’t bad.

We identified three ways that you *might* make Qualitative Research more scientific and have been pulling those approaches apart. They are to:

  1. Use a relatively large sample size (which we destroyed here)
  2. Ensure that your test environment doesn’t change (which was shown to be foolish here)
  3. Ensure that your test approach doesn’t change (which we’ll take down now).

So, one of the first things you learn when you come to qualitative research, particularly usability testing, is to write a test script. A test script is good because you’ll be spending an hour or so with each person, and you need something to prompt you so that you cover everything you need to and keep a good structure to your session.

But this is how scripts are supposed to be used – as a guide. You shouldn’t literally use them as a script! And you should feel confident and comfortable deviating from the script at the right times.

When are the right times to deviate from the script? I think there are two key times.

The first is when you already know what the answer to your question will be – then there is very little reason to ask it. Sometimes it is helpful to have an array of people responding in the same way to the same task or question – especially if your client is particularly attached to a bad idea for some reason. Repetition can help bring them around. Most of the time, though, you’re just wasting valuable research time covering old ground when you could be covering new.

Very often it’s not until the first one or two research sessions that some issues become glaringly obvious. You wonder why you didn’t pick them up before, but that’s exactly why we do this kind of testing/research. If you’re not updating your prototype (as recommended in Part Two), then you should update your script. Don’t cover old ground for no good reason – research time is too valuable for that.

The other main reason for deviating from the script is if the person you’re interviewing says or does something really interesting. Your script tries to anticipate what people’s reactions might be, to a point – but the point of doing this research is to learn things you didn’t know before, and sometimes what you thought you’d find and what you actually find are a long way apart – and this is great! It means you’re doing good research. (It’s alarmingly easy to find the answers you wanted to find by researching poorly.)

If you’re interviewing someone and they say something interesting and perhaps unexpected – follow it! This is potentially research gold, and sticking rigidly to a script shouldn’t stop you from chasing it. You may, in fact, want to alter your script for future interviews depending on what you discover here.

Of course, this means that when it comes time to write your report you won’t be able to say things like ‘80% of people said this’ or ‘only 10% of people did that’. People do like to say those kinds of things in their reports and, of course, clients tend to like to hear them. People like numbers. (Just think of how they latch on to stupid concepts like the ‘3 click rule’.) But you shouldn’t really be using numbers like this in your reporting anyway. After all – as we talked about in part one – you’re not working with a statistically significant sample; you’re probably talking about eight, maybe twelve people. Your percentages, no matter how popular, are not particularly meaningful, and they help fuel the perception that research is about numbers like this when, as we agreed earlier, it is really all about depth of insight – quantitative research is what you do if you want to pull out fancy percentages.
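Just to put a rough number on that – and this is purely my own back-of-the-envelope illustration, with a made-up 8-out-of-10 result rather than data from any real study – here’s a quick Python sketch showing how wide the 95% confidence interval around ‘80% of people’ really is at these sample sizes:

    # A rough, hypothetical illustration (the 8-out-of-10 figure is made up):
    # how wide is the 95% confidence interval around a percentage drawn from
    # a typical qualitative sample? Uses the standard Wilson score interval.
    import math

    def wilson_interval(successes, n, z=1.96):
        p = successes / n
        denom = 1 + z ** 2 / n
        centre = (p + z ** 2 / (2 * n)) / denom
        margin = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
        return centre - margin, centre + margin

    low, high = wilson_interval(8, 10)
    print(f"8 of 10 participants = 80%, but the 95% interval runs from {low:.0%} to {high:.0%}")
    # Prints roughly 49% to 94%, which is far too wide to hang a headline percentage on.

Even at twelve participants the interval stays enormous – which is the whole point: at qualitative sample sizes those tidy percentages just aren’t doing any statistical work.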

So, write yourself a script and use it for inspiration, reminders and structure – but don’t be constrained by it, and do let the content of your interviews guide the questions you ask and what you report.

Which makes me think… perhaps we need to talk some about how to ask good questions whilst interviewing… soon, I think.

(Brief apologies for the delay between parts 2 and 3 – I had to do some holidaying in Italy. Briefly back in London before flying out to UX Week tomorrow morning. Are you having a ridiculously busy August too?!)

links for 26 July 2007 – Resources for UX Freelancing

Embracing the Un-Science of Qualitative Research Part Two – Ever-Evolving Prototypes are Ace

So, earlier we were talking about whether you can or should attempt to make qualitative research more scientific, and about the three ways you might go about doing this, which are to:

  1. Use a relatively large sample size (deconstructed in Part One)
  2. Ensure that your test environment doesn’t change (which we’ll talk about now)
  3. Ensure that your test approach doesn’t change

One of the fundamentals of quantitative research is its systematic nature. It’s about measuring stuff. And you don’t want that stuff to change as you’re measuring it, for a number of reasons – not least because it makes it very difficult to plot on a graph :)

Qualitative research, on the other hand, is not about numbers so much. It is about the depth of insight that you can gain from having much greater and more flexible access to your research subjects. As you are seeking insight, not statistics, it matters far less whether whatever you are testing, say a prototype, changes a bit throughout the course of the study.

In my experience, some of the most fruitful research has occurred when the prototype has changed quite a bit from interview to interview – and sometimes even within an interview.

Here’s how it works (again, using the example study I described in part one: a lab-based combination of interview and a wee bit of usability testing, intended to ensure that my client’s proposition is sound, that it is being well communicated and that users understand what the service is and how it works, and to weed out any critical usability issues).

On one side of the Big Brother mirror you have the researcher and the participants (sometimes known as ‘users’. Urgh). On the other, secret side of the mirror you have your client, including a member of their technical team (or, perhaps, a gun Visio or OmniGraffle driver, depending on what stage your prototype is at), laptop at the ready.

As you proceed through the first couple of interviews, some really obvious stuff emerges. These are the things that you don’t really notice because you get too close to the project and you develop a kind of ‘design blindness’. Or they’re things that you never really thought about because you were concentrating on other more important or complex aspects of the design.

These findings are almost always crystal clear – the change required is obvious and rarely time-consuming. What are your options? You can:

  1. note the problem and spend time on it as it occurs in every single interview you perform in that study, or
  2. fix the problem and spend your valuable research time learning about things you don’t already know.

OK, so I might have biased that survey just a little, but the answer seems obvious to me. You get so little time to spend with actual end-users – why spend it learning the same thing five times when you could be getting more and richer insight, and exploring the more complex problems?

And you can use this same technique to explore the more complex problems that emerge from qualitative research.

With complex problems the solution is not so clear-cut, so you often end up doing some improvised A/B testing, exploring a number of potential solutions with participants to see which is most effective – or, at least, which seems the right one to explore further.

(Interestingly, the Wikipedia entry I linked to there suggests that you need a statistically significant audience for A/B testing to be effective… of course, I disagree.)

This approach is more demanding on the researcher than a typical ‘fixed script, fixed environment’ approach to testing. I can never be quite sure what will be on a page when it loads, whether the navigation items have changed a bit and need exploring, or whether I’ll have to completely change the flow of my script because the part of the prototype we were going to explore next is a bit broken and we’ll need to come back to it later.

These extra demands are well repaid, though, by the forward leaps you’ll see your client take even before the research is complete – and well before your analysis is done and presented. Not only that, but the value of undertaking the research is well and truly demonstrated even as it is being performed – which is gratifying to you and often very exciting (or relieving) for your client.

So, again I say – if it’s numbers you need – go do a survey. Or use one of the great online remote usability testing tools, or any number of excellent quantitative methods. In their own ways, they are fantastically valuable.

But if you want to quickly weed out problems with your site/application/prototype, then I recommend you consider this technique of the ever-evolving prototype. It will certainly keep you awake as you’re researching, and you’ll get rapid return on investment and excellent bang for buck as far as research techniques go.

What say you?