Archive - UCD process

dConstruct – Questions on Agile UCD

[Image: Agile UCD cycle diagram]

I had the opportunity to present a talk on the power of iterative methodologies over waterfall at dConstruct last week (a.k.a. Waterfall Bad, Washing Machine Good). This is an extended remix of a talk I gave very briefly at the IA Summit earlier this year (I also presented a similar talk at UX Week recently).

I will put my slides up as soon as I can get the file size low enough to get them loaded on SlideShare – will need to get in and optimise those post-it note photos I’m afraid! In the meanwhile, here’s the slide that most people have expressed interest in!

Not to make any excuses, but this is a really tricky talk to give. Getting to the meaty bit – the bit where the UCD and the Agile come together – requires a base understanding that waterfall (or sequential) methodologies suck, and that iterative, incremental and collaborative/cross-disciplinary methodologies are the way forward. It also requires that you have a working knowledge of Agile.

Lots of people assume that everyone has both of these things, but I can assure you they don’t. If everyone *knew* this, then there’d *be* no waterfall shops out there, right?!

Nonetheless, the most interesting part of this talk is getting down to tin tacks on HOW we can give Agile more UCD flavour – how to actually integrate UX activities into a sustainable Agile pattern. And this can be pretty tough.

I have to say that while I’m 100% committed to the overall idea of Agile UCD being the methodology of choice, I’m only more or less committed to some of the assertions I made in my talk… (is this naughty? I just wanted to take a position for the sake of conversation, and I had plenty of those post-talk!)

For example – one of the things I said was that I think Agile cycles (or sprints, or whatever you call them where you are) need to be longer than two weeks. That it takes more time than that to fit UX activities into a cycle. I *think* I believe this is true, if you’re going to do solid UX work in an Agile environment as well as be there in a ‘paired’ environment getting the functionality built. More than one person suggested to me later that perhaps the same end could be achieved by just doing less in a two-week sprint. Perhaps.

I don’t think there are any clear rules on this yet… at the end of the day, what I *do* want from this discussion is for UX practitioners to feel comfortable engaging with the rest of the Agile team on how long a cycle should be, and not to back away from the discussion when rebuffed with a simple ‘well, that’s the rules’ or ‘that’s just the way Agile works’ argument. Again – if your team doesn’t do this, then consider yourself fortunate. There are lots of designers out there trying to eke their way into an established Agile environment who come up against this kind of resistance all the time.

There was another series of questions that I got from LOTS of people – and very valid and important questions they are, too… I’m not sure I have the best responses, so I’d love to get feedback from you on what you’d do and say.

The questions are around how we sell Agile UCD to our organisations and to our clients, and how we manage the fact that this Agile approach takes away the security blanket – knowing what you’ll get and when you’ll get it – that waterfall apparently provides.

When I was talking with people about this at dConstruct, I found myself saying two things repeatedly, and they seemed to elicit nods and smiles and general agreement/enthusiasm.

The first is that whilst in waterfall you might know at the outset what you’re getting, you have no reason for confidence that you’re getting the *right* thing at the end of the process. And in waterfall there’s no real mechanism for changing your mind or learning from the project itself as you go. You can very easily end up with the wrong thing, and then spend a whole bunch of time and money changing everything around so that it’s right – and you’re changing things at the most difficult (read: most expensive) time to be changing things!

So, what this means is that the ‘security’ that waterfall appears to offer is, in many cases, false. And although the budget for the project may be fixed at ‘x’, no one knows what work will be required once the project is completed to get it into its correct state – or the cost to the business of the project not being as good as it could have been.

Make sense?

The second point is that we can ‘charge out’ Agile in a very different way to waterfall. Although it would be nice, your client doesn’t *need* to commit to months and months of development work in an Agile environment. They don’t need to commit to the full budget. In fact, they *could* potentially just commit from cycle to cycle. Your company/department can calculate the cost of a cycle, then negotiate with the client what work will be undertaken in that cycle… rinse, repeat, etc.
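
To make this concrete, here’s a rough back-of-the-envelope sketch of cycle-by-cycle costing in Python (the roles, day rates and cycle length are all invented for illustration – plug in your own):

```python
# Hypothetical day rates per role - purely illustrative numbers.
DAY_RATES = {"designer": 600, "developer": 550, "researcher": 580}

def cycle_cost(team, working_days):
    """Cost of one cycle: each team member's day rate times the cycle length."""
    return sum(DAY_RATES[role] for role in team) * working_days

# A three-week cycle (15 working days) with one of each role:
print(cycle_cost(["designer", "developer", "researcher"], 15))  # 25950
```

With a number like that on the table, the conversation becomes ‘what shall we do with this cycle?’ rather than ‘sign here for six months of work’.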

Of course, this is not really the ideal way to be running a business, and you’d hope that over time, as your clients got more and more clued in to the benefits of working Agile rather than waterfall, they’d be more willing to make longer commitments to the project, making your cash flow a little less precarious!

The fact is, though, you do need clients who are clued in to the benefits of Agile and Agile UCD for their projects. If they don’t get what’s in it for them (and there is plenty!) then there’s no way you’ll sell them on it. So, knowing those reasons yourself and then educating your clients is a big part of the process.

It’s not easy though, and we’re not going to be in a state where we can be all happily Agile UCDing tomorrow, next week or even next month.

It is, however, without a doubt the ideal way to run your projects. It’s going to require a lot of thought, a lot of work, a lot of trying things out, and a lot of educating our clients and other people in our organisations.

I think it’s going to be worth the effort!

In the meanwhile, let’s start sharing ideas and experiences.

How would *you* answer these questions? What questions do you have? What’s been your experience with Agile and Agile UCD?

dConstruct – Collaboration, Creativity & Consensus In User Experience Design Workshop

[Image: workshop in action]

I ran a workshop on Collaboration, Creativity & Consensus for User Experience Design at dConstruct last week. I had lots of fun and learned a lot as well – I know, it makes me sound as though I was a participant, not running the show! Funny how that works! (Hopefully the people who came along also had fun and learned stuff, then we’re all happy! I think they did.)

Finding good ways to collaborate and to work with a multi-disciplinary team is something that is very important to me. It makes the work much more fun and gives so much more insight.

I was really interested in the short discussion we had in our workshop about the importance of fun in project work. There was more or less a consensus amongst us that fun was more than just, well, fun – it was also really important in motivating the team to stay engaged with the project, to do great work and to be involved, among lots of other reasons. I’d be interested to hear your thoughts on whether (and if so, why) fun is important in project work.

We did three exercises throughout the course of the day, including a brainstorming session with a difference (brainstorming that actually works, and doesn’t deserve the bad reputation that bad brainstorming has given it!), a round of design consequences (which we’ve talked about here before), and a run through the KJ Method (whilst channelling Jared Spool).

Something that became clearer to me than ever is the importance of actually *doing* these exercises in order to learn them, and how incredibly hardwired our brains can be to do things the way we’re used to.

This was never more evident than when we did the brainstorming exercise. I gave some pretty simple instructions to the groups before letting them loose on the brainstorming activity. Admittedly, they were probably fairly different instructions to what everyone was used to when it came to brainstorming, but what followed was pretty extraordinary.

Rather than following these simple instructions, three of the four groups did their own thing, which turned out to be more or less the same thing – rather than using the techniques we’d discussed, which are designed to open up the idea-generation process, they each proceeded to ‘lock down’ the process by creating a list of things that the device (oh, OK, it was an iPhone!) could and couldn’t do. There was this driving need to ‘lock down’ the environment in which the ideas could be generated – not particularly conducive to productive brainstorming!

As it happened, this meant I had to go around to each group and suggest that, just for fun, they gave the rules we’d discussed a go – and when they did, the creativity and ideas started flowing! It was a real insight not only into the power of brainstorming well, but also into our own natural desire to get a handle on things, to keep things tight, even when this is potentially detrimental to the activity we’re trying to undertake. I’m pretty sure that without actually seeing this in action and experiencing it for yourself, the lesson is nowhere near as powerful!

There were some really challenging questions raised during the workshop, some of which I’m sure I don’t know the answer to yet (if there is one!). A lot of these were related to how we can bring these kinds of collaborative and creative activities into a workplace that doesn’t naturally embrace them – or worse, where these kinds of activities are looked on as ‘not real work’, or where people pride themselves on working independently.

This can definitely be a tough situation, and I guess my first tip would be to try to make sure you work in a place where collaborative and creative work is embraced! That isn’t going to work for everyone though, so here are some of the tips that I offer:

  • Be brave. Those people who think that collaborative and fun activities are child’s play and don’t contribute meaningfully to ‘proper work’ often have a talent for making you feel a little silly for suggesting these activities. Don’t let this put you off – press on regardless and have confidence in your techniques!
  • Be methodical. It is actually pretty easy to waste time on these kinds of activities… this is probably why lots of people are pretty cynical about them – they’ve had their time wasted before. Make sure you know WHY you are doing this activity, and WHAT you’re going to get out of it. If it has a clear purpose and outcome then it is more likely to be successful and people are more likely to give it a go.
  • Be prepared. These activities don’t run themselves, and most of them require some time, effort and careful thinking to ensure that they are well prepared and run smoothly. Don’t expect to just ‘wing it’ in these sessions. Don’t risk wasting people’s time. Make sure you know what you’re going to do, and if you haven’t done it before, consider a pilot run-through before the ‘live’ workshop. Have your shit together.
  • Let your work speak for itself. The absolute best thing you can do is to run a highly productive and fun workshop in your organisation and to let the work speak for itself. People enjoy themselves in these sessions – if they feel like they’re getting good results, then they’re even happier. Word will spread and resistance should gradually die down. If it doesn’t… change jobs! :)

Thanks to everyone who participated in the workshop! With any luck I’ll get a chance to do some more of these in the near future – they’re lots of fun and give you some really great tools for bringing your team and maybe even your organisation together around a project.

Embracing the Un-Science of Qualitative Research Part Three – Improvising is Excellent

So, recently we’ve been talking about Qualitative Research and how it’s not so scientific, but that ain’t bad.

We identified three ways that you *might* make Qualitative Research more scientific and have been pulling those approaches apart. They are to:

  1. Use a relatively large sample size (which we destroyed here)
  2. Ensure that your test environment doesn’t change (which was shown to be foolish here)
  3. Ensure that your test approach doesn’t change (which we’ll take down now).

So, one of the first things you learn when you come to qualitative research, particularly usability testing, is to write a test script. A test script is good because you’ll be spending an hour or so with each person and you need to have something to prompt you to make sure you cover what you need to cover, and to help ensure you have a good structure to your session.

But that is how scripts are supposed to be used – as a guide. You shouldn’t use them literally, as a script! And you should feel confident and comfortable deviating from the script at the right times.

When are the right times to deviate from the script? I think there are two key times.

The first is when you already know what the answer to your question will be – in which case there is very little reason to ask it. Sometimes it is helpful to have an array of people responding in the same way to the same task or question – particularly if your client is attached to a bad idea for some reason; repetition can help bring them around. Most of the time, though, you’re just wasting valuable research time covering old ground when you could be covering new.

Very often it’s not until the first one or two research sessions that some issues become blindingly obvious. You wonder why you didn’t pick them up before – but that’s why we do this kind of testing/research. If you’re not updating your prototype (as recommended in Part Two), then you should update your script. Don’t cover old ground for no good reason; research time is too valuable for that.

The other main reason for deviating from the script is when the person you’re interviewing says or does something really interesting. Your script tries to anticipate what people’s reactions might be, up to a point – but the point of doing this research is to learn things you didn’t know before, and sometimes what you thought you’d find and what you actually find are very different from one another. And this is great! It means you’re doing good research. (It’s alarmingly easy to find the answers you want to find by researching poorly.)

If you’re interviewing someone and they say something interesting and perhaps unexpected – follow it! This is potentially research gold. Don’t let sticking to a script stop you from following this gold. You may, in fact, want to alter your script for future interviews depending on what you discover here.

Of course, this means that when it comes time to do your report, you won’t be able to say things like ‘80% of people said this’ or ‘only 10% of people did that’. People do like to say those kinds of things in their reports and, of course, clients tend to like to hear them. People like numbers. (Just think of how they latch on to stupid concepts like the ‘three-click rule’.) But you shouldn’t really be using numbers like this in your reporting anyway. After all – as we talked about in Part One – you’re not working with statistically significant numbers; you’re probably talking about eight, maybe twelve people. Your percentages, no matter how popular, are not particularly meaningful, AND you are helping to fuel the perception that research is about numbers like this when, as we agreed earlier, qualitative research is really all about the depth of insight – quantitative research is what you do if you want to pull out fancy percentages.
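
If you want a feel for just how shaky those percentages are, here’s a quick sketch (using a standard Wilson score interval – my own illustrative numbers, nothing from an actual study) of what ‘75% of participants’ really tells you when you’ve only spoken to eight people:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for an observed proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 6 of 8 participants did the thing - that's '75%' in the report...
low, high = wilson_interval(6, 8)
print(f"75% of 8 participants -> plausibly anywhere from {low:.0%} to {high:.0%}")
# -> plausibly anywhere from 41% to 93%
```

A range that wide is no basis for a headline number – which is exactly the point: spend your session time chasing insight instead.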

So, write yourself a script and use it for inspiration, reminders and structure – but don’t be constrained by it, and do let the content of your interview guide the questions you ask and what you report.

Which makes me think… perhaps we need to talk some about how to ask good questions whilst interviewing… soon, I think.

(Brief apologies for the delay between parts 2 and 3 – I had to do some holidaying in Italy. Briefly back in London before flying out to UX Week tomorrow morning. Are you having a ridiculously busy August too?!)

Embracing the Un-Science of Qualitative Research Part Two – Ever-Evolving Prototypes are Ace

So, earlier we were talking about whether you can or should attempt to make qualitative research more scientific, and that there are three ways you might go about doing this:

  1. Use a relatively large sample size (deconstructed in Part One)
  2. Ensure that your test environment doesn’t change (which we’ll talk about now)
  3. Ensure that your test approach doesn’t change

One of the fundamentals of quantitative research is its systematic nature. It’s about measuring stuff. And, you don’t want that stuff to change as you’re measuring it for a number of reasons – not the least of which being that it makes it very difficult to plot on a graph :)

Qualitative research, on the other hand, is not about numbers so much. It is about the depth of insight that you can gain from having much greater and more flexible access to your research subjects. As you are seeking insight, not statistics, it matters far less whether whatever you are testing, say a prototype, changes a bit throughout the course of the study.

In my experience, some of the most fruitful research has occurred when the prototype has changed quite a bit from interview to interview – and sometimes even within an interview.

Here’s how it works (again, using the example study I described in Part One: a lab-based combination of interview and a wee bit of usability testing, intended to ensure that my client’s proposition is sound, that it is being well communicated, that users understand what the service is and how it works, and to weed out any critical usability issues).

On one side of the Big Brother mirror you have the researcher and the participants (sometimes known as ‘users’. Urgh). On the other, secret side of the mirror you have your client, including a member of their technical team (or perhaps a gun Visio or OmniGraffle driver, depending on what stage your prototype is at) with laptop at the ready.

As you proceed through the first couple of interviews, some really obvious stuff emerges. These are the things that you don’t really notice because you get too close to the project and you develop a kind of ‘design blindness’. Or they’re things that you never really thought about because you were concentrating on other more important or complex aspects of the design.

These findings are almost always crystal clear – the change required is obvious and rarely time-consuming. What are your options? You can:

  1. spend time on, and note, the problem as it occurs in every single interview you perform in that study, or
  2. fix the problem and spend your valuable research time learning about things you don’t already know about.

OK, so I might have biased that survey just a little, but the answer seems obvious to me. You get so little time to spend with actual end users – why spend it learning the same thing five times when you could spend it getting more and richer insight, and exploring the more complex problems?

And you can use this same technique to explore the more complex problems that emerge from qualitative research.

With complex problems the solution is not so clear-cut, so this often means you end up doing some improvised A/B testing, where you explore a number of potential solutions with participants to see which is most effective – or, at least, which seems to be the right solution to explore further.

(Interestingly, that Wikipedia entry I linked to suggests that you need an audience of statistical significance for A/B testing to be effective… of course, I disagree.)

This approach to research is more demanding on the researcher than a typical ‘fixed script, fixed environment’ approach to testing. Using this approach, I can never quite be sure what will be on a page when it loads, or whether the navigation items might have changed a bit and I need to explore that, or whether I need to completely change the flow of my script because the part of the prototype we were going to explore next is a bit broken and we’ll need to come back to it later.

These extra demands are well repaid, though, by the forward leaps you will see your client take even before the research is complete – and well before your analysis is done and presented. Not only this, but the value of undertaking the research is well and truly demonstrated even as it is being performed – which is gratifying to you and often very exciting (or a relief) to your client.

So, again I say – if it’s numbers you need – go do a survey. Or use one of the great online remote usability testing tools, or any number of excellent quantitative methods. In their own ways, they are fantastically valuable.

But if you want to quickly weed out problems with your site/application/prototype, then I recommend you consider this technique of the ever-evolving prototype. It will certainly keep you awake as you’re researching, and you’ll get rapid return on investment and excellent bang for buck as far as research techniques go.

What say you?
