Why collaborative research analysis rocks out.

(Photo: dConstruct workshop – affinity sorting)

(a quick definition, given that I’ve discovered that English is at least three separate languages: to rock out = to perform exceptionally well and give great satisfaction, as, say, a rock band might ‘rock out’ on stage.)

These days when I’m doing any kind of user research, rather than going to my secret consultant place and doing that consultant magic that results in a presentation of research findings, I much prefer to get into a big room with clean walls and several hundred sticky notes and my clients/project team, and to work out the research findings collaboratively.

Am I just being lazy and getting my clients to do my work? Well, kind of… but with good reason!

Why do it? Well, there are a few reasons.

Firstly, to combat what I think is probably the single most frustrating outcome of a research project – having your results either not accepted or immediately shelved, meaning that all of your work has come to pretty much nothing. By involving your clients in the process, they have a stake in defining exactly what the findings are, what is important, what is not. When you’re presenting the findings, you (or even better, the project team) are presenting the *team* findings, not just your own.

Secondly, to educate your client. To help them understand that there is actually a rigorous process between the interviews or focus groups (or whatever your research activity is) and the moment the findings magically appear in the presentation. To allow them to use the tools themselves when it is appropriate.

Thirdly, to get better results. Having your client with you will ensure that you apply appropriate rigour in reviewing research data. Not to say you don’t do this by yourself as well, but it’s great to have the extra incentive.

Not only that, but in this situation three or four or five heads definitely are better than one. Take, for example, this study that Jared Spool shares in his article on the KJ Technique (which I’m referencing all the time!):

Back in the late 1970s, the US government commissioned a study to look at effective group decision making. In the study, they asked 30 military experts to study intelligence data and try to reconstruct the enemy’s troop movements.

Each expert analyzed the data and compiled a report. The commission then “scored” each report on how well it reported the actual troop movements. They found that the average military expert only got 7 out of 100 elements correct.

Each expert then reviewed all of the other experts’ reports and rewrote their initial assessment. The average accuracy for these revised reports was 79 out of 100.

What was different between the first report and the second? The experts didn’t have any new information. All they had were the perspectives of the other experts. When they added those perspectives to their own, their accuracy increased ten-fold.

It’s been my experience that if you can get your project team members (and their associated and diverse expertise) involved in the research analysis process, then you will most definitely get more accurate and more useful research findings.

So, how do you do it?

I’m sure there are a whole bunch of ways to do collaborative research analysis, but I’ve had the most success with the following approach.

Firstly, encourage as many team members as possible to observe the research sessions. Give them sticky notes and markers, give them the rules for writing sticky notes (one concept per sticky, clear handwriting in capital letters) and some ideas about what kind of things should go onto the sticky notes. Don’t worry about the fact that you’ll have duplicates. Get them to write as many stickies as they can.

Then, when it comes time for analysis, you want a big room with lots of clean wall space. Plaster the walls with white or brown paper (whatever is easiest to get hold of) so you can move the stickies around en masse with ease. Then it’s time to get stuck into the process.

  1. start by defining the research question(s) – you should have done this before you undertook the research so this should just be a refresher. I like to get them written up and positioned somewhere highly visible in the room. This is what we’re trying to discover, the questions we’re trying to answer. They help maintain our focus.
  2. do a large scale affinity sort (follow steps 4, 5 and 6 from the KJ Technique). I know that this process looks completely chaotic at first… it is. Trust the process though, it actually does work. What happens is that you end up with lots of big groups with very vague names and some duplicates around the room. After you’ve done the very first sort, pick a big group and start dissecting it – look for groups within groups, and make sure that the group labels are actually meaningful in relation to your research questions. This is the tough part – you need to keep driving the group to keep seeking themes and meanings within the groups… and to sort and re-sort, and have lots of long, pedantic discussions – until finally the room full of stickies is completely sorted. (You can deal with the duplicate issue now by sticking duplicates one on top of the other so that they are not over-represented within groups).
  3. prioritise your findings. As a group, review all of the findings that you’ve come up with (each group is now a ‘finding’), and start grouping your groups together based on their relevance to your research questions. You might have meta-group headings something like ‘Interesting but out of scope’, ‘In Scope – High Priority’, ‘In Scope – Low Priority’ and so on.
  4. then finally, go back to your research questions and work out what you’ve found. Based on the research you’ve done in this project, what are the answers to your questions?

Be sure to photograph all of your work. Then, instead of the dreaded task of writing a ‘research report’, your job is to gather all of this information into a digestible format for the team to use going forward.
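If it helps to picture the shape of what you walk away with, here’s a rough sketch (in Python, purely for illustration – none of this is part of the workshop itself, and all the priorities, findings, stickies and counts are invented) of how the sorted, prioritised wall might be captured digitally and turned into that digestible summary:

```python
# Purely illustrative: one way to capture the sorted sticky-note wall digitally.
# All priorities, findings, stickies and counts below are made-up examples.

from collections import Counter

# Each priority bucket holds findings (the group labels from the affinity sort);
# each finding holds its stickies, with duplicates stacked as a count so they
# aren't over-represented.
findings = {
    "In Scope - High Priority": {
        "Users can't find the search box": Counter({
            "SEARCH BOX HARD TO FIND": 4,       # four duplicate stickies stacked together
            "EXPECTED SEARCH AT TOP RIGHT": 2,
        }),
    },
    "In Scope - Low Priority": {
        "Navigation labels read as jargon": Counter({
            "DIDN'T UNDERSTAND 'SOLUTIONS' MENU": 3,
        }),
    },
    "Interesting but out of scope": {
        "Requests for a mobile version": Counter({
            "WANTS TO CHECK ORDERS ON PHONE": 2,
        }),
    },
}

def summarise(findings):
    """Turn the grouped, prioritised stickies into a short plain-text summary
    the team can actually digest, instead of a long research report."""
    lines = []
    for priority, groups in findings.items():
        lines.append(f"== {priority} ==")
        for finding, stickies in groups.items():
            total = sum(stickies.values())
            lines.append(f"- {finding} ({total} observations)")
            for note, count in stickies.most_common():
                lines.append(f"    * {note} x{count}")
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(summarise(findings))
```

The photographs are still the source of truth, of course – this is just one way of getting the wall into something you can circulate to the team.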

And, of course, because they’ve actually been involved in the process, they’re much more likely to actually use it. Yay!

What do you reckon? Have you tried working like this? How did it go? Any other techniques that you’ve found work well? I’d be interested to hear what you think! :)

8 thoughts on “Why collaborative research analysis rocks out.”

  1. Leisa, I’m with you all the way on the collaboration front. I no longer give research presentations. Everything these days is based on workshops with a very high level of collaboration. However, one constraint I continually battle against is other people’s time. In the vast majority of circumstances, it’s simply not feasible for all the project team to attend all (or even a majority) of the research sessions – whether they be lab-based or in the field. And this seems to be a critical factor in the success of your collaborative analysis.
    I have found collaborative analysis to work brilliantly when analysing data from one research participant, e.g. a small team of 3 analyse their observations after a day shadowing someone in the field. However, for higher level analysis across many research participants, an issue I’ve experienced is that when people have only seen a subset of the research sessions, their analysis is heavily skewed towards their own observations and experiences. To be honest, I’m also guilty of skewing when I’ve done field research as part of a team.
    So I tend to turn up to the workshop with much of the analysis done, and use storytelling techniques amongst participants where they share the research findings with each other. Everyone is on a path of discovery and everything is very collaborative. I’ve found this to create significant buy-in.
    My conclusion is that the research analysis is imperfect, but that’s OK because I have traded perfection for project team buy-in. And it didn’t take huge amounts of people’s time.

  2. @Paul – yeah, good point. The time-consuming aspect is tricky. I have found it quite successful, though, to try to get as much of people’s time as I can but to let them come and go (to meetings etc.) as they need to. You don’t actually need to have everyone in the room all the time to get the value from the approach – although getting as much time from the core stakeholders as you can is great.

  3. @Chris – I think the only thing we’re missing is a stopwatch and some kind of a leader board to see who can get their research findings in the least possible time.

    Not that it’s a competition, of course, and research analysis tends to take the time that it takes (and shouldn’t be rushed)… but I think I did forget to mention that the other great advantage of this approach is that it does tend to get you to your implementable results much more quickly :)

  4. […] With a headline like Why collaborative research analysis rocks out it was no surprise I found Leisa Riechelt’s recent blog made very interesting reading. Leisa, originator of the wonderfully evocative phrase Ambient Intimacy, and all round sticky note queen (3M should sponsor her) argues thusly: These days when I’m doing any kind of user research, rather than going to my secret consultant place and doing that consultant magic that results in a presentation of research findings, I much prefer to get into a big room with clean walls and several hundred sticky notes and my clients/project team, and to work out the research findings collaboratively. […]

  5. Leisa,

    While looking at your description and the original KJ-Technique post, the notion of card-sorting came to mind – is there enough similarity between the large-scale affinity sort that you mention, and card-sorting, to warrant any further thought? A couple of your other commenters have mentioned the difficulty that people have with freeing up enough time to participate. And I thought that an aspect of that might be the difficulty with getting people to free up time *at the same time* – corporate diary-making is the worst task in the project management lexicon! This triggered a thought about on-line card-sorting tools, and whether anyone had tried using any of those to spread the time-burden a bit. Then again, it would be handy if the tools could be a bit more social… surely a start-up somewhere is already working on this ;-)

    But I meant to say first (sorry!): thanks for doing this excellent review and pointing at the original article. Perhaps pure science might suffer a little, but the advantages for translating research into action far outweigh such considerations.
