(A quick definition, since I’ve discovered that English is at least three separate languages: to rock out = to perform exceptionally well and give great satisfaction, as, say, a rock band might ‘rock out’ on stage.)
These days when I’m doing any kind of user research, rather than going to my secret consultant place and doing that consultant magic that results in a presentation of research findings, I much prefer to get into a big room with clean walls and several hundred sticky notes and my clients/project team, and to work out the research findings collaboratively.
Am I just being lazy and getting my clients to do my work? Well, kind of… but with good reason!
Why do it? Well, there are a few reasons.
Firstly, to combat what I think is probably the single most frustrating outcome of a research project – having your results either not accepted or immediately shelved, meaning that all of your work has come to pretty much nothing. Involving your clients in the process gives them a stake in defining exactly what the findings are, what is important, and what is not. When you present the findings, you (or, even better, the project team) are presenting the *team’s* findings, not just your own.
Secondly, to educate your client – to help them understand that there is actually a rigorous process between the interviews or focus groups (or whatever your research activity is) and the moment the findings magically appear in the presentation, and to let them use the tools themselves when it is appropriate.
Thirdly, to get better results. Having your client with you will ensure that you apply appropriate rigour in reviewing research data. That’s not to say you don’t do this by yourself as well, but it’s great to have the extra incentive.
Back in the late 1970s, the US government commissioned a study to look at effective group decision making. In the study, they asked 30 military experts to study intelligence data and try to reconstruct the enemy’s troop movements.
Each expert analyzed the data and compiled a report. The commission then “scored” each report on how well it matched the actual troop movements. They found that the average military expert got only 7 out of 100 elements correct.
Each expert then reviewed all of the other experts’ reports and rewrote their initial assessment. The average accuracy for these revised reports was 79 out of 100.
What was different between the first report and the second? The experts didn’t have any new information. All they had were the perspectives of the other experts. When they added those perspectives to their own, their accuracy increased more than tenfold.
It’s been my experience that if you can get your project team members (and their associated and diverse expertise) involved in the research analysis process, then you will most definitely get more accurate and more useful research findings.
So, how do you do it?
I’m sure there are a whole bunch of ways to do collaborative research analysis, but I’ve had the most success with the following approach.
Firstly, encourage as many team members as possible to observe the research. Give them sticky notes and markers, explain the rules for writing sticky notes (one concept per sticky, clear handwriting in capital letters), and suggest what kinds of things should go onto the notes. Don’t worry about duplicates – get them to write as many stickies as they can.
Then, when it comes time for analysis, you want a big room with lots of clean wall space. Plaster the walls with white or brown paper (whatever is easiest to get hold of) so you can move the stickies around en masse with ease. Then it’s time to get stuck into the process.
1. Start by defining the research question(s). You should have done this before you undertook the research, so this is just a refresher. I like to get the questions written up and positioned somewhere highly visible in the room: this is what we’re trying to discover, these are the questions we’re trying to answer, and they help maintain our focus.
2. Do a large-scale affinity sort (follow steps 4, 5 and 6 from the KJ Technique). I know this process looks completely chaotic at first… it is. Trust the process though; it actually does work. You end up with lots of big groups with very vague names, and some duplicates, around the room. After the very first sort, pick a big group and start dissecting it – look for groups within groups, and make sure the group labels are actually meaningful in relation to your research questions. This is the tough part: you need to keep driving the group to seek themes and meanings within the groups, to sort and re-sort, and to have lots of long, pedantic discussions, until finally the room full of stickies is completely sorted. (You can deal with duplicates now by sticking them one on top of the other so that they are not over-represented within groups.)
3. Prioritise your findings. As a group, review all of the findings you’ve come up with (each group is now a ‘finding’) and start grouping your groups together based on their relevance to your research questions. You might have meta-group headings like ‘Interesting but out of scope’, ‘In Scope – High Priority’, ‘In Scope – Low Priority’ and so on.
4. Finally, go back to your research questions and work out what you’ve found. Based on the research you’ve done in this project, what are the answers to your questions?
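The sorting and prioritising steps above can be sketched as a small data model – a hedged illustration only, not part of the actual workshop process; every function, theme and bucket name here is invented for the example:

```python
from collections import Counter

def affinity_sort(stickies, theme_of):
    """Cluster sticky-note texts into named groups ('findings').

    `theme_of` stands in for the team discussion that assigns each
    sticky to a theme; duplicates are stacked via a Counter so that
    repeated observations are not over-represented within a group.
    """
    groups = {}
    for text in stickies:
        theme = theme_of(text)
        groups.setdefault(theme, Counter())[text] += 1  # stack duplicates
    # Keep each distinct observation once per group.
    return {theme: sorted(notes) for theme, notes in groups.items()}

def prioritise(groups, priority_of):
    """Bucket findings into meta-groups such as 'In Scope – High Priority'
    or 'Interesting but out of scope', based on the research questions."""
    buckets = {}
    for theme in groups:
        buckets.setdefault(priority_of(theme), []).append(theme)
    return buckets

if __name__ == "__main__":
    # Toy data: two duplicate stickies collapse into one observation.
    stickies = ["CAN'T FIND SEARCH", "CAN'T FIND SEARCH", "LOGIN CONFUSING"]
    groups = affinity_sort(
        stickies, lambda t: "Navigation" if "FIND" in t else "Account"
    )
    print(groups)
    print(prioritise(
        groups, lambda g: "High" if g == "Navigation" else "Low"
    ))
```

Obviously the real work happens in the conversation around the wall, not in code – the sketch just shows why duplicates get stacked before groups are counted or prioritised.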
Be sure to photograph all of your work. Then, instead of the dreaded task of writing a ‘research report’, your job is to gather all of this information into a digestible format for the team to use going forward.
And, of course, because they’ve actually been involved in the process, they’re much more likely to actually use it. Yay!
What do you reckon? Have you tried working like this? How did it go? Any other techniques that you’ve found work well? I’d be interested to hear what you think! :)
My name is Leisa Reichelt. I am the Head of User Research at the Government Digital Service in the Cabinet Office.
I lead a team of great researchers who work in agile, multidisciplinary digital teams to continuously connect the people who design products with the people who will use them, and to support experimentation and ongoing learning in product design.
If you're interested in working with me or would like to talk more please email me