
cardsorting for validation: truth, dare or torture?

Recently I had cause to use a closed card sort with the objective of ‘validating’ a proposed Information Architecture model (and some labelling). Argh. I think I will do what I can to avoid that approach in the future.

Card sorting in the initial stages of a project is a noble pursuit, in my opinion, and one that is bound to help you learn more about your users, how their heads work, and the problems that they’ll have with your site. Not to mention their ideas about what your content should be, how it should be organised, and what it should be called.

An IA Validation card sort happens a little way down the track when you think you know what your sitemap is going to look like, and what things are going to be called. You probably even have some draft wireframes that you’re not ready to commit to, but that you developed as you were thinking through the conceptual model for your IA and getting into the nitty gritty of the sitemap.

Once upon a time, I thought that a card sort at the beginning and a card sort at the end of the IA scoping process was good practice. Now, to my mind, the second user testing exercise needs to be something related to the wireframes… maybe paper-based prototypes (or maybe even interactive prototypes?!), but definitely something that puts your IA into a context… a context beyond a few titles on some cards, that is.

Perhaps this is because the kinds of sites I seem to be working on these days are less siloed and hierarchical, relying instead on contextual linking between sections, which means that content doesn’t *have* to live in one place. Perhaps it’s that sometimes you *want* your users to ‘explore’ the site, rather than feel they understand it 100% at first glance.

Perhaps it depends on what kind of ‘take away’ you want from your testing experience. That probably depends a lot on whether you’re the ‘client’ or the IA. The client, it seems, really wants to see users having no issues, questions or independent opinions regarding the IA. They want testing to say – yes, this approach is 100% correct. Please proceed. They want to hear nothing but good news.

The IA, on the other hand, should be interested in eliciting questions and alternatives, and in seeing which paths people prefer to take to content that can be reached in multiple ways. Card sorting is good for this.

Which leads me to another thought – do you let your client watch your user testing? Do you think this is a good idea?

And while I’m at it – what are your thoughts on card sorting for IA validation?

Ah! And one more: comparative testing. Does anyone have war stories about testing two distinctly different IA models? How did that go? Did you test both models on all of your users, or have two groups of users and test one model on each?


15 thoughts on “cardsorting for validation: truth, dare or torture?”

  1. You don’t say why you had a bad experience. Care to elaborate on that?

    Meanwhile, although I can see downsides to closed card sorts, if you’re in the business of “validating IA”, are there really many alternatives? I know it sounds rather mercenary, but if you come away from a closed card sort with some data showing that 80% of users grouped the cards in the same way, then that’s your “validation”, no?
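
    By way of illustration only – a minimal sketch (in Python) of how that agreement figure might be computed, assuming you’ve recorded each participant’s closed-sort results as a simple card-to-category mapping. The cards, categories and results here are entirely made up:

    ```python
    from collections import Counter

    # Hypothetical closed-sort results: one {card: category} dict per participant.
    sorts = [
        {"Opening hours": "Visit us", "Annual report": "About us"},
        {"Opening hours": "Visit us", "Annual report": "News"},
        {"Opening hours": "Visit us", "Annual report": "About us"},
    ]

    def card_agreement(sorts):
        """For each card, return its most popular category and the share
        of participants who placed it there."""
        agreement = {}
        for card in sorts[0]:
            placements = Counter(s[card] for s in sorts)
            top_category, votes = placements.most_common(1)[0]
            agreement[card] = (top_category, votes / len(sorts))
        return agreement

    for card, (category, share) in card_agreement(sorts).items():
        print(f"{card}: {share:.0%} placed it under {category}")
    ```

    Any card that falls below whatever threshold you’ve agreed on (80%, say) is one whose placement users don’t agree on – which is about the closest thing to a pass/fail signal a closed sort will give you.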

  2. Hi Adipex. Listen, I hope you’re not going to start kicking up the stink you did that time at the UXNet thing. I don’t hate you or anything, but why is it that every time I post on somebody’s blog, you turn up and start spoiling it? I’ve had it up to here with all your aggressive questioning, OK?

  3. ROFL @ Jonathan

    that’s my most favourite way to deal with comment spam ever… start a conversation :)

    btw: I fully plan to elaborate on my bad experience with card sorting for validation, as per your comment above… have been snowed under with work! (eh. distracting me from my blog – how annoying!). More soon! :)

  4. Who has time for validation? ;-)

    I think that if you’ve got a higher-fidelity model of the site at that point, and you’re going to go to the trouble of recruiting participants, it would make more sense to test your prototype.

    Did you already do an initial card sort? If so, it seems like a lot of effort to put into taxonomy, especially if you listen to the “scent of info” people, who say it doesn’t matter as much as we might think.

    Lastly, with the card sort, you’re not really testing “the IA,” in its entirety, just the taxonomy. And if that’s what you want to do, that’s fine. Just seems a little tight in focus at that stage in the game.

  5. Lance, I think you’re getting two things confused here: design and the validation of a design.

    It’s probably not going to be possible to user test a prototype in a way that derives the kind of metrics required for a validation exercise. The people who pay for these things do so in an attempt to prove something statistically. Unless you’re the type of IA who gets the stopwatch out during user tests (and hey, I’m not knocking that!), it’s not really a very robust response to the brief.

    In this case (at least until Leisa tells us what it was that irked her), the closed card sort, or some derivative of it, is probably the best, and perhaps the only, method that clients will pay for, I’m afraid.

  6. ok. finally a moment to see if I can be a little clearer about what I found problematic with the closed card sort as ‘validation’.

    i think that the problem is that, by the time you’re confident enough with your sitemap to do a closed card sort, you’ve also started work on some of the interface/information design. So some of the decisions that you’ve made and documented in your sitemap have been informed by what you know about the interface design.

    So, lots of things are missing in the card sort – cross-referencing, information scent, any indication of what ‘content’ is associated with a label that you’ve previously viewed… that kind of thing. Consequently, you find yourself conducting/observing the sessions wishing you could pull out your wireframes (in as draft a state as they might be) and say ‘see, here – this is what you’re talking about, this is what you’re looking for!’ But, technically, being a card sort… you’re not allowed to do that!

    There was a bit of a discussion around this topic on the SIGIA-L list lately. Donna Maurer made a comment that she also suspected that closed card sorts perhaps weren’t a good method for validating IA because ‘information seeking and categorisation are very different from a cognitive perspective’. I think that kind of sums up what I’m trying to say (but oh so much more succinctly!)

    The other thing, too, is that, particularly if your client is in the process of ‘butt covering’ (which is why they want to do IA validation via card sort), they’re never going to want to see that you’re ‘learning things’, because that suggests that the IA isn’t perfect… all they want is a tick in the validation box…

    does that explain the irk factor more clearly?

    for me, i just feel frustrated that I could have learned a lot more from that experience if I’d not done a card sort. I would have liked to have introduced some scrappy wireframes instead/as well.

  7. Oh OK. Donna has been going on for a while now about how she hates card sorting for these reasons. It’s worth noting though that she says she used to do loads of them and describes how she did them in the most obviously bad way (from my point of view), so frankly I’m not surprised there’s some disillusionment.

    My take is that card sorts are good in the early, exploratory stages of projects as one of (and I stress ONE of) a number of methods to gain some insight into the way users might think about subject matter. Personally, I’d use them if I knew very little of the subject myself and needed some users to educate me. They are also good because they keep the client on board, show you some initial directions you can try out, and are easy to conduct and interpret (particularly if used for “validation”). The problems come, as you rightly point out, when you start trying to use the card sort to build navigation systems. So, er, don’t do that! All techniques fall apart (some utterly) when you push them too far. The trouble is you don’t know when they’ve gone over the edge until it’s too late.

    However, in this particular context – that of “validating” an IA – I think they have a different purpose: to demonstrate that at least a majority of users agree that they would put certain cards in certain groups. Yes, that’s not a good measure of much at all, but frankly I’ve not got the energy on most projects to argue the toss with the client if they’re adamant about it.

    “Yes, that’s not a good measure of much at all, but frankly I’ve not got the energy on most projects to argue the toss with the client if they’re adamant about it.”

    yep, i think that’s how I got myself into this situation in the first place, and this post was a little vent to remind myself that it was a bad idea (and, potentially, to warn others considering it).

    there’s been a little anecdotal evidence to show that this idea of ‘validating IA’ is beginning to get a bit of a foothold (where someone is contracted to show that an IA does or does not work effectively), so it’s also interesting in that context to consider what the appropriate approach to that scenario might be. (Although I guess it would depend a lot on what articulation of the IA you had available to use.)

  9. Oh good, I’m glad I said that somewhere ;)

    Jonathan – I don’t hate card sorting! And if you think I used to do loads of them (badly) and don’t do them now, you have either misinterpreted or put together patches of comments to make an incorrect story. I have *so* often said the same as you – that it is ONE input to an IA. And it is a good input.

    But I do think *closed* card sorting may be cognitively invalid as it is usually performed, and every time I say that I also say I’m going to do some proper reading to see whether it is true, which is more than anyone else has ever done.

  10. Sorry for misrepresenting you, Donna. You are right, I had formed that opinion from a few disparate comments. I now stand corrected.

    Like you and Leisa, I am somewhat frustrated by the lack of research into the efficacy of card sorting and its related techniques. I see that the venerable Eric Scheid said on SIGIA-L last week:

    “I’d like to see the results of a Card Based Classification Evaluation compared against the results of a Closed Card Sort, and both interpreted in light of the results of a Category Agreement Analysis.”

    I happen to work for an organisation that I think may well be able to conduct such a piece of research and make the results public. It might be time for me to suggest they award me a slice of their R&D budget for this (although the budget is currently zero, so that might take a while…).

  11. Jonathan – would you mind emailing me? (I can’t figure out who you are when you post anonymously ;) I’d like to have a chat with you about that idea. I am in a position to do something like this, make it public, and actually do it soon. Follow the link to my blog, where you can easily find my email address (no point putting another link out for spammers).
