Card sorting in the initial stages of the project is a noble pursuit, in my opinion, and one that is bound to help you learn more about your users, how their heads work, and the problems they'll have with your site. Not to mention their ideas about what your content should be, how it should be organised, and what it should be called.
An IA Validation card sort happens a little way down the track when you think you know what your sitemap is going to look like, and what things are going to be called. You probably even have some draft wireframes that you’re not ready to commit to, but that you developed as you were thinking through the conceptual model for your IA and getting into the nitty gritty of the sitemap.
Once upon a time, I thought that a card sort at the beginning and another at the end of the IA scoping process was good practice. These days, I think the second user testing exercise needs to be something related to the wireframes… maybe paper-based prototypes (or maybe even interactive prototypes?!), but definitely something that puts your IA into a context… a context beyond a few titles on some cards, that is.
Perhaps this is because of the kinds of sites I seem to be working on these days: they tend to be less siloed and hierarchical, using contextual linking between sections, which means content doesn't *have* to live in one place. Perhaps it's that sometimes you *want* your users to 'explore' the site, rather than feel they understand it 100% at first glance.
Perhaps it depends on what kind of 'take away' you want from your testing experience. This probably depends a lot on whether you're the 'client' or the IA. The client, it seems, really wants to see users having no issues or questions or independent opinions regarding your IA. They want testing to say: yes, this approach is 100% correct, please proceed. They want nothing but good news.
The IA, on the other hand, should be interested in eliciting questions and alternatives, and in seeing which routes people prefer to take to content that may be accessible via multiple paths. Card sorting is good for this.
Which leads me to another thought – do you let your client watch your user testing? Do you think this is a good idea?
And while I’m at it – what are your thoughts on card sorting for IA validation?
Ah! And one more. Comparative testing. Does anyone have war stories about testing two distinctly different IA models? How did that go? Did you test both models on all of your users, or have two groups of users and test one model on each?