What do you do when you run a study and you don’t know what the results mean?
I saw it happen and the experience stuck with me.
A long time ago, I worked with a team that wanted to test a new site navigation prototype: could users find what they were asked to find? Because they wanted to ensure ‘statistical significance’, they ran unmoderated tests with hundreds of users, so qualitative input was out of the question.
Neither variant A nor variant B showed a strong advantage. And when they examined the depths of the navigation tree, they saw users splitting off in all sorts of directions, plus some odd outliers that might have been a bug in the test or might have been genuine data.
Instead of reducing noise, the high volume of unmoderated sessions only created more confusion. They were hoping for a clear winner. They ended up with a lot of “whys” instead.
What did they learn?
More data is great, but you still have to ask ‘why’ if you want to understand user behavior. Consider hybrid platforms that pair unmoderated tests with user commentary, or two-part studies that run quantitative after qualitative.
Unmoderated tests are difficult to adapt mid-study when you encounter the unexpected. Start with moderated sessions if you feel any uncertainty, and invest in an unmoderated batch only once you know you need a large number of responses.
If and when you do pursue an unmoderated test batch, build in plenty of time to finesse and stress-test your prototypes so the study runs smoothly.
Prep your team for the possibility of an inconclusive result. Black-and-white methods like multiple-choice surveys make reports crisp and decisive, but they also force people into boxes and, depending on what you’re asking, may not actually answer your question.
Keep close track of how many variables you include: is there a risk they’ll muddle your results?
If you need help talking to users, we offer user research and usability testing as a part of our flexible programs. Book a consultation to see if there’s a fit.