Usability Testing Possibilities

Where I work, we were having difficulty explaining to our fellow staff members why we should do remote unmoderated testing in addition to moderated testing. In response, I wrote a report titled “What Usability Testing Method Should I Use?”

I’m posting it here in hopes that someone finds it useful. If you have any thoughts or anything to add, I’m happy to adjust the report. Iterate, iterate, iterate. Right?

This is very thorough! I’d never really thought much about it, but you’re right that the results of moderated testing can be warped by the moderator. I like to think I’m being objective when I compile recommendations based on tests I have facilitated, but who am I kidding? We did quite a bit of unmoderated remote testing at my previous job, in addition to local moderated testing, and it was good to have the balance. Sometimes the outcome of a test was voided because the participant missed the point completely, but unmoderated tests are inexpensive enough through sites like usertesting.com, which you’ve mentioned, that it wasn’t a big deal. Perhaps that’s worth adding to your summary: a combination of methods will give you a more balanced spread of data.

Nice write-up!

We do mostly moderated testing. I would like to do more unmoderated tests, but are there other criteria you have for which types of projects are good for unmoderated vs. moderated? Unmoderated tests still capture video and audio, right? Because if not, and you only get the screen capture, I feel like so much information would be missing. Do unmoderated tests work better when you have straightforward questions or wonderings? I know I just asked a bunch of questions in this post :) But this is an area where I am very interested to see if I could move in this direction with our testing at the State of MI.

I’ve been in a situation, early in my career, where we ran some unmoderated tests on a prototype before any moderated testing. The tests failed spectacularly. The participant completely missed the point, went down a rabbit hole, and got preoccupied with parts of the prototype that were features of the prototyping software, not actually related to our product.

This was a good thing. It made us realise that we had totally missed the mark with the direction of the product and had made a bunch of assumptions that weren’t true. It also highlighted that we needed to refine the scope of our test and make prototypes that were more contained and focussed—and to use better prototyping software that didn’t get in the way. It cost us less than $50 to learn this—probably cheaper than running a local moderated test. It was a cheap way to fail and learn.