How to approach designing a user test for an online course? A story of my failure so far and a request for help

TL;DR: Are there any best practices for testing online course material specifically? The bulk of my experience is with e-commerce websites and WYSIWYG tools.

I’m in the position of needing to gather feedback on an online course. The course is fairly long and in a new format for our business. We estimate it’s at least a 12-hour commitment over several weeks between reading, answering assignment questions, and attending a live video conference.

Our first test was straightforward: we offered course certificates for free if testers would give feedback on the course through some post-course surveys and attend two video calls.

This failed spectacularly. Few people logged in at all. Roughly 4 of 30 testers made enough progress through the course to be eligible for credit. Technically, testers have another week to finish, but I can already tell that most won’t. I’m going to send a short survey to those who didn’t complete the course to find out what they feel stood in their way.

I suspect the design of the test is to blame: it expects a lot of commitment from testers for a payout they only get if they complete the course. However, I don’t have a huge budget to pay testers, and I don’t think paying a tester to complete the entire course is necessarily the way to go either.

Ideas I’ve had to fix it:

  • Break the course into smaller parts and test the pieces individually, compensating testers for their time with free access to other (already tested) courses

  • Still have testers test the whole course, but compensate them for their time with free access to other courses instead of free credit for the course they’re testing; that way they get paid whether or not they finish

  • Pay testers actual money and stop being a cheapskate (this will be a fight with the bosses, but I’m somewhat used to fighting them for things)

  • Recruit far more testers and expect roughly 85% drop-off, since people drop out of online courses all the time (rough recruiting math in the sketch below)
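
For that last option, here’s a minimal Python sketch of the recruiting math. The 85% drop-off rate is just the estimate from the bullet above (roughly in line with our first test’s 4/30), and the target of 10 completions is purely illustrative:

```python
import math

def recruits_needed(target_completions: int, dropout_rate: float) -> int:
    """How many testers to recruit so that, at the assumed dropout rate,
    roughly target_completions of them finish the course."""
    completion_rate = 1.0 - dropout_rate
    return math.ceil(target_completions / completion_rate)

# Assumptions: ~85% drop-off and a goal of 10 completed test runs.
print(recruits_needed(10, 0.85))  # -> 67
```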

Anyway, I’d love to hear your thoughts on the whole thing.

I think all of those ideas are potentially good ones, but the one that has worked for me in the past is to find testers who are literally part of your core audience and let them take the course for free in return for feedback. It seems to me that the people you’re using as testers aren’t the right audience: they see no benefit in doing the course.

Does that make sense?

Thanks for your advice. It makes perfect sense, and in fact that’s exactly what I thought I was doing.

We were pretty certain we had an interested core audience because we recruited testers from existing customers. However, this course is in a different format (longer and more interactive) than our other courses, and I admit it only applies to a segment of our customers. So it seems (I’m assuming here; I haven’t gotten the post-course surveys back yet) that testers were either blindsided by the amount of commitment needed to test, or they realized parts of the course weren’t relevant to them.

We had a ton of interest in participating in the test, so maybe we can try again and be more selective about who is chosen.

When you’ve run course tests before, how have you handled course credit for people who didn’t finish? We gave testers a month-long deadline to finish. Any opinions on that?

OK, interesting. Then I think you’ve kinda got your answer. If your test group is your audience and they aren’t completing the testing, then something isn’t working. I’d think about going back to the people you tested with and asking them to complete a short survey to find out why. Was the course content not applicable? Was it hard to follow? Did they need more time?

It depends on what expectations were set. What did you tell them ahead of time regarding credit for partial completion?