Routine User Testing Best Practices

I’m currently consulting for an e-commerce client. I pursued some low-hanging fruit and completely destroyed some of their assumptions. (For example, proving an issue was more about information architecture than about the UI.) As a result, my client is looking to set up routine user testing to make sure they aren’t surprised again.

Has anyone helped a client set up routine user testing? In your experience, what are some best practices for setting it up? (e.g. testing the same things over and over versus writing a new script each time?)

Thanks so much!

I don’t have an answer to the question, but your lead-in has my ears perked up.

Can you expand more on your recent success? I’m curious as to how exactly you did it, for my own future reference.

Well, what I did was change the questions they were asking in their user testing. For instance, the questions they had been asking were centered on the UI. They were trying to decide between a “Quick Look” (a modal window displaying more information about a product) and other presentation methods.

I brought the questions back to basic e-commerce psychology and needs. One question I asked was whether the user had enough information to decide if they might be interested in a product. Another asked what kinds of information helped them decide: what kinds of product photos, what they would need to see on the product grid, and what they’d need in the “Quick Look.” The feedback we got indicated that users didn’t care HOW we presented the information, but rather WHAT information we gave them.

My client HAD been focused on the UI, but this clearly showed they needed to audit their microcopy, photography, and other information.

(I hope that isn’t too vague, I’m not entirely certain how much information I’m allowed to give about my client.)

Hi @sally – welcome.

I’ve invited a few people into this topic to give you some advice. Hang tight…

Hi @sally,
I haven’t helped a client design a routine testing procedure myself, but I have been involved in two main scenarios:

  • e-commerce platforms:

    • we tested all new features with qualitative methods (interviews, usability tests, and remote tests)
    • we tested all new features with quantitative methods (software-driven A/B tests)
    • we tested all bug fixes with quantitative methods (software-driven A/B tests)
    • every 3 months we invited some top sellers and top buyers to a workshop, showed them the product roadmap, and collected their needs and feedback
  • fin-tech products:

    • we run automated tests to check system performance (server workload, security, etc.), using software
    • we test all new features with qualitative methods (interviews, usability tests, and remote tests)
    • we support our customers
    • every 6 months we invite customers and partners, show them the product roadmap, and collect their needs and feedback

I hope this is up your alley :wink:

Thank you so much! I really appreciate your response!

Thank you so much @HAWK!

Hi, @sally,

Congrats on helping your client see that the real issue was information architecture, not the UI - that’s delivering true value through user testing. ;->

When I was supporting a consumer product, we set up user testing sessions every two weeks for a mobile app that was in development. Here are a few key elements of that process that might be useful to you:

  • We scheduled 5 users per day of testing (different people every week).
  • The sessions were 30-45 minutes long and were very focused (didn’t review the whole interface, just a particular feature or task flow).
  • Often we used a prototype instead of a completed build so that we didn’t use a lot of dev resources on code that was going to change. (Sometimes, when it was practical or necessary, we had a “live” product.)
  • We required the product managers and developers/engineers to attend the day of testing so they could see firsthand where customers were struggling. I gave them instructions on how to observe and what to listen for, and told them to take notes that we would review at the end of the day.
  • After the 5th user finished, we gathered for a group debrief and created a list of all the things we heard/observed. Then we prioritized the list and picked the top 3-5 things that needed to change for next time.
  • Developers/engineers went off for 2 weeks (the team’s sprint length at the time) and implemented what they could of the 3-5 key takeaways, and I used that time to tweak the script for the next round of testing.

Many of the developers had never seen an actual user test; they had only consumed reports that summarized the findings. It was much more effective for them to watch in person, because they really felt the users’ pain. (“I can’t believe they don’t see the blue button - it’s right there!!!” But they couldn’t deny what they were seeing with their own eyes.) Sometimes, after the 3 morning sessions, the developers would have already tweaked the UI and wanted me to put the updated design in front of the afternoon participants to see if it addressed the issue. Even if they didn’t have the solution coded by the end of the day, they had a good idea of what they wanted to change next time and were excited to make the improvements and collect more feedback.

I hope this helps answer your question. Feel free to reach out if you have more questions.

~Katy

@katymullally Thanks! That’s actually very useful!

@sally You’re so very welcome! I’m heartened to hear that the info was useful to you. I’m interested in hearing how you proceed with this client and how the research engagement develops. Feel free to DM me or tag me in future posts. ~Katy
