User test procedure question


#1

Hi beautiful Uxers!

I have a question about a possible procedure for a user testing session.
So, I want to know whether it is good practice, and whether it will give reliable results, to show the same page twice to the same people in a user testing session: first the actual page we currently have online (the one we think has flaws), and then the same page with the improvements and modifications we think will fix those flaws (this being a prototype).

Is this something you have done before? Or should I just recruit another 6 different participants to show the second, improved page?

Thanks so much!


#2

hi @martam

In my experience, I have only done A/B testing in a quantitative way, very often with software running in the background of the web app/website.

I don’t know if, with a qualitative test of 6 users, you will be able to collect enough feedback to get a clear picture of how to finish the design task.

Maybe what you can do is build a clickable prototype for each version and measure the task success rate.
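For what it's worth, "task success rate" here just means the fraction of participants who complete the task on each version. A minimal Python sketch of the arithmetic (the version names and pass/fail values are made up for illustration):

```python
# Hypothetical results: one pass/fail record per participant, per version.
results = {
    "version_A": [True, True, False, True, False, True],
    "version_B": [True, True, True, True, False, True],
}

def success_rate(outcomes):
    """Fraction of participants who completed the task."""
    return sum(outcomes) / len(outcomes)

for version, outcomes in results.items():
    print(f"{version}: {success_rate(outcomes):.0%}")
```

With only 6 participants per version, treat these percentages as directional, not statistically significant.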

I hope this helps :slight_smile:


#3

Thanks dopamino for your answer, really appreciate it.
Just to clarify, this is not an A/B test but a user test where we have a “lab” and invite participants.


#4

hi @martam

you’re welcome!

Yes, I understood, and maybe I was not 100% clear in expressing myself (my English is far from perfect).
My point is that showing two versions of the same UI (which to me is an A/B test) to 6 people might not provide the right feedback to optimise the design deliverable (I don’t know whether you are going to test wireframes, mockups, etc.).

Does my answer make more sense now?


#6

Oh! Yes, now it does :slight_smile:

So yes, I do have a clickable prototype for the second part; the first part of the test will be done on the fully functional live page.

Given this, would you show those same 6 participants the prototype after they have seen and interacted with the fully functional page, or would you choose another 6? In other words, giving them the same task to complete twice, on two different screens that have the same function but a different look.

This is difficult; my English isn’t close to perfect either! :stuck_out_tongue:


#7

I believe it is not a good idea to test two prototypes in a single session. The learning curve will affect the feedback, and the user will not really feel which solution is more natural and ergonomic for her/him.

I would follow another approach:

  1. show the prototype to the 6 users
  2. collect the findings and the feedback during and after the execution of the task
  3. after the execution, ask them precise questions about the UI components and UX patterns you’d like to check and measure (e.g. buttons, calls to action, and navigation patterns)

For instance, if you want to measure whether a specific button is visible, you could ask: "On screen XYZ, did you notice a button somewhere on the page?"
If the user answers yes and is able to remember its position, then the UI is performing well.
If the user struggles to remember it, the UI is not performing well.
In that case, you could show her/him version B and ask: “In your opinion, is this a better place for the button?”

FYI I found this post very interesting: https://blog.taplytics.com/how-booking-com-a-b-tests-like-nobodys-business-8158fd75d6b6


#8

This is great!
Thanks so much for your time I will try that :slight_smile:


#9

Hi @martam
Another way of doing this is to run the same task with both versions of the page with the same participant, BUT ensure that you mix the order in which you display the designs. This is to minimise any order bias.

For example:
Participant 1 will see Design 1, and then Design 2.
Participant 2 will see Design 2, and then Design 1.
Participant 3 will see Design 2, and then Design 1.
etc
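The alternation above can be sketched as a simple assignment script. This is just an illustration of the counterbalancing idea, not part of any tool mentioned in this thread; the design names and participant count are placeholders:

```python
import random

DESIGNS = ["Design 1", "Design 2"]

def counterbalanced_orders(n_participants, seed=0):
    """Alternate which design each participant sees first,
    so order effects are balanced across the sample."""
    orders = []
    for i in range(n_participants):
        first = DESIGNS[i % 2]            # alternate the starting design
        second = DESIGNS[(i + 1) % 2]     # the other design goes second
        orders.append((first, second))
    # Optionally shuffle which participant gets which order,
    # while keeping the 50/50 split of starting designs.
    random.Random(seed).shuffle(orders)
    return orders

for p, (a, b) in enumerate(counterbalanced_orders(6), start=1):
    print(f"Participant {p}: {a} first, then {b}")
```

With 6 participants this guarantees 3 see each design first, which is the point of the mixing Ruth describes.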

You want to be testing how well a participant can complete the task, rather than asking their opinion on which one they prefer.
Also, consider running a series of small testing sessions so you can iterate on the designs and keep testing, rather than just the one-off session with 6 participants.
Hope this helps.


#10

Thanks so much Ruth!


#11

So Design 1 is the one with flaws and Design 2 is the flawless one, and the point of showing Design 1 is so they don’t get biased, right?


#12

We don’t know if Design 2 is flawless. Thinking that creates a bias before the sessions even begin.
It’s important to have hypotheses for each thing/element that you’re testing, and to try to prove them wrong (not just to prove that you’re right).
Hope that makes sense :slight_smile:


#13

Yeah, you’re right. What do you do, Ruth?


#14

I’m a design researcher from Australia :slight_smile:


#15

Nice, in which company?