How would you




Could someone please recommend an effective way to user test this particular screen?

Please note
I am not after feedback on the UI or any of the visuals. Please only provide thoughts on what the most effective user-test methods could be, if possible.

Some context:

The smartwatch’s biometric capabilities can detect whether the user is about to experience an asthma attack…When the user experiences an asthma attack, the device displays what’s shown in the image I’ve included.

I would imagine that an asthma attack is the essential context needed to ensure the effectiveness of the user-test…I don’t know because I’m too much of a rookie at this.

Please help.


Get a bunch of asthmatic users in a room and get them to test the prototype.


Figure out what you want to test; does the device accurately detect attacks, does it give adequate warning, does the user understand what the device is saying etc.

Would putting users in a room with the device be adequate, or do they need to be in a more typical environment, such as a noisy train station, jogging, or driving a car?


I laughed pretty hard at this!


What happens when the user clicks yes? Do they get the option to change their mind? Asthma attacks can progress from routine to fatal. Nobody knows they’re going to die when they start wheezing.


This particular screen displays when the user’s asthma levels are high or even critical. A different screen will appear when things are just starting. Anyway, when the user clicks “No”, the interface will assist them in contacting their emergency contact or an ambulance – something like that.

I do appreciate that you’ve highlighted this concern, but this thread really is just about selecting the right testing methods and setups.


Well, your original post was very vague, and you seem to be expecting people to figure out your job for you without even explaining what you have got. My point was: do you have interactive prototypes, pretty pictures, or a CGI graphic of something that doesn’t exist?

Also, if a user clicks yes, why would the device give them emergency services contacts? It sounds like you have a lot to figure out first.

If you snap at people on here, then no one will bother to help you. This is a community for sharing, and often responses with a different angle than you intended can help develop your thoughts. This is not a place for new members to come along and make demands without contributing.

Goodbye.


Rachel, it seems I’ve miscommunicated my intentions – not in regards to what the UI does, but in what I’m hoping for from anyone who takes the time to help by sharing their thoughts on what I need(ed) help with, and in particular, how I’ve expressed that to you.

I assure you that I was not snapping at all. If you’re referring to how I concluded my previous response, please believe me when I say that everything was meant with the utmost sincerity. I should’ve included words that would’ve reiterated my intentions (in this case, expressing my gratitude for your willingness to help), like “Many thanks regardless” or even just “:)”…lesson learned.

For what it’s worth, the reason I chose the words used to conclude my previous response is that previous instances of me asking forum members for their thoughts on this same question all ended with the conversation being diverted off-topic – to visual design, for example.

And just as with this thread, I really did appreciate that people were passionate about trying to help as best they can, but the fact remains that the focus was selecting the best test methods. The premise was that all preliminary work (research, synthesis, ideation, etc.) that led to the proposed solution featured in the image of the watch had already been done and validated.

TLDR: I honestly did not mean what you thought I meant with the way I concluded my previous response.

I’ve emailed a copy of this message to you directly as well.


How have I managed to not notice this reply until now?? Damn it!

Anyway, great points and questions. The “Yes” button accounts for the possibility of the device misreading the user’s biometrics.

But as for what I wanted to test, I just want to test how easily users would be able to interact with this screen if they were physically impaired.

For instance, I’ve made the “No” button as easily accessible as possible by making it as large and as visually contrasted as possible…all while trying to avoid obscuring either of the other two elements (if the user is not having an asthma attack and is just engaged in exercise, for example).

My doubts and confusion stem from how different everything would be in the context of an actual asthma attack. I mean, hypothetically, testing this UI while the user experiences an asthma attack is surely not a viable test strategy? Or is it?


I think you need a fully working prototype capable of detecting asthma attacks, with a touch display. Give it to one or more asthmatic users. Let the software log everything shown on the display and all user actions. After a while, interview the users and analyze the logs.

My personal concern is whether it is possible to read and understand the “Are you okay” question during an attack. And why do you need the “No” button? I imagine the following algorithm: attack detected -> Are you OK? -> if the user does not respond, start the emergency sequence.
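The algorithm above could be sketched roughly like this (a minimal illustration only – the timeout value, function names, and return values are all assumptions, not part of any real device API):

```python
# Assumed grace period before escalating; a real device would need to tune this.
RESPONSE_TIMEOUT_S = 30

def handle_detected_attack(prompt_user, wait_for_response, start_emergency_sequence):
    """Sketch of: attack detected -> 'Are you OK?' -> no response -> emergency.

    The three callables are hypothetical hooks standing in for the device's
    display, input, and emergency-contact subsystems.
    """
    prompt_user("Are you OK?")
    response = wait_for_response(timeout_s=RESPONSE_TIMEOUT_S)
    if response is None:
        # No response within the window: assume the user is incapacitated.
        start_emergency_sequence()
        return "emergency"
    # An explicit "yes" dismisses the alert; anything else escalates.
    return "dismissed" if response == "yes" else "emergency"
```

The design choice under discussion is exactly the `response is None` branch: auto-escalation removes the need for a “No” button, at the cost of false alarms when the user simply doesn’t notice the prompt.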


Hi Anlev,
Thanks for that. In response to why a “No” button is needed: I really like your idea of starting the emergency sequence automatically if the user doesn’t respond. It was actually one of the ideas that my peers and I had to choose from to move forward with and explore further. The problem we found with this revolves around the possibility of false alarms – if a user doesn’t respond due to reasons other than being incapacitated by asthma, for example…

In response to your first paragraph, that is certainly an idea! This is only for a course project (with no budget, etc.), but I’ll certainly try to get some feedback from peers on your suggestion.

Thanks again


I suggest you consider the following option:


In this case, a user does not need to read the “Are you ok” question. I imagine reading those small letters (or remembering the question) would be challenging during an attack. And without the question, the answers Yes and No have no meaning.


I see what you did there, and I like it. A yet-to-be-validated assumption, though, is that the user will know this is the asthma product communicating with them…I’ll see if I can find some screenshots of ideas from others working on this same thing.

For now, that mockup of yours reminds me of something I did earlier in the project:

Thanks again for your help, you’re probably right about the labelling of the buttons and the extra work involved with reading the small sentence.