👉 SURVEYS directly on pages with high bounce rate

Hi, fellow UXers! :wave: I have a question for you … Do you ever use surveys directly on a website or app (on pages with a high bounce rate, or on cancellation, downgrade, or churn pages) to investigate a certain “leak”?

If so, which software do you normally use for that?

Would love to hear about your experience. :slight_smile:

I wouldn’t think surveys would be the answer here, but contextual observation. I’d rather set up a moderated usability test than rely on surveys to give me answers.

To put it bluntly, a survey is the wrong tool for the job.

Surveys are superb for things like gathering a large body of responses from users on directed topics, vetting potential participants for usability tests, and getting answers to a very short list of questions. By their nature, they’re best at providing a large body of quantitative measurements.

But you’re not looking for quantitative data. You don’t need to know what’s happening. You need to know why - and that means you need qualitative data, which is very difficult to sift through, categorize, and draw meaningful conclusions from in large data sets.

There’s another problem worth mentioning, though. A survey on that page isn’t going to give you the direction you need to investigate properly, because the survey won’t affect the bounce rate. Those who are going to leave will leave, and will likely do so without ever filling out the survey. The responses you do get will mostly come from non-bouncers.

This is why moderated usability testing is really the route to go here. You need focused, qualitative feedback on the design in order to make intelligent decisions about iterations. This is best gathered through either contextual studies (which are a lot harder to run during the pandemic) or usability studies (which are much easier to run).

3 Likes

Completely agree with you that usability testing gives us a more in-depth answer to WHY something is happening, and should be the primary source of insights about users. :raised_hands: However, in some cases at the very start of a project, you cannot invest time in elaborate usability testing.

In these cases a survey can still serve a purpose, at least for making quick changes. And we know that small improvements can have a huge impact on user experience. Often it is about iterating bits and pieces here and there. (It really depends on the project; I’m thinking about products that have a solid foundation.)

For me the solution would be using both and getting as many insights as I can in a given amount of time. :star:

Would still love to hear about the best software to use for surveys and best practices. :hugs:

I want to be clear on something… are you suggesting that the survey is the only tool you’re trying to use to research the problem?

If so, that is a woefully terrible approach.

If you’re using it as a starting point for additional research before design, then you may try this approach.

Be forewarned: there is a great deal of risk that comes with this approach, stemming from the fact that it will only catch users who are at risk of page abandonment based on cursor position. It will not catch data from users who abandon the page through non-cursor means, abandon it too quickly to notice/digest the popover, or are so frustrated that they abandon the page even after seeing the popover.
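For anyone unfamiliar with the mechanism being discussed: exit-intent popovers typically watch the cursor and fire when it moves toward the top of the viewport, where the back button and tab controls live - which is exactly why they miss keyboard navigation, instant closes, and touch devices. A minimal sketch of that trigger logic; the threshold value and `showExitSurvey` are hypothetical placeholders, not from any particular survey tool:

```javascript
// Distance from the top edge (in px) that counts as "heading for the chrome".
// 20px is an assumed value; real tools tune this.
const EXIT_ZONE_PX = 20;

// Pure check, kept separate from the DOM so it can be reasoned about:
// the cursor is near the top edge AND still moving upward (negative movementY).
function isExitIntent(clientY, movementY) {
  return clientY <= EXIT_ZONE_PX && movementY < 0;
}

// Browser-only wiring: fire at most once per page view.
if (typeof document !== 'undefined') {
  let shown = false;
  document.addEventListener('mousemove', (e) => {
    if (!shown && isExitIntent(e.clientY, e.movementY)) {
      shown = true;
      // showExitSurvey();  // hypothetical: render the survey popover here
    }
  });
}
```

Note that nothing in this logic can see a Cmd+W, a tap on a phone, or a tab closed before the listener even registers a movement - which is the blind spot described above.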

In the same way that changes can have positive impact, changes can also have extremely negative impact. You need to spend the time to do the right research to give as much guidance as possible before your first iteration - which, in many realistic circumstances, is often the only one you’re given a chance to create.

Too often, once a project is completed, the next iteration gets put on the back burner indefinitely, whatever the intentions of the business or best practices of the team might be.

Do the right work, even if it takes more time. You may never get a chance to do it again.

1 Like

My primary method for research is moderated usability testing. Period. But I’m considering additional tools to achieve more in the same amount of time. (Well, there are always external factors pushing you to deliver results fast …)

Will I rely 100% on these results? No way. But can they, combined with solid, in-depth user interviews, help in some way with UX? I believe they can, taken with a grain of salt.

However, your argument about people being frustrated when seeing the popup/popover is completely valid. It is one of my fears as well.

Thank you for showing me this approach and exposing some of the concerns. :slight_smile:

Hi Sasa, I’m in the process of adding embedded surveys to some key pages across our website (not necessarily the product just yet). I’m hoping to have them set up within a week, so I can report back on progress later if need be. We’re going to try using our own survey tool that we’re building as the main way of collecting prioritised feedback.

Just as a point in response to Doug - email surveys have average response rates of 3-5%, while embedded/popup surveys get 20-30% response rates while also appearing in the exact context you’re looking to examine. I feel like disregarding them right out of the gate is a bit pre-emptive? For smaller research teams, they offer a secondary way to gather insights in the background while focusing on more hands-on work like moderated usability testing.

1 Like

Glad to hear that! Please share your experience with surveys and how satisfied you are with insights. :smiley:

1 Like

It doesn’t matter what size screw you’re looking to drive if all you have is a hammer.

A survey is the wrong tool for this type of research.

Your results are only as good as your research. If you’re using the wrong tools, you will get poor results.

Do good work. If you don’t have the bandwidth to do it, then don’t do it (if you’re able to set your own priorities - I understand not everyone is). Using a survey in place of ethnographic studies or user interviews is not good practice. Using a survey to “gather background information” is highly likely to do little more than bias your research towards unfounded hypotheses, in this particular situation at least.