It is true that the wording of a question can greatly influence the respondent’s answers.
There is bias baked into all survey and interview design; you can't hope to escape it, but you can account for it in a way that lets you separate signal from noise. As long as you are consistent in your approach, you'll learn something from your data, even if it is just "My interview script resulted in too many hypothetical answers!"
For interviews, there is a good primer here, and the last time I did an interview series I followed this general approach:
- No script; just a list of topics/ground to cover, segueing between them in whatever direction suited a more natural conversation
- Directed those conversations toward real-life, recent incidents ("tell me about the last time you…") and avoided "what do you typically do when…" generalizations
- Questioned, diplomatically, everything the participant stated to be important/valuable/necessary: "But why is that important?" This relates to the 5 Whys technique; a JTBD-style (Jobs to Be Done) framing is also a great model
- Left some negative space in the conversation. It can be difficult at first to resist the urge to "help them out" with leading answers, but an awkward silence is better than a contaminated response
As for surveys, there are lots of cognitive biases to fight there, so it depends on your survey's objective. These were the principles I used in the last survey I constructed:
- Used consistent rating scales within a survey
- Minimized order bias by randomizing choice or question order
- Blended a balanced mix of negative and positive phrasing
- Kept longer questions that provide context neutral, indifferent, and free of editorializing
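To illustrate the randomization point above, here is a minimal sketch (the questions, field names, and `randomized_survey` function are all hypothetical) of per-respondent shuffling: question order and unordered choice sets get randomized, while rating scales keep a fixed direction so scales stay consistent within the survey.

```python
import random

# Hypothetical questions for illustration; "scale" marks an ordered rating
# scale whose choice order should stay fixed and consistent.
QUESTIONS = [
    {"text": "How satisfied are you with the product?", "scale": True,
     "choices": ["Very dissatisfied", "Dissatisfied", "Neutral",
                 "Satisfied", "Very satisfied"]},
    {"text": "Which feature do you use most often?", "scale": False,
     "choices": ["Search", "Dashboards", "Alerts", "Exports"]},
]

def randomized_survey(questions, respondent_id):
    """Shuffle question order per respondent, and choice order for
    unordered (non-scale) questions. This spreads order bias across
    the sample rather than eliminating it for any one respondent."""
    rng = random.Random(respondent_id)  # seeded: same respondent, same order
    shuffled = [dict(q, choices=list(q["choices"])) for q in questions]
    rng.shuffle(shuffled)
    for q in shuffled:
        if not q["scale"]:
            rng.shuffle(q["choices"])
    return shuffled
```

Seeding the generator with the respondent ID keeps each respondent's ordering stable if they reload the survey, while still varying the ordering across the sample.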