Recently, I was asked a series of questions around the impact and importance of collecting quality user feedback and thought I’d share. Hopefully you’ll find the answers insightful.
As a UX professional, what does high-quality user research allow you to do better? How does it impact your day to day effectiveness?
User research opens a direct path to designing something that users will adopt. It moves us past our own biases and misconceptions and into a position of working with real data.
By connecting with our users through research, we get a clearer window into who they are as people, gaining a better understanding of their needs, behaviors and problems to ensure that the products we’re building are truly focused on their success.
For product designers, user feedback provides an ongoing defense for design decisions because it’s grounded in real insights from real people, cutting through a lot of the subjective fluff. For product managers, it’s the lifeblood of every initiative on the roadmap, influencing when and how features get built.
User research also gives meaning to our work. Our day-to-day becomes centered around the problems we’re solving, not the projects we’re working on. Likewise, our focus shifts more toward learning from our outcomes rather than just hitting deadlines with our output. It’s our users that we ultimately go to work for every day. Any opportunity we have to engage and learn from them helps us build products that truly benefit the user and makes what we do much more rewarding in the process.
How do you choose what questions to ask when gathering user feedback?
The questions you ask depend on your goals. Let’s look at an easy example: the NPS (Net Promoter Score) survey. The objective is to determine the likelihood of someone promoting your product. For this, a simple 1–2 question ask would suffice, starting with the obvious: “Based on your experience, how likely are you to recommend this product to a friend?” (with a range of options from 0 to 10). Then, to clarify the response, you might ask a follow-up question: “Can you explain your score?” Low impact on the user’s time and relevant to your goals.
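The scoring behind NPS is standard: respondents who answer 9–10 are promoters, 7–8 are passives, 0–6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch of that calculation, purely for illustration:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, passives 7-8, detractors 0-6.
    NPS = % promoters - % detractors, so it ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 4 promoters, 3 passives, 3 detractors out of 10 responses
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 3, 0]))  # → 10
```

Note that passives count toward the denominator but neither add to nor subtract from the score, which is why a wall of 7s and 8s yields an NPS of zero.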
As a quick side note, most surveys only scratch the surface. Using the example above, if your ultimate goal is to gain a more in-depth perspective, you’d want to consider brainstorming a set of more personal questions that could be asked in an interview-style follow-up conversation.
I’ve found the best process for choosing questions is to write down 1–3 things you’re ultimately wanting to learn. From there, brainstorm as many questions as possible (could be general or specific). Then, begin pruning. Take a look at each question and determine how effective it is.
- Is what you’re asking clear?
- Is it focused/on-topic?
- Is it contextually appropriate?
- Is it redundant, or could it be combined with another question for a stronger ask?
- Is it following a logical sequence?
- Is it leading, or could it otherwise compromise the validity of responses?
- And most importantly, does it tie directly back to what you’re trying to learn?
By the end of this exercise, you should have a list of questions that you feel confident will help you arrive at valuable insights.
What channels do you think are most effective for collecting feedback and why?
Context is key. The channels you choose ultimately depend on the type and depth of information you’re looking for. Not all of them are created equal, so you’ll need to have an understanding of how people normally interact with you in those channels and determine which would best facilitate your collection strategy. Here are some that have proven to be the most effective:
Customer Messaging Services (Intercom)
These services make it easy to reach people where they are, in the right context and often in a convenient, easy-to-use experience. Where I currently work, we use Intercom for all of our support-related affairs, both on our marketing site as well as in-app. It’s beautiful, integrates and connects across all of our digital spaces and allows for a number of interactions with users, from surveys to simple message notifications. This is all important because simplicity and convenience greatly increase the likelihood of involvement.
Most of the communication in this channel is in the context of nurturing customer success, so conversation is typically geared toward fielding concerns, troubleshooting issues and clarifying understanding. But, within those conversations are nuggets of information that help paint a picture of what users are feeling, what they’re needing and where we can improve.
In-App Feedback
Incorporating feedback asks directly within the application (where people are and where they’re most engaged) is where you’ll likely get some of the most valuable insight.
Take the previous NPS example. Since the ask is directly related to the user’s experience with the product, an optimal placement for it would be in-app, displayed after a period of time or after a certain number of actions to ensure the user has had enough time with the product to construct a thoughtful response. It’s fresh on their minds and they’re in a state to respond genuinely. Combine that with a short, simple flow and a well-designed interface and you’ve greatly increased the likelihood of success.
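The timing rules above can be sketched as a simple gate. This is a minimal illustration, not a real implementation: the `UserActivity` fields, the `should_show_nps` helper and the thresholds are all hypothetical stand-ins for whatever usage metrics your product actually tracks.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserActivity:
    # Hypothetical usage metrics a product would already track
    days_since_signup: int
    actions_completed: int
    last_surveyed_days_ago: Optional[int]  # None = never surveyed

def should_show_nps(activity: UserActivity,
                    min_days: int = 14,
                    min_actions: int = 10,
                    cooldown_days: int = 90) -> bool:
    """Gate an in-app NPS prompt on actual usage, not just signup time.

    Thresholds are illustrative; tune them to your own product.
    """
    if activity.days_since_signup < min_days:
        return False  # too new to give a thoughtful response
    if activity.actions_completed < min_actions:
        return False  # signed up, but hasn't really used the product
    if (activity.last_surveyed_days_ago is not None
            and activity.last_surveyed_days_ago < cooldown_days):
        return False  # asked too recently; respect their space
    return True

print(should_show_nps(UserActivity(30, 25, None)))  # → True
print(should_show_nps(UserActivity(3, 25, None)))   # → False
```

The cooldown condition is the same courtesy discussed later in this piece: even a well-placed ask becomes noise if it repeats too often.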
It’s important to be where our users are, but with that power comes great responsibility: be respectful of their space. We don’t want to be seen as invaders.
Social Media
Day-to-day, I tend to observe conversations on our social channels, clarifying concerns, liking and sharing posts or just having friendly dialogue. The goal is to build human-to-human connection and rapport, which in turn establishes a safe zone where feedback solicitation and recruitment for research is welcome. Many of the user interviews I conduct originate from conversations I have on these channels.
Email
Being a tried and true medium for outreach, email is a great way to collect feedback. For example, where I work, when new users sign up for our product, we send out an initial welcome email. Then, we follow up down the line to see how things are going and if the user has any concerns or feedback around the product. We often get very insightful replies from these emails.
In the same vein, it’s a great medium for recruiting participants for other research activities, like user interviews and external surveys. For example, when we receive NPS scores and comments, if there’s something we’re exploring at the time, or a comment we feel deserves a deeper look, we’ll follow up by email and invite the respondent to a one-on-one session (similar to an interview).
User Interviews (Google Hangouts)
When it comes to understanding our users, interview-style conversations are by far the most fruitful and comprehensive. They allow us to dig beyond the surface to uncover insights we likely won’t find in more passive asks. They’re also opportunities to put a face to a username, number or persona. Human-to-human connection is invaluable and often underrated, but it benefits everyone: it helps in building rapport, and for the user, a relationship with the brand. The only drawback is the amount of time it takes to plan, recruit, conduct and then synthesize the learnings. Depending on your scenario, you may not have the resources and/or bandwidth.
Usability Sessions (Google Hangouts or GoToMeeting)
To quickly validate product ideas and get immediate feedback on whether they meet expectations and are usable, I regularly conduct usability tests where I ask users to interact with live prototypes of proposed design solutions (InVision is my preferred tool for this) while they vocalize their thoughts, or to leave comments directly on the designs using InVision’s commenting system. These sessions help identify and understand behaviors and patterns, which ultimately guide the experience of the product. Like user interviews, they afford a wealth of perspective and, for the user, a feeling of connection and ownership of the product (“We’re building this together”).
What are some considerations you like to keep in mind when collecting user feedback?
There are three key areas that, when executed well, ensure successful outcomes: framing, planning and delivery.
As with anything, it’s important to start with the why — our purpose. When we finally get feedback, what will it help solve? Why is that important? How might our learnings impact other goals, or the larger company initiatives? Understanding the problem (the reason why we’re doing the research in the first place) frames and drives everything that follows.
With our purpose and effort in focus, the questions now become: Who’s our target? What do we need to know? How do we plan to get it? The objectives at this stage are to define our research goals (specifically what we’re trying to learn); identify our segments, if need be (the feedback we’re looking for may only be relevant from one or two particular groups); brainstorm the data we need to gather; and then select the channel(s) that will be most conducive for that learning. From this exercise, we can begin to generate the right type, quantity and quality of questions (see the earlier section on choosing the right questions). All of this helps us down the line when we’re interpreting our results to determine if any of them shed light on our goals.
With a clear plan forward, it’s time to execute. Here are some things to consider:
The Questions
This is your list of questions. What we ask and how we ask it make or break a successful research effort. To summarize earlier points: Make sure your questions are clear, focused, concise and well-sequenced.
Timing & Placement
Feedback needs to be asked for in the right place at the right time. Revisiting the NPS example, you wouldn’t want to ask users to rate their experience after only two minutes of use, or after they’ve only taken one or two actions. In other scenarios, you may be looking to measure the quality or experience of a certain action, at which point it might make sense to ask immediately after the action, or after the action has been taken a few times.
For example, Lightstream Studio (a product I PM at Lightstream) allows users to broadcast content via browser to a number of platforms like Facebook, YouTube and Twitch. Stream quality is critical for ongoing retention of our product. To maintain a pulse on that part of the experience, an effective option might be to display a small popup after the user ends their broadcast asking them to rate the quality of their stream (could be as simple as a thumbs up/down or emoji reaction).
Efficiency & Courtesy
Be respectful of your user’s time. You want to have the right number of questions to ensure you’re getting the answers you’re looking for, but you also want to keep the barrier as low as possible so the user feels enticed to complete them. There are some scenarios where a longer survey is warranted, and that’s fine, as long as the questions are carefully constructed and the user is aware up-front.
Also, remain sensitive to how often you solicit feedback. Persistent email, phone or direct message prodding quickly turns good intentions into spam and disdain; the same is true for those incessant website/app pop-up asks. Be strategic and aware with your approach, and be considerate of their space.
Usability & Experience
Whether email, website or in-app, if you’re using an interface to collect feedback, it needs to be easy to locate, easy to use and aesthetically pleasing. It should be designed to attract the user to take action. If it sits on top of or alongside other content, make sure there’s enough contrast to help it stand out. Don’t obstruct/block the experience (such as forcing a full takeover modal that can only be closed by finishing a survey). On mobile, buttons should be easy to tap and the position of the ask should take space into account.
Experience is everything when it comes to gathering data from users, and requires a number of careful considerations. A well-planned and designed workflow increases user confidence and encourages both engagement and follow-through.
Incentives & Rewards
Monetary-based motivation is entirely optional, but goes a long way. Rewarding users for their time is a common practice that typically improves their willingness to engage and complete activities. When using rewards, users should be aware of them up-front, and they should be distributed immediately following their participation in those activities.
How does collecting user feedback impact other departments like marketing, product, or sales?
User feedback in customer-centric companies is the fuel that drives every internal working part. Every process and every business decision is powered by a deep understanding of who the user is, what they need to do and how to ensure they can do it successfully. That level of understanding enables teams to become more empathetic towards their users, which in turn allows them to build strong connections and lasting relationships. Marketing, product and sales teams all share that common goal.
When collected intentionally, rigorously and consistently, user feedback helps marketing teams deliver the right messages to activate the right audiences; helps product teams prioritize and build the right features and products; and helps sales teams build strategies that convert newly inspired users into loyal, paying customers.
Parts of this article were featured in Qualaroo’s A Guide To Collecting User Feedback for SaaS.