Prototype and Testing
A design isn’t finished until someone is using it.
Of all the tools in a user experience designer's toolkit, usability testing is arguably the most powerful. By this point we're already one step into iteration, as this chapter brings us full circle, right back to research and discovery. You know you're at the stage to employ usability testing when your team begins to ask questions such as "Will users be able to…?", "Are there any red flags in this design?", "Why are users getting stuck?", "Will our customers understand how to…?", and "Is it intuitive?"
After working through your user journey maps and workflows, it's likely that some of these questions have come up. In this chapter we get practical about usability research: how to build a prototype at the right fidelity, how to recruit the right participants for testing, and a few pointers on conducting and recording research interviews.
Prototyping as a researcher’s love language
While usability testing is the most widely adopted method for testing digital products, there are many opportunities to quickly gather feedback from your users along the journey of building your product or service. This is where prototypes and minimum viable products come in. Prototyping is a mindset as much as it is a technique, and it can take many forms of output and fidelity. It is a researcher's best communication tool, allowing designers to articulate ideas more efficiently and facilitate discussions with less ambiguity than verbal dialogue alone. Prototyping empowers us with the open, iterative mindset necessary to evolve good ideas into great ones. It allows us to kill bad ideas early, by visually testing ideas with ourselves, our team, and most importantly our users. It's a way to practice participatory design.
The Fidelity Framework
Think of fidelity like placing bets. Low fidelity is a small bet—quick to make, low risk, but might not tell the whole story. High fidelity is a bigger bet—more time invested, but potentially higher payoff. Here's when to use each:
Low fidelity:
Early concept exploration
Quick stakeholder alignment
Testing basic user flows
Rough technical feasibility checks

High fidelity:
Final design validation
Developer handoff
Stakeholder buy-in
Design system components
Step 1: Recruiting Users
One of the biggest barriers is often a lack of knowledge about where to find users, so let’s dig in.
Identify representative users
Look back at your personas. With usability testing, we’re generally trying to observe patterns from a variety of representative users. Along with your team, list the characteristics of the target users for your usability study. In addition to specifying the characteristics of those you want to talk to, talk about the kinds of personas you don’t want to see in any of your sessions.
Build criteria for participation
Quantify the criteria you set. For example, when the Gmail team wanted to test designs with "active Gmail users," they translated that into precise, measurable criteria they could use to screen prospective participants: "people who use Gmail as their primary personal email account and receive at least three emails per day."
Screening questionnaire
Next, write a screener questionnaire that can be used to identify and select people who meet each of your precise criteria. Write questions for every one of your criteria. As with any good survey or questionnaire, it's important that your questions aren't leading and don't reveal the "right" answers. Many people will try to give the answers they think you want so they can receive your participation reward or incentive (social desirability bias).
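To make the idea of "precise, measurable criteria" concrete, here is a minimal sketch in Python of how screener answers could be checked against the Gmail-style criteria above. The field names and candidate data are hypothetical, and real screening tools handle this for you; the point is only that each criterion becomes an unambiguous, testable condition.

```python
# Hypothetical screener logic: each criterion from the research plan
# becomes a measurable condition on a participant's answers.

def qualifies(answers: dict) -> bool:
    """Return True if a prospective participant meets every criterion."""
    return (
        answers.get("primary_personal_email") == "Gmail"
        and answers.get("emails_received_per_day", 0) >= 3
    )

# Made-up screener responses for illustration.
candidates = [
    {"name": "A", "primary_personal_email": "Gmail", "emails_received_per_day": 12},
    {"name": "B", "primary_personal_email": "Outlook", "emails_received_per_day": 40},
    {"name": "C", "primary_personal_email": "Gmail", "emails_received_per_day": 1},
]

selected = [c["name"] for c in candidates if qualifies(c)]
print(selected)  # only A meets both criteria
```

Note how candidate B is excluded despite heavy email use: the criteria screen on Gmail specifically, not on email volume alone, which is exactly the precision a good screener buys you.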
Reach out
As soon as you have some idea of what you want to test, start recruiting. It takes a little lead time, so start even before your prototype is ready or your research guide is set. The Nielsen Norman Group suggests recruiting more participants than you need: absent participants and study interruptions are unavoidable, but you can protect yourself from rework and lost time by recruiting a few more participants than you'd typically need.
Social media. Facebook and Reddit groups, as well as online professional networks, can often yield good participants for your usability testing, as long as your screening criteria are robust.
Via friends and family. There's always a chance that your perfect participants are a few connections away. You just have to find them, and the only way to do that is to ask around. Leveraging your network can be cheap and effective, and for basic usability questions (Do people understand my product? Can they complete tasks?) you can still get useful data by testing with friends and family.
Note: For a slightly more mature product, it's recommended to test with people at least one degree removed from you, to get the best results, and to stay true to the measurable criteria from your screener questionnaire. For remote usability testing, several platforms recruit people based on your screening criteria; Dscout, UserTesting, and Pingpong are a few examples.
Taking the time to find the right users for your usability testing will pay off in the quality of the research you gather. It's always worth the effort, and it saves you from wasting time conducting research with users who don't fit the bill.
Step 2: Building the Research Document
The research document consists of:
Your research goal
Open-ended questions that help you achieve your goal
Agenda for the time you have with the user
A good research question is:
Centered around understanding or discovering something new about people, not your product. We can often feel our product or service is the sun and people's lives revolve around it, but our products are just a tiny part of people's lives. A good research question looks at understanding something about people rather than just about products.
About a problem or idea we don't fully understand. Sometimes research is done as a check-box exercise rather than for the right reasons. Ensure the research question addresses a genuine knowledge gap rather than repeating work that has already been done.
About gaining more information to move an idea or concept forward. As above, we want to ensure our research question will help us gather the information that enables us to move forward with an idea or make a better decision.
Some questions can't be answered by asking users directly. Here are a few to avoid, and what to try instead:

"Do people like or want this product/feature/idea?"
Instead, focus on what users need by observing their actions (what they do) rather than their words. Try A/B testing.

"Can or would users use the product/feature/idea?"
Instead, try to answer: How do users interact with the product or service? Have people used something similar before, and what was their experience like?

"Do people find value in the product/feature/idea?"
Instead, analyze data-based indicators and evaluate how well your offering is performing by looking at retention rate and engagement.

"Is this product/feature/idea (good) enough for users?"
Instead, look at whether users are coming back to your product or service, and whether more users are finding it through word of mouth.
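To ground the "retention rate" indicator mentioned above: it is simply the fraction of an initial cohort of users who come back in a later period. A minimal sketch, with made-up numbers:

```python
# Sketch of a cohort retention calculation (all numbers are illustrative).

def retention_rate(users_at_start: int, returning_users: int) -> float:
    """Fraction of an initial cohort still active in a later period."""
    if users_at_start == 0:
        return 0.0  # avoid dividing by zero for an empty cohort
    return returning_users / users_at_start

# Of 200 users who signed up in week 1, 58 were active again in week 4.
rate = retention_rate(users_at_start=200, returning_users=58)
print(f"{rate:.0%}")  # prints 29%
```

Tracked over successive cohorts, a rising retention rate is far stronger evidence of value than participants telling you they "like" the product.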
Step 3: Documenting your user testing sessions
Participants provide the best feedback when they feel comfortable with the moderator, which could be you or one of your team members. Starting with getting-to-know-you small talk can make it easier for participants to feel comfortable and open up, both in person and virtually. You want to establish a professional-but-friendly rapport with participants right from the start.
I usually start these sessions with a brief conversation about the participant's past experiences and existing habits relevant to whatever is being tested. Every session helps the team learn a bit more about the users. Always thank participants for coming, communicate how grateful you are that they are taking the time to participate in the study, and remind them to be open and honest about their experience so the design team can make improvements.
Moderated Testing Session
For task-based usability sessions, you're gauging whether users complete the task(s) with this design, when and where they get stuck or confused, and trying to understand why. Document the patterns you see in what works and what doesn't. Also take note if most users failed to discover any important elements or features. Indicating the relative severity of the different problems helps teams prioritise their work.
Present participants with simple goals and scenarios
Observe them using the product or prototype to complete key tasks
Ask them to think out loud as they work
To get the most useful feedback about the product or service you’ve designed, let participants know that you’re here to find out how to improve the design, so constructive criticism is more than welcome.
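One way to keep notes comparable across sessions and spot the patterns described above is to tally task outcomes per session. A rough sketch, with hypothetical task names and outcome labels:

```python
from collections import Counter

# Hypothetical session notes: one record per participant per task.
# Outcome labels ("completed", "stuck", "failed") are assumptions.
sessions = [
    {"task": "create filter", "outcome": "completed"},
    {"task": "create filter", "outcome": "stuck"},
    {"task": "create filter", "outcome": "stuck"},
    {"task": "archive email", "outcome": "completed"},
    {"task": "archive email", "outcome": "completed"},
    {"task": "archive email", "outcome": "failed"},
]

# Group outcome counts by task so problem areas stand out.
by_task: dict[str, Counter] = {}
for s in sessions:
    by_task.setdefault(s["task"], Counter())[s["outcome"]] += 1

for task, counts in by_task.items():
    total = sum(counts.values())
    failures = total - counts.get("completed", 0)
    print(task, dict(counts), f"failure rate {failures / total:.0%}")
```

Even a simple tally like this makes relative severity visible: a task most participants got stuck on rises to the top of the team's priority list.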
Just as we did with foundational research, it’s always a good idea to supplement our primary research with secondary desk research. For instance, you may want to learn about the competitive landscape, as well as the team’s perceptions of competitive and related products. Competitors’ products are great 'prototypes' to learn from.
Ask yourselves: What are the closest competitors to this <feature, product, idea>? How does this compare to competitors? What behaviours, conventions, or expectations might users bring to this product based on their experiences with other products?
Overcoming Biases
Confirmation bias
One of the most effective methods for overcoming confirmation bias during research is to ask open-ended questions when conducting interviews.
For example, suppose you're conducting an online survey with a large group of participants and one of your questions is: "How do you use our product?" As the designer, you have a few ideas about how you think people use your product, so you may give them options to choose from. If none of the options applies to a user, and they can't select "other" or skip the question, they'll be forced to choose a multiple-choice answer that doesn't match their actual experience. That means you'll end up with false information that skews your research data and potentially provides incorrect evidence for a hypothesis you already had.
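As a small sketch of the fix: a multiple-choice question stops forcing false answers once it has an escape hatch, either an "other" free-text option or the ability to skip. The field names here are assumptions, not any real survey tool's API:

```python
# Hypothetical survey-question structure illustrating the escape-hatch rule.
question = {
    "text": "How do you use our product?",
    "options": ["Managing email", "Scheduling", "Searching archives"],
    "allow_other": True,   # free-text "Other" field offered
    "required": False,     # respondents may skip the question
}

def is_forced_choice(q: dict) -> bool:
    """A question forces false answers if it's required and has no escape."""
    return q["required"] and not q["allow_other"]

print(is_forced_choice(question))  # prints False: this question has escapes
```

A lint like this over a whole questionnaire is a cheap guard against baking confirmation bias into your survey design.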