A design isn't finished until someone is using it.
Of all the tools in a User Experience Designer's tool kit, usability testing is arguably the most powerful. In a sense, we're already one step into iteration, as this chapter brings us full circle, right back to research and discovery. You know you're at the stage to employ usability testing when your team begins to ask questions such as "Will users be able to…?", "Are there any red flags in this design?", "Why are users getting stuck?", "Will our customers understand how to…?", and "Is it intuitive?"
After working through your user journey maps and workflows, it's likely that some of these questions have come up. In this chapter we get practical about usability research: how to build the right-fidelity prototype, how to recruit the right participants for testing, and a few pointers on conducting and recording research interviews.
Prototyping as a researcher's love language
While usability testing is the most widely adopted method for testing digital products, there are many opportunities to quickly gather feedback from your users along the journey of building your product or service. This is where prototypes and minimum viable products come in. Prototyping is a mindset as much as it is a technique: it can take on many forms of output and fidelity. It is a researcher's best communication tool, allowing designers to articulate ideas and facilitate discussions more efficiently and with less ambiguity than verbal dialogue alone. Prototyping empowers us with the open, iterative mindset necessary to evolve good ideas into great ones. It allows us to kill bad ideas early, by validating ideas with ourselves, our team, and, most importantly, our users. It's a way to practice participatory design.
This toolkit by IDEO is a comprehensive guide to rapid prototyping
The Fidelity Framework
Think of fidelity like placing bets. Low fidelity is a small bet: quick to make, low risk, but it might not tell the whole story. High fidelity is a bigger bet: more time invested, but potentially a higher payoff. Here's when to use each:
Low fidelity: early concept exploration, quick stakeholder alignment, testing basic user flows, rough technical feasibility checks.
High fidelity: final design validation, developer handoff, stakeholder buy-in, design system components.
[Source]
Step 1: Recruiting Users
One of the biggest barriers is often a lack of knowledge about where to find users, so let's dig in.
Identifying representative users
Look back at your persona archetypes. With usability testing, we're generally trying to observe patterns across a variety of representative users. Along with your team, list the characteristics of the target users for your usability study. In addition to specifying the characteristics of those you want to talk to, discuss the kinds of personas you don't want to see in any of your sessions.
Building criteria for participation
Qualify the criteria you set. For example, when the Gmail team wanted to test designs with "active Gmail users," they translated that into precise, measurable criteria they could use to screen prospective participants: "People who use Gmail as their primary personal email account and receive at least three emails per day."
Screening questionnaire
Next, write a screener questionnaire that can be used to identify and select people who meet each of your precise criteria. Write questions for every one of your criteria. Like any good survey or questionnaire, it's important that your questions aren't leading and don't reveal the "right" answers. Many people will try to give the answers they think you want so they can earn your participation reward or incentive (social desirability bias).
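Once screener responses come in, applying the criteria can be as mechanical as a small filter. A minimal sketch, using the "active Gmail users" criteria described above; the field names and candidate data are illustrative, not from a real study:

```python
# Hypothetical screener check for the "active Gmail users" criteria.
# Field names and thresholds below are illustrative assumptions.

def passes_screener(response: dict) -> bool:
    """Return True if a prospective participant meets every criterion."""
    return (
        response.get("primary_personal_email") == "Gmail"
        and response.get("emails_received_per_day", 0) >= 3
    )

candidates = [
    {"name": "A", "primary_personal_email": "Gmail", "emails_received_per_day": 12},
    {"name": "B", "primary_personal_email": "Outlook", "emails_received_per_day": 30},
    {"name": "C", "primary_personal_email": "Gmail", "emails_received_per_day": 1},
]

qualified = [c["name"] for c in candidates if passes_screener(c)]
print(qualified)  # ['A']
```

The point is that every criterion maps to one measurable check; if a criterion can't be expressed this precisely, the screener question behind it is probably too vague.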
Reaching out
As soon as you have some idea of what you want to test, start recruiting. It takes a little lead time, so start even before your prototype is ready or your research guide is set. NNG suggests recruiting more participants than you need: absent participants and study interruptions are unavoidable, but you can protect yourself from rework and lost time by recruiting a few more participants than you'd typically need.
Social Media. Facebook and Reddit groups, as well as online professional networks, can often yield good results for recruiting usability-testing participants, as long as your screening criteria are robust.
Via Friends and Family. There's always a chance that your perfect participants are a few connections away. You just have to find them, and the only way to do that is to ask around. Leveraging your network can be cheap and effective, and for basic usability questions (Do people understand my product? Can they complete tasks?) you can still get useful data by testing with friends and family.
Note: For a slightly more mature product, it's recommended to test with people at least one degree removed, to get the best results, and to stay true to the measurable criteria from your screener questionnaire. For remote usability testing, several tools will recruit people based on your screening criteria; Dscout, Usertesting, and Pingpong are a few such platforms.
Taking the time to find the right users for your usability testing will pay off in the quality of the research you gather. It's always worth the effort, and it ensures you don't waste time conducting research with users who don't fit the bill.
Step 2: Building the Research Document
This consists of: (1) your research goal, (2) open-ended questions that help you achieve your goal, and (3) an agenda for the time you have with the user.
A good research question is:
Centered around understanding or discovering something new about people, not your product. We can often feel our product or service is the sun and people's lives revolve around it but our products are just a tiny part of people's lives. A good research question looks at understanding something about people rather than just about products.
About a problem or idea we don't fully understand. Sometimes research is done as a check-box exercise rather than for the right reasons. Ensure the research question addresses a knowledge gap or hasn't already been answered.
About gaining more information to move an idea or concept forward. As above, we want to ensure our research question will help us gather the information that enables us to move forward with an idea or make a better decision.
Do people like or want this product/feature/idea?
Focus on what users need by observing their actions (what they do) instead of their words. Try A/B testing.
Can or would users use the product/feature/idea?
Instead, try to answer: How do users interact with the product or service? Have people used something similar before, and what was their experience like?
Do people find value in the product/feature/idea?
Analyze data-based indicators such as retention rate and engagement to evaluate how well your offering is performing.
Is this product/feature/idea (good) enough for users?
Look at whether users come back to your product or service and whether more users are finding it through word of mouth.
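The data-based indicators above (whether users come back) can be computed directly from usage logs. A minimal sketch of a simple cohort retention rate, with made-up period labels and user IDs:

```python
# Sketch: retention rate as the share of users active in a starting
# period who are still active in a later period. The week labels and
# user IDs below are illustrative assumptions, not real data.

def retention_rate(active_users_by_week: dict, start: str, later: str) -> float:
    """Fraction of the `start` cohort still active in `later`."""
    cohort = active_users_by_week[start]
    if not cohort:
        return 0.0
    retained = cohort & active_users_by_week[later]
    return len(retained) / len(cohort)

weeks = {
    "week1": {"u1", "u2", "u3", "u4"},
    "week4": {"u2", "u4", "u5"},
}

print(retention_rate(weeks, "week1", "week4"))  # 0.5
```

Note that "u5" doesn't count toward retention: new users in the later period speak to growth (and possibly word of mouth), not to whether the original cohort found lasting value.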
[source]
Step 3: Documenting your user testing sessions
Participants provide the best feedback when they feel comfortable with the moderator, which could be you or one of your team members. Starting with getting-to-know-you small talk can make it easier for participants to feel comfortable and open up, both in person and virtually. You want to establish a professional-but-friendly rapport with participants right from the start.
I usually start these sessions with a brief conversation about the participant's past experiences and existing habits relevant to whatever is being tested. Every session helps the team learn a bit more about the users. Always thank participants for coming, express gratitude for the time they're taking to participate in the study, and remind them to be open and honest about their experience so the design team can make improvements.
Moderated Testing Session
For task-based usability sessions, you're gauging whether users complete the task(s) with this design, when and where they get stuck or confused, and trying to understand why. Document patterns you see about what works and what doesn't. Also take note if most users failed to discover any important elements or features. Indicating the relative severity of the different problems helps teams prioritise their work.
Present participants with simple goals and scenarios, observe them using the product or prototype to complete key tasks, and ask them to think out loud.
To get the most useful feedback about the product or service you've designed, let participants know that you're here to find out how to improve the design, so constructive criticism is more than welcome.
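Documenting patterns and indicating the relative severity of problems, as described above, can be kept honest with a simple tally across sessions. A minimal sketch; the issue names and the 0-4 severity ratings are illustrative assumptions:

```python
# Sketch: tallying observed usability issues across sessions and
# ranking them by severity, then by how often they were observed.
# Issue names and severity ratings (0-4) are made up for illustration.

from collections import Counter

observations = [
    ("checkout button not discovered", 4),
    ("label wording confusing", 2),
    ("checkout button not discovered", 4),
    ("tooltip truncated", 1),
    ("checkout button not discovered", 4),
]

counts = Counter(issue for issue, _ in observations)
severity = {issue: sev for issue, sev in observations}

# Most severe first; ties broken by frequency of observation.
ranked = sorted(counts, key=lambda i: (-severity[i], -counts[i]))
for issue in ranked:
    print(f"{issue}: severity {severity[issue]}, seen {counts[issue]}x")
```

A ranked list like this gives the team a defensible order of work: an issue three participants hit at severity 4 clearly outranks a one-off cosmetic glitch.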
Just as we did with foundational research, it's always a good idea to supplement our primary research with secondary desk research. For instance, you may want to learn about the competitive landscape, as well as the team's perceptions of competitive and related products. Competitors' products are great 'prototypes' to learn from.
Ask yourselves: What are the closest competitors to this <feature, product, idea>? How does this compare to competitors? What behaviours, conventions, or expectations might users bring to this product based on their experiences with other products?
Biases
Confirmation bias
One of the most effective methods for overcoming confirmation bias during research is to ask open-ended questions when conducting interviews.
For example, suppose you're conducting an online survey with a large group of participants and one of your questions is: "How do you use our product?" As the designer, you have a few ideas about how you think people use your product, so you may give them options to choose from. If none of the options apply to a user, and they can't select "other" or skip the question, they'll be forced to choose a multiple-choice answer that doesn't match their actual experience. That means you'll end up with false information that skews your research data and potentially provides incorrect evidence for a hypothesis you already had.
