Prototype and Testing

A design isn’t finished until someone is using it.

Of all the tools in a User Experience Designer's toolkit, usability testing is arguably the most powerful. With it, we're already one step into iteration, as this chapter brings us full circle, right back to research and discovery. You know you've reached the stage to employ usability testing when you start hearing questions such as "Will users be able to...?", "Are there any red flags in this design?", "Why are users getting stuck?", "Will our customers understand how to...?", and "Is it intuitive?"

After working through your users' journey maps and workflows, it's likely that some of these questions have come up. In this chapter we get practical about usability research: how to build a prototype at the right fidelity, how to recruit the right participants for testing, and a few pointers on conducting and recording research interviews.

Prototyping as a researcher’s love language

While usability testing is the most widely adopted method for testing digital products, there are many opportunities to quickly gather feedback from your users along the journey of building your product or service. This is where prototyping comes into play. Prototyping is a mindset as much as it is a technique — it can take many forms of output and fidelity. It is a researcher's best communication tool, allowing designers to articulate ideas more efficiently and facilitate discussions with less ambiguity than verbal dialogue. Prototyping empowers us with the open, iterative mindset necessary to evolve good ideas into great ones, and it allows us to kill bad ideas early by visually testing them with ourselves, our team, and, most importantly, our users.

This toolkit by IDEO is a comprehensive guide to rapid prototyping.

How to recruit participants?

One of the biggest barriers is often a lack of knowledge about where to find users, so let’s dig in. Where do we begin?

1. Identifying representative users

With usability testing, we're generally trying to observe patterns across a variety of representative users. Along with your team, list the characteristics of the target users for your usability study; looking back at your personas would be useful at this stage. In addition to specifying the characteristics of those you want to talk to, discuss the kinds of personas you don't want to see in any of your sessions.

2. Building criteria for participation

Then figure out precise criteria you can use to identify those users. For example, when the Gmail team wanted to test designs with "active Gmail users," they translated that into precise, measurable criteria they could use to screen prospective participants: "people who use Gmail as their primary personal email account and receive at least three emails per day."

3. Screening questionnaire

Next, write a screener questionnaire that can be used to identify and select people who meet each of your precise criteria, with questions covering every one of them. Like any good survey or questionnaire, it's important to write questions that aren't leading and don't reveal the "right" answers: many people will try to give the answers they think you want so they can get your participation reward or incentive (remember social desirability bias?).
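To make the idea of "precise, measurable criteria" concrete, here is a minimal sketch of how screener responses could be filtered programmatically. The field names, respondent data, and thresholds are all hypothetical, loosely modelled on the "active Gmail user" example above.

```python
# Hypothetical screener filter: the field names and criteria below are
# illustrative, not from any real screener tool.

def is_qualified(response: dict) -> bool:
    """Return True only if a screener response meets every criterion."""
    return (
        response.get("primary_personal_email") == "gmail"
        and response.get("emails_received_per_day", 0) >= 3
    )

# Example screener responses (fabricated for illustration)
responses = [
    {"name": "A", "primary_personal_email": "gmail", "emails_received_per_day": 12},
    {"name": "B", "primary_personal_email": "outlook", "emails_received_per_day": 40},
    {"name": "C", "primary_personal_email": "gmail", "emails_received_per_day": 1},
]

qualified = [r["name"] for r in responses if is_qualified(r)]
print(qualified)  # ['A']
```

The point of the sketch is that every criterion becomes a yes/no check: if any answer is ambiguous or unmeasurable, the criterion itself probably needs rewording.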

4. Reach out

As soon as you have some idea of what you want to test, start recruiting. It takes a little lead time, so start even before your design is finalised. NNG suggests recruiting more participants than you need: absent participants and study interruptions are unavoidable, but a few extra recruits protect you from rework and lost time.

  • Social Media. Facebook and Reddit groups, as well as Bumble professional networks, can often yield good results for recruiting usability-testing participants, as long as your screening criteria are robust. Similarly, LinkedIn's Campaign Manager, though designed for advertising and recruiting for job roles, works well for recruiting professional research participants, too. You'll need to set up a screener questionnaire elsewhere and link to it from your message, to make sure that anyone who receives your message and wants to participate is a great fit for your audience.

  • Via Friends and Family. There's always a chance that your perfect participants are a few connections away; you just have to find them, and the only way to do that is to ask around. Leveraging your network can be cheap and effective, and for basic usability questions (Do people understand my product? Can they complete tasks?) you can still get useful data by testing with friends and family.

  • For a slightly more mature product, it's recommended to test with people at least one degree removed from you, to get the best results, and to stay true to the measurable criteria from your screener questionnaire. For remote usability testing, platforms such as Dscout, UserTesting, and Pingpong can recruit people based on your screening criteria.

Taking the time to find the right users for your usability testing will pay off in the quality of the research you gather. It's always worth the effort, and it ensures you don't waste time conducting research with users who don't fit the bill.


The Fidelity Framework

Think of fidelity like placing bets. Low fidelity is a small bet—quick to make, low risk, but might not tell the whole story. High fidelity is a bigger bet—more time invested, but potentially higher payoff. Here's when to use each:

Low Fidelity is Best For:

  • Early concept exploration
  • Quick stakeholder alignment
  • Testing basic user flows
  • Rough technical feasibility checks

High Fidelity is Best For:

  • Final design validation
  • Developer handoff
  • Stakeholder buy-in
  • Design system components

Real example: when the startup StreamPro built its analytics dashboard, the team started with paper sketches because they needed to validate whether streamers understood basic metrics before worrying about pretty graphs. For Quantcast's DSP redesign, the team went high-fidelity early because the success metric was "feeling more enterprise": the visual polish was actually part of the product strategy.



Conducting and documenting your user testing sessions

Participants provide the best feedback when they feel comfortable with the moderator, which could be you or one of your team members. Starting with getting-to-know-you small talk can make it easier for participants to feel comfortable and open up, both in person and virtually. You want to establish a professional-but-friendly rapport with participants right from the start.

I usually start these sessions with a brief conversation about the participant's past experiences and existing habits relevant to whatever is being tested; every session helps the team learn a bit more about the users. Always thank participants for coming, communicate how grateful you are that they're taking the time to take part in the study, and remind them to be open and honest about their experience so the design team can make improvements.

Finally, the meat of these sessions includes:

  1. Presenting participants with simple goals and scenarios,

  2. Observing them as they use the product or prototype to complete key tasks, and

  3. Asking them to think out loud as they work.

For task-based usability sessions, you're gauging whether users can complete the task(s) with this design, when and where they get stuck or confused, and trying to understand why. Document the patterns you see in what works and what doesn't, and take note if most users fail to discover any important elements or features. Indicating the relative severity of the different problems helps teams prioritise their work.
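Once your session notes record, per participant, which tasks were completed and which issues occurred, tallying them is straightforward. The sketch below is a hypothetical example: the task names, the 1–4 severity scale (1 = cosmetic, 4 = blocker), and the session data are all invented for illustration, not from any real study.

```python
# Minimal sketch for summarising usability sessions.
# All task names, severity ratings, and session data are hypothetical.
from collections import Counter

sessions = [
    {"task": "create account", "completed": True,  "issues": []},
    {"task": "create account", "completed": False, "issues": [("missed signup link", 3)]},
    {"task": "create account", "completed": False, "issues": [("missed signup link", 3)]},
    {"task": "export report",  "completed": True,  "issues": [("unclear icon", 2)]},
]

# Task completion rates: how many participants finished each task
completed = Counter(s["task"] for s in sessions if s["completed"])
attempted = Counter(s["task"] for s in sessions)
for task in attempted:
    print(f"{task}: {completed[task]}/{attempted[task]} completed")

# Recurring issues, sorted by severity then frequency, to help prioritise fixes
issue_counts = Counter(i for s in sessions for i in s["issues"])
for (issue, severity), count in sorted(
        issue_counts.items(), key=lambda kv: (-kv[0][1], -kv[1])):
    print(f"severity {severity}: '{issue}' seen {count}x")
```

Even a simple tally like this makes the prioritisation conversation concrete: a severity-3 issue seen by two of three participants clearly outranks a one-off cosmetic complaint.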

A template to organize your usability study notes.

To get the most useful feedback about the product you’ve designed, let participants know that you’re here to find out how to improve the design, so constructive criticism is more than welcome.

Just as we did with foundational research, it’s always a good idea to supplement our primary research with secondary desk research. For instance, you may want to learn about the competitive landscape, as well as the team’s perceptions of competitive and related products. Competitors’ products are great “prototypes” to learn from.

Ask yourselves: What are the closest competitors to this <feature, product, idea>? How does it compare to those competitors? What behaviours, conventions, or expectations might users bring to this product based on their experiences with other products?


Biases

Confirmation bias

One of the most effective methods for overcoming confirmation bias during research is to ask open-ended questions when conducting interviews.

For example, suppose you're conducting an online survey with a large group of participants and one of your questions is: "How do you use our product?" As the designer, you have a few ideas about how you think people use your product, so you may give them options to choose from. If none of the options apply to the user, and they can't select "other" or skip the question, they'll be forced to choose a multiple-choice answer that doesn't match their actual experience. That means you'll end up with false information that skews your research data and potentially provides incorrect evidence for a hypothesis you already had.


And with that, we come full circle in our journey — from research to research. You can reach out to me at shisingh@unicef.org with additional questions, or if you need a heuristic evaluation, a UI audit, or any other design advice, asset, or assessment.

Good luck on your product journey!
