Sense Making
Individuals can be unpredictable, yet when viewed as part of a group we often find patterns and predictability in behaviour.
The purpose of sense making, or interpreting, is to move from seeing “what is” to “so what.” We take the raw material about our users and their environments and identify what is important in the context of their decision-making and action-taking. This helps isolate the most important pieces of information to be unpacked further. The journey map is already one type of sense making we've attempted, so let's see what other tools we have at our disposal.
Looking for insights
Let’s look at a few more techniques employed by design researchers to synthesise qualitative information. At the most fundamental level, we can start by mapping out all our information in one place. A professor of mine used to say that there’s no better way to see patterns in information than looking at everything you have, all in one place. It’s not unlike what we see on popular detective shows — there’s always that extensive wall of cutouts, criss-crossed by red threads. Luckily we have tools like Miro to make this process easier.
Once we have an overview of our research notes, we can create Affinity Maps — clustering common themes together and prioritising the important ones. Alternatively, if we find contradiction rather than similarity, we can note Identified Tensions — articulating, on two ends of a spectrum, things that contradict each other. These can often become the axes for 2 x 2s — a model often employed by strategists to arrive at actionable recommendations and insights.
In a way, it's useful to think about research synthesis as a form of data visualisation.
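To make the affinity-mapping step concrete, here is a minimal sketch in Python. The notes and theme tags are entirely hypothetical — in practice a researcher assigns themes while reviewing transcripts — but the mechanics are the same: cluster notes under shared themes, then prioritise the themes with the most supporting evidence.

```python
from collections import defaultdict

# Hypothetical research notes, each tagged with the theme a researcher
# assigned during review. Illustrative only, not from a real study.
notes = [
    ("Sets a reminder right after brushing teeth", "existing-habit-trigger"),
    ("Forgets the app exists unless a friend mentions it", "social-prompt"),
    ("Journals every night before bed", "existing-habit-trigger"),
    ("Only exercises when a workout buddy shows up", "social-prompt"),
    ("Tried three habit apps, abandoned all within a week", "abandonment"),
]

def affinity_map(tagged_notes):
    """Cluster notes under their shared theme tags."""
    clusters = defaultdict(list)
    for note, tag in tagged_notes:
        clusters[tag].append(note)
    return dict(clusters)

def prioritise(clusters):
    """Rank themes by how many notes support them."""
    return sorted(clusters, key=lambda t: len(clusters[t]), reverse=True)

clusters = affinity_map(notes)
for theme in prioritise(clusters):
    print(f"{theme}: {len(clusters[theme])} notes")
```

A digital whiteboard does the same job visually; the point of the sketch is that an affinity map is, at heart, a grouping-and-counting exercise over your raw observations.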

Designit also recommends looking for edge cases and contrast in your findings. Averages are typically well-known. Outliers and edge cases, on the other hand, are often surprising. Many well-known macro-trends contain counter-trends that are sometimes the result of some maverick doing things differently, but that can also be an early sign of a pendulum swing.
Observations → Findings → Insights
A finding is an observation that characterises unexpected behaviour or a disconnect in what you see. It is non-judgemental and open to many possible explanations. A finding is rooted in observation, and does not interpret. To help us interpret findings, we can ask why. This takes us one rung up the abstraction ladder, where we arrive at an insight. An insight interprets a finding and explains why this surprising behaviour might be happening.
For example, if you’re observing users trying to form a new habit and arrive at a finding that people tend to attach a new practice either before or after an existing habit, an insight might be: existing habits are a good trigger for initiating behaviour change, or for building a new habit.
How do you identify a good insight?
A good insight increases empathy for the user experience. Good insights are non-obvious, generalisable (grounded in multiple pieces of data, or applicable to multiple participants) and actionable. When articulating an insight, make sure you’re asking ‘so what’ — why is this important, and why does it affect the project? A well-crafted insight allows us to frame better opportunity statements, which is the topic of our next chapter.
This process of asking ‘so what’ or ‘why’ is also known as laddering. Findings ladder up to form insights. Alternatively, asking ‘how’ helps us ladder down from an abstraction.
The iceberg model is another way to arrive at juicy insights. In this method of laddering we go from observed events to underlying mental models through three steps of ‘why’.
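The iceberg laddering above can be sketched as a simple data structure: each rung answers “why?” about the rung below it. The rung contents here are illustrative, extending the habit-forming example from earlier; the level names (event, pattern, structure, mental model) follow the common iceberg-model framing.

```python
# A sketch of the iceberg model as a ladder of abstraction.
# Each rung answers "why?" about the rung before it.
ladder = [
    ("event",        "Participants schedule the new practice right after brushing their teeth"),
    ("pattern",      "Why? New practices are consistently anchored to existing routines"),
    ("structure",    "Why? Existing routines already have reliable time-and-place triggers"),
    ("mental model", "Why? People trust cues they don't have to remember to act on"),
]

def ladder_up(rungs):
    """Walk from observed event up to the underlying mental model."""
    for level, statement in rungs:
        print(f"{level:>12}: {statement}")

ladder_up(ladder)
```

Writing the three whys out explicitly like this makes it obvious when a rung is missing — if you can’t fill a level, you haven’t laddered far enough to claim an insight.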
If you only see one solution to a problem, then you don’t really understand the problem.
"How Might We" Statements
Opportunity Statements are the bridge between the research we conducted and the solutions we will generate. They are a way of framing the problems from user research as opportunities for inventive solutions. The final step of synthesis is to reframe a problem as a possibility.
Framing How Might We statements is one such tool that helps us translate insights into actionable design solutions. These are small but mighty questions that allow us to reframe our insights as opportunity areas and innovate on problems found during research.
The problem we encounter is that How Might We (HMW) statements are often either too vague or too specific.
The trouble with vague, broad HMWs is that they give minimal direction or inspiration. These statements are meant to spark ideas you can later test with users. Without any focus, where should you start?
For example, ‘How might we initiate behaviour change?’ might be too broad in scope and unable to spark tangible ideas.
When HMW statements are too narrow, we lose all the possibilities of innovative ideas that can arise from them. With too much focus, we are stuck on one particular solution already. We want several different ideas to test at the end, so focusing too much on one solution will limit creativity and innovation.
For example, ‘How might we help users build a new habit through peer validation?’ may already point to a product feature, so we might want to broaden the scope of the opportunity.
We can turn the dial on these through a laddering technique — grounding a vague statement by asking how, and broadening a narrow statement by asking why. A good HMW statement helps you focus on solving a problem. In this example, perhaps an ideal middle ground could be to ask ‘How might we nudge our user to change their behaviour by triggering the habits that they are trying to build?’
Ultimately, it needs to be a question that tickles your imagination and guides you into the next (divergent) phase of your project — exploring possible solutions to the right problem.
People Nerds and NNG suggest that phrasing HMW questions positively helps generate a wider scope of solutions or answers to the question.
Decision Making
A useful decision making tool at this stage is the confidence scale. In product development and prototyping, speed and quality are two important variables. Prioritising one is typically at the expense of the other. This tool helps you make that trade-off.
It recommends focusing on quality if you’re confident in the problem you’re solving and the solution you’ve created, and focusing on speed if you’re less confident in the importance of the problem you’re trying to solve (regardless of confidence in the solution). If your confidence in the importance of the problem is high, but your confidence in the solution is low, then you’re in a bind and need to balance speed and quality. This is where rapid prototyping helps move things along, letting you iterate through variations of a solution to meet the user’s needs more precisely.
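The confidence-scale trade-off described above can be captured as a small decision rule. This is a sketch, not part of the original tool: the 0–1 scale and the 0.7 cut-off are illustrative assumptions to make the logic runnable.

```python
def confidence_scale(problem_confidence, solution_confidence, threshold=0.7):
    """Suggest a focus given confidence in the problem and in the solution.

    Confidence values are on an assumed 0-1 scale; the 0.7 threshold
    is illustrative, not prescribed by the confidence-scale tool.
    """
    sure_of_problem = problem_confidence >= threshold
    sure_of_solution = solution_confidence >= threshold
    if sure_of_problem and sure_of_solution:
        return "quality"   # confident on both counts: polish the build
    if not sure_of_problem:
        return "speed"     # validate the problem cheaply first
    return "balance"       # right problem, unproven solution: rapid prototyping

print(confidence_scale(0.9, 0.8))  # quality
print(confidence_scale(0.3, 0.9))  # speed
print(confidence_scale(0.9, 0.2))  # balance
```

The three branches map directly onto the three cases in the paragraph above; the “balance” branch is where rapid prototyping earns its keep.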
Biases
False Consensus Bias
The assumption that others will think the same way as you do. An individual is more likely to focus on information that proves their personal beliefs regardless of the truth. When you’re conducting any type of UX research, you have to be cautious. You can avoid false consensus by limiting the guidance you give users and identifying and articulating your assumptions.
Sunk Cost fallacy
The deeper we get into a project we've invested in, the harder it is to change course without feeling like we've failed or wasted time. To avoid the sunk cost fallacy, break down your project into smaller phases, and then outline designated points where you can decide whether to continue or stop.