User Research Methods

February 1, 2026

Apply Jobs to Be Done interviews, the Rule of Five for usability testing, and behavioural log analysis to build products users actually need rather than products they describe wanting.

Jobs to Be Done Interviews

Jobs to Be Done (JTBD) is an interview framework that reframes research away from feature preferences and toward the underlying motivation that caused someone to seek a solution. The canonical JTBD interview question is: "What were you doing the day you first went looking for a solution like this?" This question forces the respondent to recall a specific moment and describe the context — what they were working on, what was frustrating, what triggered the search. The answer reveals the job the user was trying to accomplish, which is almost always more specific and more actionable than a feature wishlist would be.

The contrast with conventional product interviews is structural. "What features would you like to see?" produces answers shaped by whatever the user has recently seen or thought about, not by what they actually need. "What happened when you tried to do [specific task] last week?" produces a narrative grounded in real behaviour. JTBD researchers at Intercom found that understanding the jobs customers were hiring their product to do — specifically "making progress in a specific circumstance" rather than "using a messaging tool" — revealed product improvements that surveys asking for feature requests had not surfaced in 12 months of feedback collection.

Usability Testing: The Rule of Five

Jakob Nielsen's research at Nielsen Norman Group established that five users is the point of diminishing returns for a single round of usability testing. Five users consistently surface roughly 85 percent of the usability issues in a design; the sixth and seventh users mostly duplicate findings already observed. This result is counterintuitive for teams trained to seek statistical significance, but usability testing is qualitative discovery, not hypothesis testing: you are looking for problems, not measuring rates.
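
The 85 percent figure comes from the Nielsen and Landauer model, in which each tested user independently uncovers a fixed share of the problems present in the design. A minimal sketch in Python, assuming the average per-user detection rate of 31 percent that Nielsen reports (the true rate varies by product and task):

    # Nielsen/Landauer model: cumulative share of usability problems
    # found after n users, where each user detects a fraction L of the
    # problems. L = 0.31 is the average Nielsen reports; treat it as an
    # assumption, not a constant of nature.
    def problems_found(n: int, detection_rate: float = 0.31) -> float:
        return 1 - (1 - detection_rate) ** n

    for n in range(1, 9):
        print(f"{n} users: {problems_found(n):.0%} of problems found")
    # Five users land near 85 percent; each additional user adds only a
    # few points, which is why repeated small rounds beat one large study.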

The practical implication is that usability testing should happen frequently with small groups rather than occasionally with large groups. Testing five users this week and five more next week, after fixes are applied, produces more actionable learning than testing ten users every six months. The testing protocol is straightforward: give the participant a specific task to complete without assistance ("please find where you would set up an automated report"), observe without intervening, and note every moment of hesitation, confusion, or incorrect click. Recording the session with Hotjar or Microsoft Clarity, both of which capture session replays and click heatmaps at no cost, allows the full team to watch the testing evidence rather than relying on a research summary.

Survey Design Mistakes

The two most destructive survey design errors are leading questions and surveys that exceed seven questions. A leading question embeds an assumption about the user's experience: "How easy did you find the checkout process?" assumes the process has a measurable level of ease and suggests the answer should be on an ease scale. The neutral alternative is "What happened when you tried to complete your purchase?" — which does not presuppose an outcome and allows the respondent to describe problems the researcher did not anticipate. Leading questions produce data that confirms what you already believe, which is the most expensive type of research failure.

Survey length beyond seven questions causes completion rates to drop by more than 50 percent, creating a survivorship bias problem: the respondents who complete a long survey are not representative of the user population. They are typically the most engaged, most frustrated, or most opinionated users — a sample systematically different from the median user whose behaviour you are trying to understand. A seven-question maximum forces disciplined question prioritisation: if you cannot decide which seven questions matter most, you have not yet identified the specific hypothesis you are trying to test, and the survey will produce noise rather than insight.
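
The survivorship bias is easy to see in a small simulation. The numbers below are invented for illustration: assume highly engaged users are far more likely to finish a long survey than typical users, then look at who makes up the completed sample.

    import random

    random.seed(0)

    # Hypothetical population: 20 percent highly engaged users, 80 percent
    # typical. Completion rates for a long survey are assumed, not measured.
    population = [("engaged", 0.70)] * 2_000 + [("typical", 0.15)] * 8_000

    completers = [group for group, p_complete in population
                  if random.random() < p_complete]

    share_engaged = completers.count("engaged") / len(completers)
    print(f"Engaged users: 20% of the population, "
          f"{share_engaged:.0%} of survey completers")
    # Engaged users end up roughly half the respondents, so the results
    # describe your loudest users rather than your median user.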

Behavioural Data from Logs

Product logs contain a more honest record of user behaviour than interviews or surveys, because users cannot exaggerate, minimise, or misremember what they actually clicked. The three most diagnostic metrics available from raw event logs are: feature adoption rate (the percentage of users who triggered a feature at least twice in the first 30 days), drop-off points in multi-step flows, and error message frequency by screen. Feature adoption rate distinguishes features that are discovered and used from features that are discovered and abandoned after one use — the latter are candidates for removal or redesign rather than promotion.
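
Computing the adoption rate is mechanical once events sit in a table. Here is a sketch in Python with pandas; the schema (user_id, event, timestamp plus a signup table) and the sample data are hypothetical, so adapt the column names to your own logs.

    import pandas as pd

    # Assumed raw event log: one row per feature event (hypothetical schema).
    events = pd.DataFrame({
        "user_id":   [1, 1, 2, 3, 3, 3],
        "event":     ["report_created"] * 6,
        "timestamp": pd.to_datetime([
            "2026-01-02", "2026-01-05", "2026-01-03",
            "2026-01-04", "2026-01-10", "2026-02-20"]),
    })
    signups = pd.DataFrame({
        "user_id":   [1, 2, 3, 4],
        "signed_up": pd.to_datetime(["2026-01-01"] * 4),
    })

    # Keep only events inside each user's first 30 days.
    merged = events.merge(signups, on="user_id")
    in_window = merged[merged["timestamp"]
                       <= merged["signed_up"] + pd.Timedelta(days=30)]

    # Adoption = triggered the feature at least twice in the window.
    uses = in_window.groupby("user_id").size()
    adopters = (uses >= 2).sum()
    print(f"Adoption rate: {adopters / len(signups):.0%}")  # 50% here

The same grouping, applied per step of a multi-step flow, gives the drop-off points; applied per screen to error events, it gives error message frequency by screen.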

Mixpanel and Amplitude both provide cohort analysis that visualises the week-by-week activity of users who signed up on a specific day. A cohort of January 2026 signups tracked through weeks 1 through 8 produces a retention curve that reveals exactly when the largest drop-off occurs and what percentage survive to become regular users. A cohort retention curve that drops from 100 percent in week zero to 20 percent in week one indicates an onboarding problem; a curve that holds at 60 percent through week four then drops to 30 percent in week eight indicates a product depth problem that emerges only after initial engagement. Each shape requires a different intervention.
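
Mixpanel and Amplitude render these curves automatically, but the underlying calculation is simple enough to run against raw events: for a signup cohort, retention in week n is the share of the cohort with at least one event n weeks after signing up. A minimal sketch, assuming the same hypothetical schema as above:

    import pandas as pd

    def retention_curve(events: pd.DataFrame, signups: pd.DataFrame,
                        weeks: int = 8) -> pd.Series:
        """Share of a signup cohort active in each week after signup.

        Assumes events has user_id and timestamp columns, and signups has
        user_id and signed_up columns (hypothetical schema).
        """
        merged = events.merge(signups, on="user_id")
        week_n = (merged["timestamp"] - merged["signed_up"]).dt.days // 7
        active = merged.assign(week=week_n)
        cohort_size = signups["user_id"].nunique()
        curve = active.groupby("week")["user_id"].nunique() / cohort_size
        return curve.reindex(range(weeks + 1), fill_value=0.0)

    # A curve falling from 1.0 to 0.2 by week one points at onboarding;
    # one holding near 0.6 through week four before sliding points at depth.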

Frequently Asked Questions

What is the Jobs to Be Done interview framework? JTBD reframes interviews around the specific circumstance and motivation that caused a user to seek a solution, rather than feature preferences. The key question is "What were you doing the day you first went looking for a solution like this?" which surfaces real behaviour rather than invented feature lists.

How many users do I need for usability testing? Five users reveal 85 percent of usability problems in a single round, according to Nielsen Norman Group research. Additional users produce diminishing returns. Run frequent small rounds after each iteration rather than infrequent large studies.

What is the most common survey design mistake? Leading questions that embed assumptions about user experience. "How easy was the checkout?" assumes ease is the relevant dimension. "What happened when you tried to complete your purchase?" is neutral and allows unexpected problems to surface. Also limit surveys to seven questions or fewer to prevent completion rate drop-off.

What free tools can I use for session recording and heatmaps? Hotjar and Microsoft Clarity both provide free session replay and click heatmaps. They capture what users actually click and where they scroll rather than what users report they do, which is the most reliable behavioural data available without a dedicated data infrastructure.

What does a cohort retention curve reveal about product problems? A steep drop from week zero to week one indicates an onboarding problem — users are not finding the core value in their first session. A drop that occurs at week four to eight indicates a product depth problem — users engage initially but run out of reasons to return after initial exploration.
