Validating Your Idea

February 20, 2026

How to validate a startup idea correctly: smoke tests, the rule of 10 interviews, asking about willingness to pay, and why user interviews often mislead founders.

Behaviour Over Opinion: Why Interviews Mislead

Most validation interviews produce false positives. When you ask someone "would you use this?" they imagine an idealised version of the product, remember times they encountered the problem, and say yes because saying no feels unkind. The question measures politeness, not intent. Rob Fitzpatrick documented this problem comprehensively in The Mom Test: even your mother will mislead you if you ask the wrong questions. The solution is to ask about past behaviour rather than future hypotheticals — "tell me about the last time this happened" produces reliable data; "would you ever need this?" does not.

The specific questions that generate reliable signal are rooted in the present and recent past: "How are you solving this problem today?", "What does that cost you in time and money?", and "Have you tried any tools for this?" A person who has spent money and time on imperfect solutions to the problem you're targeting is a genuine potential customer. A person who says "I've never really tried to solve it" is revealing that the problem isn't urgent enough to act on — and that's the most important validation data you can collect before building anything.

The Smoke Test: Measuring Intent with a Landing Page

A smoke test is a minimum viable experiment that measures purchase intent before the product exists. The structure is simple: build a landing page that describes the product as if it exists, includes a price point, and features a "Buy Now" or "Join Waitlist" button. When a visitor clicks the button, they see a "We're still building this — enter your email for early access" page instead of a checkout flow. The click-through rate from landing page to that CTA measures intent, not curiosity.

The baseline that distinguishes genuine demand from polite interest is a click-through rate above 5–10% from relevant traffic. Send the landing page to 100 people who match your target customer profile — not friends, not your network as a courtesy, but people who have the problem you're solving — and measure conversions. Specificity of the traffic matters more than volume: 20 clicks from 50 targeted potential customers is stronger validation than 100 clicks from 2,000 cold visitors. The smoke test answers the only question that matters before writing code: "Are there enough people who want this to build a business?"
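The smoke-test arithmetic above can be sketched in a few lines. The 5% floor, the visitor counts, and the function names here are illustrative assumptions, not a standard methodology:

```python
def click_through_rate(targeted_visitors: int, cta_clicks: int) -> float:
    """Fraction of landing-page visitors who clicked the intent CTA."""
    return cta_clicks / targeted_visitors

def clears_baseline(rate: float, floor: float = 0.05) -> bool:
    """True when the CTR sits at or above the 5-10% demand baseline."""
    return rate >= floor

targeted = click_through_rate(50, 20)    # 20 clicks from 50 targeted visitors
cold = click_through_rate(2000, 100)     # 100 clicks from 2,000 cold visitors

# Targeted traffic converts at 40%; cold traffic sits at the 5% floor --
# the same absolute click count, but very different evidence of demand.
print(f"targeted: {targeted:.0%}, cold: {cold:.0%}")
```

The comparison makes the article's point concrete: both samples clear the numeric floor, but only the targeted one tells you something about your actual customer.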

The Rule of 10 Interviews

Ten independent confirmations of the same problem constitute genuine signal. Three or four confirmations leave too much room for sampling bias — you may have found the only four people in the world with this problem. The independence criterion is critical: the ten people should not know each other, should not have been referred through a single community, and should not have been primed by a description of your solution before expressing the problem. If people surface the problem unprompted when you ask about their workflow, that's independent confirmation; if they agree with a problem you described to them, that's agreement, not confirmation.

The rule of 10 also has a quality threshold: the problem should be mentioned within the first five minutes of the conversation without prompting from the interviewer. A problem that only surfaces when you ask "do you ever experience X?" is a problem that people tolerate rather than actively want to solve. A problem that people bring up spontaneously when describing their current process is a problem that frustrates them enough to pay for a solution. Document every interview in a spreadsheet with three columns: what they said about their current workflow, what problem they named, and what they've tried to solve it. After ten interviews, patterns emerge clearly.
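The three-column interview log described above can live in any spreadsheet, but a minimal sketch as structured data shows how the pattern-counting works. The column names and sample rows here are hypothetical:

```python
from collections import Counter

# One dict per interview: current workflow, the problem they named
# unprompted, and what they've already tried (illustrative rows).
interviews = [
    {"workflow": "reconciles invoices by hand each Friday",
     "problem": "duplicate invoice entries",
     "tried": "Excel macros, a shared checklist"},
    {"workflow": "copies data between billing and CRM tools",
     "problem": "duplicate invoice entries",
     "tried": "nothing"},
    {"workflow": "exports CSVs from three systems at month end",
     "problem": "slow month-end close",
     "tried": "a paid reporting add-on"},
    # ... remaining rows from the ten interviews
]

# Count how often each problem was named; ten independent mentions
# of the same problem is the signal the rule of 10 looks for.
problem_counts = Counter(row["problem"] for row in interviews)
print(problem_counts.most_common())
```

Cross-checking the "tried" column against the counts also flags the other signal from the interviews: a frequently named problem that nobody has attempted to solve may not be urgent enough to pay for.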

Asking 'Will You Pay?' the Right Way

Direct questions about willingness to pay ("would you pay for this?") produce unreliable answers because people systematically overestimate their future willingness to pay when asked hypothetically. The reliable framing is a constraint-based question asked within the first five minutes: "What are you currently paying to solve this problem, or what would you pay monthly to have it solved today?" The addition of "today" removes the hypothetical and anchors the answer in present reality.

Pre-sales are the strongest form of payment validation: customers who submit a credit card number for early access are demonstrating real commitment, not expressing abstract interest. A waitlist with credit card on file converts at 20–40% to paid customers at launch; a waitlist with email only converts at 3–8%. The practical implementation is a landing page with a Stripe checkout for a founding member plan at a meaningful discount — if 20 people pay $29 for early access, you have both validation and your first $580 in revenue. If zero people pay, you've learned the most important thing you could learn before spending three months building. The Mom Test principle applies here too: the only data that can't be faked is money changing hands.
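The pre-sale numbers above reduce to back-of-envelope arithmetic. The conversion ranges are the article's figures; the waitlist size is a hypothetical input:

```python
founding_price = 29        # founding-member plan price, USD
founding_members = 20
presale_revenue = founding_price * founding_members
print(f"pre-sale revenue: ${presale_revenue}")  # $580

waitlist_size = 200
# Card-on-file waitlists convert at 20-40% to paid; email-only at 3-8%.
# Integer arithmetic keeps the bounds exact.
card_low, card_high = waitlist_size * 20 // 100, waitlist_size * 40 // 100
email_low, email_high = waitlist_size * 3 // 100, waitlist_size * 8 // 100
print(f"card on file: {card_low}-{card_high} paid customers at launch")
print(f"email only:   {email_low}-{email_high} paid customers at launch")
```

Running the same waitlist size through both ranges shows why the credit-card variant is worth the extra friction: the expected launch revenue differs by roughly an order of magnitude.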

Frequently Asked Questions

How many user interviews do I need before building? Ten is the minimum for problem validation; five is enough to identify that you're asking the wrong questions and need to reframe. The ceiling is around 20–25 for a focused problem space — beyond that, you're hearing the same themes repeatedly and the marginal information value drops sharply. The goal is to reach saturation: the point where new interviews stop surfacing new insights. Most founders reach saturation between 15 and 20 interviews for a single customer segment.

Who should I interview for problem discovery? Interview people who have the problem, not people who might have the problem. The fastest way to identify them is to find communities where people already complain about the problem you're solving: specific subreddits, Slack communities, LinkedIn groups, or in-person professional associations. Someone who posted "I'm so frustrated with [process X]" six months ago is a better interview candidate than someone you hypothesise has the problem.

What is the difference between problem validation and solution validation? Problem validation confirms that the problem exists, is painful, and is common enough to support a business. Solution validation confirms that your specific solution solves the problem in a way that customers prefer over current alternatives. Do problem validation before solution validation — it's much cheaper to discover that the problem isn't real than to discover it after you've built the solution. Most founders conflate the two and start pitching their solution before they've confirmed the problem.

Can I validate using online surveys instead of interviews? Surveys confirm patterns at scale; interviews generate hypotheses. If you have a very large audience, a survey can confirm which of three problem framings resonates most — that's a valid use of surveys. Using surveys for initial problem discovery produces shallow data because survey questions are closed-ended and can't follow an interesting thread the way a conversation can. Start with interviews, use surveys to confirm at scale.

What does a failed validation look like? Failed validation looks like: fewer than 5 of your 10 target interviews confirm the problem independently, no one on your landing page clicks the CTA, and nobody will pay anything for early access. These are all useful outcomes — they tell you to reframe the problem, change the target customer, or abandon the idea before building. Most founders treat failed validation as embarrassing rather than valuable. The cost of learning early is three weeks of interviews and a landing page; the cost of learning late is a year of development and a failed launch.
