Idea Screening in New Product Development

To keep growing, both startups and established brands need a new product pipeline. One of the tools that can be used toward that goal is idea screening—the process of evaluating new product ideas at an early stage to help identify promising ideas for future product development. Idea screening should not be thought of as a siloed process: qualitative approaches are essential inputs that feed it.

But qualitative researchers should have a good understanding of what happens after their work has ended. Quantitative idea screening helps to identify the most promising early-stage opportunities and accelerate them to the next phase of new product development. This could take the form of a minimum viable product, a new formulation, or a line extension. Simply put, taking a systematic approach to idea screening helps a company align scarce marketing resources with the candidates that have the most market potential. This article describes an approach that qualitative researchers can use to help their clients build a systematic way to identify new product opportunities. We will outline a practical framework using a set of standard measures. Larger organizations and funding mechanisms (e.g., venture capital firms) use this approach—but it is not limited to companies with deep pockets. Smaller companies can use it, too.

What is the Role of Idea Screening?

Idea screening is a structured way to simultaneously evaluate many ideas and identify those with the greatest potential for success. Idea screening can be used to evaluate vastly different ideas, or ideas within a certain category. This process can also be used to identify general areas where consumer needs are not currently being met. Unlike concept testing (which evaluates more fully developed advertising concepts with detailed language, images, pricing, or other marketing mix variables), idea screening focuses on the appeal of basic ideas using a limited set of measures. This allows researchers to quickly see which ideas warrant further investment and which do not.

  • For startups, idea screening offers a way to prioritize options, ensuring that those with broader appeal move forward. For example, a personal care start-up might have developed a patented eco-friendly formula. This ingredient could be used in many categories (e.g., hand lotion, hair care, personal hygiene). Screening multiple category ideas, each based on this novel ingredient, is a good use case for idea screening. The results can then be used to guide both R&D and marketing strategy before additional investment in prototypes or packaging is needed.
  • Established brands, on the other hand, might use idea screening to extend product lines or expand into adjacent markets. In these cases, alignment with an existing brand’s equity (also called “brand fit”) will be important. For example, a global beverage brand considering a low-sugar line product can screen ideas to see which flavors, packaging, or positioning best align with current brand perceptions.

How Do I Design an Effective Idea Screening Process?

As a qualitative researcher, you already have good visibility into the new product development process. After all, you conducted focus groups or in-depth interviews that identified possible new product opportunities. But what should the client do next if there are multiple competing ideas?

First and foremost, introduce the idea of screening those competing ideas with a more systematic approach, especially one that relies on consumer or buyer input, to help the company prioritize new product development efforts.

Here are four foundational elements to get your idea screening journey off the ground:

  1. Obtain Management Buy-In

Whether you are dealing with a “solopreneur,” a start-up, or a mid-to-large organization, management must agree that this approach will be used to identify new product priority areas. By “buy-in” we mean that management agrees to the process and agrees to deploy the needed resources for product development once the results come in. Absent top management support, the best ideas can be overlooked or deprioritized, leading to missed opportunities.

  2. Define Your Success Criteria

Establishing clear objectives is critical to the success of an idea screening process. Start by defining what success will look like. For example, do you want to find ideas with the highest purchase interest, or is it more important to find distinct ideas that fill a gap in the market? A startup might prioritize broad consumer appeal, while an established brand might focus on ideas that align with its image. Setting “go/no-go” criteria based on benchmarks or previously tested ideas can also ensure that a winner is defined properly and passes basic performance thresholds.

  3. Standardized Stimuli

In idea screening, the stimuli must be descriptive but also very simple. At the early idea stage, ideas can be thought of as “nuggets,” with just enough information to convey the idea without excess detail. A brief written description with (or without) a visual representation will suffice. The goal is to get a consumer to focus on the product’s main attributes. All ideas, however, must include an end benefit. That is, what will the buyer receive from the product or service they just read about? Remember, we are not screening two or three ideas—we are often screening ten, twenty, or more.

Here are some very simple hypothetical examples of idea “nuggets” that could be put into an idea screening research study:

  • The Energy You Need, Naturally. Powered by 100% plant-based ingredients for a clean energy boost, this energy bar keeps you going throughout the day without the jitters or a crash.
  • The Cleaner That Does It All. This multi-surface cleaner is safe for wood, glass, and countertops, simplifying your cleaning routine with one product for every room.
  • Stay Cool Anywhere, Anytime. This portable fan has a rechargeable battery and a sleek, compact design, providing refreshing, on-the-go cooling that lasts for days.

Concise, focused stimuli will keep the evaluation clean, consistent, and reliable. Depending on the total number of ideas in your study, each idea can include more detail, but clarity is key: include enough detail to explain the idea, but not so much that you confuse respondents, slow things down, or skew the results. Generally, price is not included in idea screening, because the goal is to screen the basic premise of the idea before setting a price.

  4. Consistent Sample Definition & Statistical Precision

Locking in the appropriate sample definition is essential for obtaining reliable results across ideas and, importantly, over time if your approach will be used on an ongoing basis. For broader-appeal products, a general population sample is appropriate (i.e., balanced on males/females, ages 21-64, and geographically representative). For niche ideas, such as a high-performance running shoe or a nutritional supplement, a “booster sample” may be needed (i.e., ensuring that there are large enough samples of important subgroups, such as runners or people who take B vitamins). The analysis will only be as strong as the relevance and size of the sample behind it. Although there are no hard rules, we recommend feedback from a sample of at least 250 respondents per idea to strike a balance between precision and cost.

What Are Evaluative vs. Diagnostic Measures?

Including both “evaluative” and “diagnostic” measures allows for a well-rounded analysis that captures consumer appeal and areas for improvement. What are the differences between these types of measures?

Evaluative Measures

Evaluative measures assess performance and represent a person’s intended behavior. Examples of evaluative measures include purchase intent, intended frequency of use, and expected use occasions. These measures represent what the respondent intends to do based on the stimulus you have shown them. Although evaluative measures are critical to include because they assess performance (i.e., they address “how big is the opportunity?”), they don’t necessarily explain why one idea performs better than another. For that, we also recommend including “diagnostic” measures.

Diagnostic Measures

Diagnostic measures explain why consumers feel a certain way about an idea, and help to identify strengths and weaknesses. Diagnostic measures can be used to enhance or refine a promising idea that may be missing some key support points. For instance, an idea with high purchase intent but low distinctiveness (uniqueness) might indicate that a large opportunity exists, but that its core selling proposition may not be defendable against competitors. Open-ended verbatim comments and exploratory questions (while certainly valuable in later stages) should generally be avoided in idea screening because of the extra time they require.

What Measures Should I Use?

Evaluating an idea’s potential appeal requires us to focus on a set of key measures that highlight interest, differentiation, and benefits. Here is a breakdown of 10 key metrics we recommend in idea screening:

  1. Purchase Intent

Purchase intent assesses whether consumers would buy the product based solely on its description. This metric is a relatively strong predictor of potential demand and in-market performance, and serves as the primary evaluative measure. This question is typically asked as “How likely would you be to buy this product assuming that it was available where you shop and sold for a reasonable price?”

  2. Distinctiveness

Distinctiveness (also sometimes asked as “uniqueness” or “new and different”) measures whether an idea stands out from competitors. For startups, distinctiveness can be a make-or-break factor: a new beverage brand entering a crowded market, for instance, needs to differentiate itself on distinctive features, such as a high caffeine content (e.g., Red Bull) or innovative packaging. This question is typically asked as “How unique or different do you consider this product to be when compared to other products currently on the market?”

  3. Relevance

This measure evaluates how well the idea aligns with consumer needs and values. A relevance question would use an agreement scale with phrasing such as “This is a brand for someone like me.” For new brands, relevance is critical in ensuring that a product dovetails with the intended buyer’s mindset or philosophy. For example, a skincare company might focus on ideas that cater to eco-conscious consumers’ desire for all-natural ingredients, aligning brand identity with buyer values.

  4. Replacement vs. Addition

This metric identifies whether the idea replaces an existing product or serves as an additional purchase. This measure also reveals the consumer’s expected usage patterns for the product. This question is typically phrased as “Would you expect to use this product to replace an existing product, or would you use it in addition to products that you currently use?” A replacement product suggests that it meets current needs but offers improved features, performance, or convenience. A product used in addition to other brands may indicate a niche product, special use occasions, or perhaps that it fulfills both existing and untapped needs.

  5. Household Members Who Could Use the Product

Broad household appeal can increase purchase potential. This question is typically asked as “Who in your household, including yourself, is most likely to use this product?” For example, a company making family-friendly meal kits would want to know who in the household sees themselves consuming the product. All things being equal, a product with broad household use would receive a higher rank than products with narrow household use.

  6. Problem-Solving Ability

Problem-solving ability assesses whether the idea effectively addresses a specific pain point. A tech startup might test whether its productivity app helps users balance work and personal tasks, providing a clear, targeted solution that could enhance consumer interest. This question can be phrased as “Does this product solve a particular problem for you that other products currently do not?”

  7. Frequency of Use

This measure captures the anticipated frequency of use. This is typically asked as “How often do you think you would use the product you just read about?” followed by a frequency scale. Products that consumers expect to use more often tend to generate stronger appeal and justify their value more effectively. Breadth of use (above) is combined with frequency of use to assess overall use opportunities. A product can succeed through broad but infrequent use, or through narrow but heavier use.

  8. Anticipated Use Occasions

Anticipated use assesses the versatility of the idea across different use occasions. This is typically asked as “Which of the following occasions would you be likely to use the product you just read about?” Depending on the category, we would want to explore how consumers will use the product—for example, in a daily routine, or just for special occasions. By identifying potential applications, companies gain insights into the product’s relevance and marketability across different consumer needs and contexts.

  9. Optional: Brand Fit (Assumes a Parent Brand)

When testing new product ideas under an existing brand name, brand fit is important to know. Line extensions should not dilute or damage an existing brand’s equity. For example, a luxury fashion brand might want to explore wellness ideas to complement its premium image. Assessing brand fit means that we want to understand how well the brand extension aligns with the parent brand’s image and reputation. Conversely, some brands can compete in multiple categories if the company’s brand equity extends that far (e.g., Honda cars, generators, and robotics).

  10. Optional: Value for the Money (Can Be Asked With or Without Price Being Shown)

Value perceptions weigh a product’s perceived benefits against its price. This is especially relevant in competitive categories or among price-sensitive consumers. A startup offering premium organic snacks, for instance, might test whether consumers find the product’s higher price justified by its perceived health benefits. Asking a value-for-the-money question without showing a price is generally not recommended, so this question would typically be included only when pricing is stated. Alternatively, a question about expected price could be considered.

Scoring of Concepts

While all of the above measures are important, the three most important measures are purchase intent, uniqueness (distinctiveness), and anticipated frequency of use. Together, these measures capture the essence of each idea’s performance. When many ideas are being considered, you may want to use some form of scoring to identify those that show more relative strength. To accomplish this, assign weights to each measure of interest and calculate an aggregate score. Limiting this approach to the three core measures mentioned, a possible scheme would assign a weight of 0.6 to purchase intent, 0.2 to uniqueness (distinctiveness), and 0.2 to anticipated frequency of use. The result can be calibrated to a 0-100 point scale, and a relative ranking then calculated.
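
To make the arithmetic concrete, here is a minimal sketch of this weighted-scoring approach in Python. The idea names and measure scores are hypothetical, and the weights mirror the example above; your own weights should reflect your client’s success criteria.

```python
# A minimal sketch of weighted idea scoring. Assumes each measure has
# already been summarized on a 0-100 scale (e.g., % top-two-box), so the
# weighted sum is also on a 0-100 scale. All idea data are hypothetical.

WEIGHTS = {"purchase_intent": 0.6, "distinctiveness": 0.2, "frequency_of_use": 0.2}

ideas = {
    "Plant-Based Energy Bar": {"purchase_intent": 62, "distinctiveness": 48, "frequency_of_use": 55},
    "All-Surface Cleaner":    {"purchase_intent": 71, "distinctiveness": 33, "frequency_of_use": 64},
    "Portable Fan":           {"purchase_intent": 54, "distinctiveness": 59, "frequency_of_use": 38},
}

def aggregate_score(measures: dict) -> float:
    """Weighted sum of 0-100 measure scores."""
    return sum(WEIGHTS[name] * score for name, score in measures.items())

# Rank ideas from strongest to weakest aggregate score
ranked = sorted(ideas.items(), key=lambda kv: aggregate_score(kv[1]), reverse=True)
for rank, (name, measures) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {aggregate_score(measures):.1f}")
```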

Summary

Qualitative researchers can play a critical role in helping their clients identify new product ideas by designing and conducting idea screening. When strategically designed, idea screening provides a systematic data-driven approach to identifying promising new product ideas. By combining qualitative insights with quantitative measures and benchmarks, companies can prioritize the ideas with the highest potential. From there they can refine their innovation pipeline and optimize resource allocation. This approach can be used by startups and established brands alike to make smarter, more efficient product development decisions.

Essential Metrics in Idea Screening

Measure | Type | Description/Purpose
1. Purchase Intent | Evaluative | Evaluates the likelihood that consumers would buy the product based solely on its description
2. Uniqueness (Distinctiveness) | Evaluative | Measures how much the idea stands out from existing products in the market
3. Relevance | Diagnostic | Evaluates how well the idea aligns with consumer needs, preferences, and values
4. Replacement vs. Addition | Diagnostic | Identifies whether the product will replace an existing item or be an additional purchase
5. Household Members Who Could Use | Diagnostic | Assesses whether various members of a household could use the product, expanding its appeal
6. Problem-Solving Ability | Diagnostic | Evaluates whether the idea effectively addresses a specific pain point or unmet need
7. Frequency of Use | Evaluative | Measures the frequency with which consumers expect to use the product, reflecting its volumetric contribution
8. Anticipated Use Occasions | Diagnostic | Evaluates how many different uses consumers foresee for the product, indicating its versatility
9. Brand Fit (Optional) | Diagnostic | Assesses whether the idea aligns with the established brand image and reputation
10. Value for the Money (Optional) | Diagnostic | Determines if consumers perceive the product’s benefits as worth its cost

Navigating Research Bias: From Problem Definition to Analysis

If people suspect bias in a research study, it leads to a lack of confidence in the results. Unfortunately, research bias exists in many forms and in more ways than we would like to admit. In research, bias refers to any factor that systematically distorts data, results, or conclusions. As researchers, our job is not only to understand bias, but also to find ways to control or offset it.

After all, it is the negative impact of bias that we most want to avoid, such as inaccurate conclusions or misleading insights. Additionally, bias is not limited to any one specific research approach. Sources of bias exist in quantitative and qualitative research and can occur at any stage of the research process, from initial problem definition to making client recommendations. Each of us brings our unique perspective based on personal experiences, and what we believe to be true or not. The insidious thing about bias is that we won’t always anticipate it, nor will we recognize it even as it is happening!

This article explores some of the more well-known types of bias that commonly arise in marketing research studies and offers suggestions on how you can avoid these errors. We also provide a reference table that lists these biases, summarizes common errors, and provides ways to avoid the negative impact of bias in your research practice. Let’s take a look at some of the more common forms of bias, from initial research design to analysis and interpretation.

Bias in Problem Definition

The foundation of any research project starts with problem definition and the approach you will take to address it. This is the first point at which bias can creep in. A research problem framed too narrowly, or based on preconceived assumptions, can lead to a study that simply validates pre-existing (and potentially wrong) beliefs rather than uncovering new insights.

Incorrect assumptions about the source of a marketing problem are one form of bias. For example, a company’s sales team believed that weak sales of a flagship brand were due to a high price point. This belief led to promotional campaign testing instead of a more broadly defined brand image study. Had the appropriate research been conducted, management would have learned that the brand actually represented a great value, but that consumers knew little about the value story. The company spent time and money solving the wrong problem based on an incorrect assumption.

Over-reliance on industry trends can also skew problem definition. For example, a company’s management team believed that future marketing trends supported more spending on social media influencers, and the company spent heavily with several of them. A post-campaign analysis showed that the extra spending never reached their more broadly defined audience and did not fully capture market diversity.

To mitigate this, it is crucial to:

  • Engage diverse stakeholders who can challenge your assumptions.
  • Conduct exploratory research to broaden understanding and build hypotheses.
  • Frame your problem definition around the consumer’s mindset, not preconceived beliefs.

Bias in Question Construction

Once the problem is defined, another potential source of bias is in question construction. Subtle changes in phrasing can lead to drastically different responses, and leading questions can direct respondents towards a particular answer. Asking “Wouldn’t you agree that our customer service is exceptional?” encourages a positive and biased response and downplays negative feedback. This is often seen in satisfaction surveys that impact employee compensation, where the recipient of the feedback is coaching the respondent to provide good ratings (i.e., “Please rate us highly!”).

Loaded questions, which embed assumptions, are similarly problematic. For example, asking, “How much do you spend on luxury skincare products each month?” presumes that the respondent purchases products in the luxury skincare category, which can alienate respondents or overstate data.

Double-barreled questions (i.e., two questions in one) are also a common error. For example, “How satisfied are you with our product quality and customer service?” Mixing two separate aspects in one question can confuse respondents and lead to unclear data, as they may have different opinions of the two aspects.

To avoid these errors:

  • Use neutral language in all questions.
  • Pilot test surveys to identify unintended biases.
  • Ensure each question focuses on a single issue, avoiding double-barreled phrasing.

Sample Selection and Coverage Bias

Bias in sample selection or coverage areas can severely distort the findings of marketing research. Sampling bias occurs when certain groups are over- or underrepresented in screening, sample definition, or geographic area. For instance, focusing a survey solely on urban customers may overlook rural or suburban buyers, leading to unrepresentative results for a national campaign.

Similarly, self-selection bias can occur when individuals who choose to participate are meaningfully different from those who do not. Customers with strong positive or negative opinions may be more likely to respond, while those with neutral opinions might remain silent, skewing the results toward the extremes. Incentives may increase participation, but also have the potential to attract only those who are motivated by financial rewards.

One approach to overcoming coverage and sampling bias is to set quotas for specific subgroups and then weight the subgroups to their proper proportions. There is a risk, however, of over- or under-weighting some groups at the expense of others.
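
As an illustration, here is a minimal sketch of quota-based (post-stratification) weighting in Python. The population targets and achieved sample counts are assumed for illustration; weights far from 1.0 flag the over- or under-weighting risk just described.

```python
# A minimal sketch of post-stratification weighting. Population targets
# and achieved sample counts are assumed for illustration.

population = {"urban": 0.55, "suburban": 0.30, "rural": 0.15}   # target proportions
sample_counts = {"urban": 420, "suburban": 130, "rural": 50}    # achieved interviews

n_total = sum(sample_counts.values())
for group, count in sample_counts.items():
    weight = population[group] / (count / n_total)   # target share / achieved share
    print(f"{group}: weight = {weight:.2f}")
# Weights far from 1.0 (here, rural = 1.80) mean heavier reliance on
# fewer respondents, which is the over-weighting risk noted above.
```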

Addressing these issues requires:

  • Random sampling to ensure all relevant groups are included.
  • Alternatively, stratified sampling to ensure proportional representation of key subgroups.
  • Considering demographic and geographic coverage to prevent skewed results.

Avoiding Cultural and Sexual Bias

Cultural and sexual bias is an often-overlooked issue in survey design. Surveys that fail to acknowledge cultural differences, or that make assumptions about gender and sexuality, can alienate respondents and result in inaccurate or incomplete data. For example, questions that assume traditional family structures or fail to recognize non-binary gender identities can exclude a significant portion of respondents and introduce bias into the research.

Common errors in this area include using language that assumes heteronormative relationships, or ignoring cultural differences in lifestyle, values, or product use. A question about alcohol consumption, for example, may not be relevant in cultures where alcohol is forbidden or stigmatized. Questions about childcare or child rearing should not automatically be directed to a female in the household, just as questions about home maintenance should not always be directed to a male in the household.

Over-generalizing cultural experiences or stereotyping is another area to be aware of, and perhaps requires us to be the most self-aware in terms of how we think and phrase questions. For instance, assuming that all respondents from a particular ethnic background share the same cultural preferences, buying behaviors, or brand choices is a bias that can distort our research.

To avoid cultural and sexual bias, researchers should:

  • Use inclusive, neutral language that does not assume a specific gender or sexual orientation.
  • Be sensitive to cultural differences, avoiding questions that may be irrelevant or offensive to certain groups.
  • Ensure that surveys are tested on diverse groups before widespread deployment to identify potential biases in question phrasing.

Context and Order Bias

Context bias occurs when responses to a question are influenced by earlier questions or stimuli during the interview. If a respondent is first asked how satisfied they are with customer service, their response to a later question about overall brand satisfaction may be disproportionately influenced by that prior focus on customer service. Similarly, asking an overall satisfaction question before asking about attributes or other detailed items can create a biasing effect, hence proper question sequencing can avoid these traps.

Order bias is a specific form of context bias in which the order of questions or response options affects how respondents answer. This can often be mitigated by rotation of question blocks or specific attributes within a ratings section. A primacy effect occurs when respondents favor the first option presented; a recency effect occurs when respondents are persuaded by the last option shown. For example, respondents who are shown too many options may disproportionately choose the first or last one, depending on how the items are presented. In the same way, concepts shown first are typically rated higher than those shown in later positions.

To minimize the impact of context and order bias:

  • Randomize the order of questions and response options within a ratings section, unless the options are intended to be chronological.
  • Group related questions to limit the influence of unrelated preceding questions.
  • Use balanced framing to avoid highlighting specific attributes over others.
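
As a sketch of the first point above, per-respondent randomization can be implemented with a seeded shuffle so that each respondent sees a stable but different attribute order. The attribute list here is hypothetical.

```python
# A minimal sketch of per-respondent attribute rotation. Seeding the
# shuffle with the respondent ID gives each respondent a reproducible
# but different order across the sample.
import random

ATTRIBUTES = ["Taste", "Price", "Packaging", "Availability", "Brand trust"]

def rotated_order(respondent_id: int) -> list:
    rng = random.Random(respondent_id)   # stable order per respondent
    order = ATTRIBUTES[:]                # copy; the master list stays intact
    rng.shuffle(order)
    return order

print(rotated_order(1001))
print(rotated_order(1002))   # a different respondent sees a different order
```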

Summary

Bias is everywhere and cannot be fully eliminated. But when we are aware of the forms it can take, we can also implement ways to mitigate its potential negative impact. Obviously, the most profound negative impact of bias is that we simply get it wrong—in other words, we either come to an incorrect conclusion, or worse, one that is damaging to our client’s business.

As researchers, it is our responsibility to identify, and be on the lookout for, the negative effects of bias. Don’t be afraid to challenge pre-existing approaches or entrenched ways of thinking. From ensuring good research design and representative sample selection to constructing neutral, balanced questions, researchers have many tools to reduce bias at every stage of the research process!

Table of Bias Types, Common Errors, and Ways to Avoid Them

Below is a summary table of the biases discussed in this article, their definitions, common errors, and ways to avoid them.

Type of Bias | Definition | Common Errors | Ways to Avoid
Bias in Problem Definition | Occurs when research problems are framed too narrowly or based on preconceived assumptions | Relying too much on past trends or assumptions without exploring broader trends | Conduct exploratory research, challenge assumptions, and involve diverse stakeholders in problem framing
Bias in Question Construction | Bias introduced by leading, loaded, or double-barreled questions | Using questions that assume an answer, contain multiple parts, or suggest a response | Use neutral phrasing, test questions for bias, and focus on a single topic per question
Context Bias | When responses to a question are influenced by the context set by prior questions or stimuli | Previous questions or attributes overly influence subsequent responses | Group related questions to avoid unnecessary influence
Order Bias | When the order in which questions or options are presented influences how respondents answer | Respondents disproportionately choose options based on their position in the list | Randomize the order of options and questions within rankings to reduce primacy and recency effects
Sampling Bias | Occurs when certain groups are over- or underrepresented in the sample | Over-sampling or under-sampling certain populations, leading to unrepresentative data | Use random or stratified sampling to ensure the sample reflects the population
Self-Selection Bias | When individuals who choose to participate differ significantly from those who do not | Highly-opinionated individuals respond more frequently than neutral or indifferent ones | Randomly invite participants and ensure balanced representation of the population
Coverage Bias | Bias introduced when the sample doesn’t cover the target population adequately (geographically or demographically) | Over-sampling one group or geographic area while under-representing others | Ensure the sample reflects geographic and demographic diversity
Non-Response Bias | Occurs when a significant portion of the selected sample does not participate, leading to skewed results | Only highly-motivated or opinionated participants respond, leading to skewed results | Use incentives, send follow-up reminders, and apply weighting cautiously
Confirmation Bias | Focusing on data that confirms existing beliefs while ignoring contradictory data | Only analyzing data that supports the researcher’s hypothesis | Develop an analysis plan before data collection and involve multiple reviewers
Analytical Method Bias | Bias introduced when selecting analytical methods that emphasize desired outcomes | Choosing statistical methods that only show significant or favorable results | Use a variety of analytical methods and check for consistency across different approaches
Cultural and Sexual Bias | Bias introduced by using language or assumptions that exclude certain gender identities or cultural practices | Assuming heteronormative relationships or ignoring cultural diversity in lifestyle and preferences | Use inclusive language, be sensitive to cultural differences, and test surveys on diverse groups before broad deployment to identify potential bias in phrasing

Design Considerations in Quantitative Research

Welcome to QRCA VIEWS’ inaugural Quant Corner column focusing on the many different aspects of quantitative marketing research. This column is intended to provide accessible, easily digestible explanations of quantitative methods—but geared to you, the qualitative researcher. The idea for this column originally sprang from a seminar I gave on Quantitative Methods for Qualitative Researchers at the January 2024 QRCA Annual Conference in Denver. That seminar (attended by an enthusiastic group of qualitative researchers) generated a lot of interest—and questions—about how to conceptualize and execute quantitative research properly.

Equally important, many QRCA members are also full-service research consultants for their clients. You have probably been working with many of your clients for years, and they rely on your expertise to address a wide range of marketing issues by identifying new insights. As a result, you may find yourself responsible not only for qualitative projects, but also for hybrid (i.e., “mixed methods”) research paths that involve both a qualitative and quantitative phase. Or, perhaps in the future you would like to do more of these kinds of projects to expand your skill set.

Well, fear not! You already have the analytical skills needed to design, gather, and synthesize qualitative data from both individuals and small groups. Conducting larger-scale quantitative projects simply extends your skills to larger target populations and user segments. That’s why you will benefit from understanding the principles behind determining the right design and appropriate sample sizes for a variety of quantitative research studies. With an ever-growing array of DIY, neuroscience, and AI-supported platforms, your ability to offer well-reasoned research approaches to your clients will be appreciated and valued.

In this issue, we’ll tackle some basic challenges: determining the right sample size for your quantitative projects, understanding the role of statistical confidence, and a bit of experimental design. We will also explore the trade-offs between precision and cost and provide guidelines for determining when sample sizes are “close enough.” We’ll cover some typical use cases such as general opinion studies, attitude and usage studies, segmentation research, idea screening, and concept testing.

Qualitative or Quantitative: Do We Care?

For simplicity, our research design and sample size guidance apply primarily to what is known as “quantitative primary research.” Primary research typically means a proprietary survey commissioned by a specific client and designed to address business or marketing issues for that client. This is in contrast to “secondary” or “syndicated” research, which are audits or surveys that are executed once and then shared by multiple clients (e.g., an “omnibus study”).

Primary research can be further distilled into “exploratory” designs (i.e., smaller samples, with more flexible data collection) and “confirmatory” or “evaluative” designs (i.e., larger samples, with more structured data collection). The term “qualitative” is typically associated with exploratory research, while “quantitative” is typically associated with confirmatory research—but there are exceptions. For example, large-scale quantitative studies are often conducted to explore new market opportunities, such as to discover unmet needs or identify segments. Conversely, in product testing, there may be a small number of highly-trained sensory experts who explore and evaluate new products using comprehensive and precise numeric responses. The line between qualitative and quantitative can sometimes be blurry.

In addition to surveys on business or marketing topics, there are other types of survey research that share the same characteristics, in that they require certain levels of precision, which in turn drive sample size, statistical confidence, and cost. These include surveys of public opinion, political polling, and consumer or business sentiment (e.g., monthly surveys of consumer confidence). While these examples may fall outside a pure primary research definition, they still require the same thought process and approach needed in most proprietary studies.

Importantly, as researchers we are not in the cookie-cutter business: every client and every business or marketing problem is different. It will be up to you to recommend the path forward that makes the most sense for your client.

Setting Research Objectives to Guide Study Design

Problem definition is the most important piece in mapping out your quantitative marketing research plan. All the parameters of your quantitative design will flow from this, including the overall design, sample specifications, required sample sizes, and the precision needed for business decision-making. Pay close attention to the problem you are being asked to solve. You will typically see the historical context for the project in a “Background” or “Overview” section of the research request. Most research studies are initiated by a series of client events that led to the current opportunity. The Background section should provide good context and guidance about the specific issues being faced by the client.

What Kind of Study Will You Need?

Keep an eye out for action standards. An action standard is a “go”/”no go” decision rule linked to your client’s objectives and is based on questions in the survey you ultimately execute. So, if there is an explicit “action standard” associated with the research, you’ll need to shift your focus toward a research design that assesses reactions to ideas, concepts, products, or positionings. For example, a new product idea might have to exceed a known benchmark on purchase interest for the R&D team to proceed to the next phase of product development. If your research shows that the product exceeds the benchmark, the next phase is a “go.”
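
As an illustration of how an action standard might be checked statistically, here is a minimal sketch in Python using a one-sided z-test for a proportion. The benchmark, sample size, and top-two-box count are assumed for illustration, not taken from any actual study.

```python
# A minimal sketch of an action-standard check: does observed top-two-box
# purchase intent significantly exceed a benchmark? All numbers here are
# hypothetical.
import math

def exceeds_benchmark(successes: int, n: int, benchmark: float,
                      z_crit: float = 1.645) -> bool:
    """One-sided test (~95% confidence) that the true proportion > benchmark."""
    p_hat = successes / n
    se = math.sqrt(benchmark * (1 - benchmark) / n)   # SE under the benchmark
    return (p_hat - benchmark) / se > z_crit

# 168 of 250 respondents (67.2%) in the top two box vs. a 60% benchmark
print("go" if exceeds_benchmark(168, 250, 0.60) else "no go")   # -> "go"
```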

Conversely, not all research has or needs an action standard. If the research being requested is largely descriptive or diagnostic in nature, an action standard is not warranted. Perhaps your client needs to identify possible opportunities in a new geographic market or discover new and untapped segments. In this case the research, while quantitative, is primarily exploratory, with the intent to understand and learn about a category, products, or users—yet a large sample may be required.

Large exploratory studies might involve understanding consumer needs, desired end-benefits, or buyer behavior (i.e., brands x users x usage situations). Some research may focus on identifying opportunities where new products might be developed (i.e., “white space” opportunities). Other studies may focus solely on whether a brand’s equity can be extended into other (closely or distantly related) categories that share the same imagery or values.

Use Cases: Strategic Research

Large strategic studies—such as market entry strategies, market structure, or brand assessment and repositioning research—aim to inform significant business decisions. These studies require high precision and reliability, typically necessitating larger sample sizes. For large, nationally representative market studies that will be used to identify opportunities as well as key subgroups for analysis (such as identifying unique segments), you would be well served to recommend large samples: 1,200-1,500 respondents is typical, and samples may go as high as several thousand depending on the need for geographic or demographic coverage. This extends not only to demographic subgroups, but also to hypothesized segments or other groups of interest to the client. Even large samples can quickly dwindle once they are split into many subgroups for analysis. For example, if your client assumes that there are five segments in a 1,200-respondent survey, there might be only 240 respondents per segment (assuming that they are evenly distributed).

Examples of large-scale strategic studies might include:

  • A global consumer company is considering entry into a new geographic region. Little is known about this market, and there is a significant need to understand market potential, consumer preferences, the competitive landscape, and optimal entry strategies. The client hypothesizes that at least two segments might be receptive targets for its brands.
  • A well-established U.S. packaged goods company is considering repositioning one of its mature products to target a younger demographic. The company needs to understand current brand perceptions, identify new positioning opportunities, and evaluate the potential impact on sales and market share among a younger age target.
  • An awareness and usage (A&U) study is needed to explore consumer behaviors, preferences, and attitudes in a category where the company does not currently compete. The results of the A&U will be used to understand how consumers interact with competitive products and services, their satisfaction levels, and potential areas for improvement before an acquisition is made.

Use Cases: Evaluative Research to Screen Ideas or Test Concepts

Idea screening and concept testing are essential stages in product development, where potential ideas or concepts are evaluated for feasibility and appeal so that a company’s new product pipeline can stay full. Much like market studies, these studies often require larger samples, while minimizing respondent burden by having each respondent see only a subset of the full complement of concepts under consideration. In these scenarios, “partial factorial” designs make efficient use of sample (i.e., showing each respondent 5 out of a possible 25 ideas; a simple assignment sketch follows the list below). This allows a client to gather solid general demographic information from the larger sample while getting adequate (or “good enough”) evaluations and diagnostic feedback on each concept. Use cases for idea screening and concept evaluation might include:

  • Early-Stage Screening: Initial screenings can use small-to-large sample sizes to identify promising idea “kernels” efficiently. These studies prioritize speed and cost-efficiency over depth of evaluation. They are also often followed by a qualitative phase to gather more insight.
  • Validation Stage: As ideas progress, larger and more representative samples are needed to validate findings and ensure that the concepts resonate with the target market. At a minimum, samples of 150–250 per concept are commonly used in these designs.
  • Iterative Testing: Multiple rounds of testing may be required, with sample sizes adjusted based on the stage of development and the precision needed. In these cases, action standards are routinely used.
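
As promised above, here is a minimal sketch of a partial factorial ("least-fill") assignment, assuming 25 ideas with 5 shown per respondent. Real fielding systems add quota and order controls, but the balancing logic looks roughly like this:

```python
# A minimal sketch of a partial factorial assignment: the least-exposed
# ideas are assigned first, so exposure counts stay balanced across the
# sample. All parameters are illustrative.
import random

NUM_IDEAS, IDEAS_PER_RESPONDENT = 25, 5
exposures = {idea: 0 for idea in range(1, NUM_IDEAS + 1)}
rng = random.Random(42)

def assign_ideas() -> list:
    """Pick the least-exposed ideas, breaking ties at random."""
    order = sorted(exposures, key=lambda idea: (exposures[idea], rng.random()))
    chosen = order[:IDEAS_PER_RESPONDENT]
    for idea in chosen:
        exposures[idea] += 1
    return chosen

for respondent in range(500):        # 500 respondents x 5 ideas = 2,500 reads
    assign_ideas()
print(min(exposures.values()), max(exposures.values()))   # 100 100 -> balanced
```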

Figure 1 is a handy reference table that provides guidance on common research studies and appropriate sample sizes.

Figure 1. Sample Size Comparisons

RESEARCH DESIGNS | COMMON SAMPLE SIZES | COMMON USE CASES | COMMON ANALYSES
Large Strategic Studies | 1,500 – 3,000 (or more, depending upon geography and scope) | Market entry strategies, awareness and usage, market structure, segmentation, brand repositioning | Descriptive statistics, cross-tabulations, segmentation analysis, perceptual mapping, driver analysis
Market Segmentation Studies | 1,500 – 2,500 (or more, depending upon geography and scope) | Identifying market segments, targeting strategies, new product opportunities, new segments | Segmentation analysis, perceptual mapping, driver analysis
Attitude and Usage Studies | 400 – 2,500 (depending upon the number of categories being evaluated) | Understanding consumer behavior, attitudes toward products | Descriptive statistics, cross-tabulations, regression analysis
Idea Screening | 150 – 500 evaluations per idea | Evaluating multiple concepts, initial product ideas, identifying opportunities for further development | Frequency analysis, cross-tabulations, driver analysis
Customer Satisfaction Surveys | 500 – 2,000 (either individual studies or per wave) | Measuring customer satisfaction, service quality, opportunities for improvement | Net Promoter Score (NPS), factor analysis, correlation analysis, driver analysis
Concept/Product Testing | 150 – 500 evaluations per concept | Testing new product concepts, packaging designs | Statistical tests between subgroups, perceptual mapping, conjoint analysis
Communications/Messaging Testing | 300 – 600 evaluations per concept | Evaluating advertising messages, marketing communications | A/B testing, sentiment analysis, multivariate regression

A good rule of thumb: we typically strive for a margin of error of ±5% or less at 90% confidence.

The Precision vs. Cost Trade-Off

Higher confidence levels require larger sample sizes to ensure that the results are reliable and projectable to the larger population. This is because larger samples reduce the margin of error (MoE). However, as can be seen in Figure 2, as the sample size increases, the precision of the estimate improves—but the incremental statistical benefit of using larger and larger sample sizes declines rapidly. For example, to double the precision from a sample of 100 (with a ±9.8% MoE at 95% confidence), we would actually have to quadruple the sample to 400 (±4.9% MoE at 95% confidence). This is why most national public opinion polls target a relatively precise ±3% margin of error, which requires a sample of approximately 1,200 respondents. To double this precision, a polling company would require 4,800 respondents—at quadruple the cost! 
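
The arithmetic behind these examples is the standard margin-of-error formula at the conservative p = 0.5. A minimal sketch, assuming simple random sampling with no design effects:

```python
# Margin of error at the conservative p = 0.5:
#   MoE = z * sqrt(p * (1 - p) / n)
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Two-sided margin of error (as a proportion), ~95% confidence by default."""
    return z * math.sqrt(p * (1 - p) / n)

def required_n(moe: float, z: float = 1.96, p: float = 0.5) -> int:
    """Sample size needed to hit a target margin of error."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

print(f"n=100: ±{margin_of_error(100):.1%}")   # ~ ±9.8%
print(f"n=400: ±{margin_of_error(400):.1%}")   # ~ ±4.9%: 4x the sample, 2x the precision
print(f"±3% needs n ≈ {required_n(0.03)}")     # ~ 1,068; the ~1,200 cited above adds headroom
```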

Figure 2. Sample Size & Cost

The trade-off between sample size, statistical confidence, and costs forces researchers to balance the need for higher confidence levels and precision with the practical constraints of time and budget. Selecting a sample size that provides an acceptable level of confidence while meeting the client’s specific research objectives should always be your goal.

Sample Source & Data Quality Considerations

Excluding customer databases, the vast majority of consumer and B2B research studies today are conducted using online “opt-in” panel sources. There are many reputable providers of online respondents, including those who acquire sample from multiple sources and are effectively sample “aggregators.” While a detailed discussion of online panel data quality falls outside the scope of this article, my advice is to work with reputable companies with a proven track record in the research community.

Most sample providers are diligent about identifying duplicate or suspicious responses and perform identity verification steps—but please conduct your own data validation steps, too. You can limit suboptimal behavior by reviewing open-ended responses, asking questions to catch inattentive respondents, identifying speeders, and asking opposing attributes in grids.
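
As one illustration, here is a minimal sketch of two such checks (flagging speeders and inconsistent grid responses), assuming a simplified record layout; the thresholds are illustrative and would be tuned to your survey.

```python
# A minimal sketch of two data-quality flags: completion time in seconds,
# plus a pair of deliberately opposing grid items rated 1-5.

MEDIAN_SECONDS = 90   # assumed median completion time for this survey

records = [
    {"id": 1, "seconds": 95, "grid_pos": 5, "grid_neg": 1},   # consistent
    {"id": 2, "seconds": 38, "grid_pos": 5, "grid_neg": 5},   # speeder + straight-liner
]

def flags(rec: dict) -> list:
    out = []
    if rec["seconds"] < MEDIAN_SECONDS / 2:            # common "speeder" heuristic
        out.append("speeder")
    if rec["grid_pos"] >= 4 and rec["grid_neg"] >= 4:  # agreed with opposing items
        out.append("inconsistent_grid")
    return out

for rec in records:
    print(rec["id"], flags(rec))
```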

Just remember that biases exist in all forms of research and the samples used to conduct them: some are obvious (e.g., convenience samples) while others are hidden (e.g., non-response bias, biased language, biased context). The more you are aware of the potential sources (and impact) of bias, the better researcher you will be.

Summary

Understanding the relationship between sample size and statistical precision is crucial for qualitative researchers transitioning to quantitative projects. Sample size directly influences the accuracy and reliability of survey results, with larger samples providing greater precision and smaller margins of error. This precision is essential when making significant business decisions, such as market entry strategies or brand repositioning. However, researchers must also consider the trade-offs between costs and precision.

Larger sample sizes, while offering higher confidence levels, increase costs and resource demands. By balancing the need for precise, reliable data with practical budget constraints, researchers can design robust studies that provide actionable insights. Tailoring sample sizes to specific research objectives—whether for large strategic studies, idea screening, product testing, or messaging evaluation—ensures that the findings are both meaningful and cost-effective. This approach enables researchers to make informed decisions, leveraging quantitative data to complement and enhance their qualitative insights.

Are You Marketing to the “Moveable Middle”?

Buyers who have bought your brand – even at a minimal level – are significantly more likely to shift more of their total share of consumption to you. First you must find them, and then target them effectively with incremental ad spending.

A recent study by the Mobile Marketing Association focused on a common consumer category and showed that those with a 20%-80% propensity to buy a brand are much more responsive to incremental advertising. Those living in the “tails” of the buyer distribution are either (a) basically non-responsive, because they buy so little from you, or (b) already so loyal that additional ad spending has no incremental effect. Within this middle band of buyers, the power of your incremental targeted advertising is very strong. The bonus: ad campaigns built around the moveable middle also improved reach.

So who are these buyers in the so-called “moveable middle” of your brand’s overall profile of users? Unlike promotionally-driven “brand switchers,” the moveable middle segment represents buyers at many different levels of past purchase behavior who relate to your brand story. Their responsiveness to advertising grows as their share of category needs approaches the center of this distribution; decreasing marginal returns occur at the tails. The moveable middle holds many more attitudinally receptive, persuadable buyers, as defined by their mid-range probability of buying your brand.
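
To make the banding concrete, here is a minimal sketch that classifies buyers by an estimated propensity to buy (or share of category requirements); the buyer data are hypothetical, and the cutoffs follow the 20%-80% band described above.

```python
# A minimal sketch of banding buyers by estimated propensity to buy.
# All buyer data are hypothetical.

buyers = {"Buyer A": 0.05, "Buyer B": 0.35, "Buyer C": 0.62, "Buyer D": 0.92}

def band(propensity: float) -> str:
    if propensity < 0.20:
        return "low tail: weak response to incremental ads"
    if propensity <= 0.80:
        return "moveable middle: prime target for incremental spend"
    return "loyal tail: little incremental effect"

for buyer, p in buyers.items():
    print(f"{buyer} ({p:.0%}): {band(p)}")
```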

The results of this recent research show that the moveable middle of a company’s product user base can be 5x-10x more responsive to advertising than buyers at the tails. This makes the middle segment of buyers an incredibly fertile area for incremental advertising and marketing dollars, because the return on ad spend (ROAS) is so high. There are several reasons why this is true:

  • Those who are already buying your company’s products or services are not buying all of their category needs solely from you. This volume can be significant, but it is likely hidden from you. This implies that they have much larger volumetric needs than they are directly telling you as one manufacturer.
  • There is a high probability that products in the category have not been effectively differentiated, and benefits or features that are important to buyers have not been captured by you. Additional ad spending highlights those differences and motivates additional purchases. Optimizing your messaging is clearly indicated here, too.
  • Buyers who are already buying 20%-80% of their category needs from you are more receptive to your message. This forces them to exclude other alternatives once your message is received.
  • The moveable middle already has the advantage of familiarity with your brand or product: they know what you can provide. Advertising doesn’t need to work as hard among people who are familiar with you. It doesn’t have to generate awareness; rather, it simply refreshes your brand in consumers’ minds.

Marketing and advertising plans focused on the moveable middle almost always yield better response curves to incremental media spend than dollars spent on reach and frequency. Recent academic research has attempted to dispel this notion, arguing that marketers should target broadly (i.e., all buyers). It is true that buyers continuously enter and exit a category, so on its face this makes some intuitive sense. However, this thought process (a) ignores that buyer entry and exit can be quite slow, and as a result (b) the ROAS of a reach-only plan will be low. It takes significant amounts of time and ad spend to generate even small increases in awareness-to-trial conversion. A company may not have a 3-, 5-, or 10-year window to see if its strategy was effective at bringing in new buyers. By then, the company could be out of business!

Certainly, some ad spending must focus on long-term brand building, but with increased direct and digital relationships with customers (first-party data), it makes no sense to ignore your own ability to target those you know are receptive to your product story. Focusing on the moveable middle is an “outcomes” or “performance-based” marketing concept that complements your longer-term brand building initiatives. Strategic brand marketing is needed to support the core product story and promote user differentiation, but properly targeted media delivery generates much higher reach-to-conversion, and therefore longer-term customer retention. And it has the nice side effect of increasing reach, too.

Caveat: over-targeting or over-promoting can have negative longer-term consequences that diminish responsiveness to additional media spending, so there must be a balance. Exclusively focusing on the moveable middle at the expense of all other initiatives to build brand equity fails to recognize that new buyers do enter the category from sources other than existing buyers. But the evidence shows that focusing on the moveable middle of your own brand’s buyer distribution is a smart and effective way to start.

At Surveys & Forecasts, LLC we have conducted numerous targeted media and research programs that focus on the moveable middle, and have proven the ROAS power of incremental spend. For more information, contact us at info@safllc.com. We look forward to hearing from you!

Improving Acquisition Success in Family Offices

In the acquisition process, revenue metrics are critical to assess business value. However, the need to assess customer health is often overlooked. In customer-facing businesses, this is especially dangerous.

Without a clear “line of sight” into customer satisfaction and retention, an acquiring company (e.g., family office, VC, or angel investor) may overlook evidence that indicates an acquisition may not result in incremental revenue or synergy with existing businesses.

For businesses that are sold using a revenue multiplier, a significant miscalculation can result in a lower exit price. Often, neither family offices nor business owners realize the hidden factors that can negatively impact a sale. As the deal size grows, those risks increase.

Conversely, hidden positive factors can surprise to the upside, and support a significantly better acquisition price or sales story. As part of an acquisition and revenue assessment, looking at the entirety of your customer or user base is essential to understand the real value of the business now and into the future.

Who benefits from a revenue and customer health assessment?

Investors and family offices of every size can benefit, but there are three distinct beneficiaries of a revenue optimization and customer health assessment when a business is bought or sold. They are:

  • The business owner who wants to maximize his or her exit value.
  • The potential buyer (family office, angel investor, or VC) who wants to minimize the price paid.
  • Another party who may be in dispute with the owner, and who wants to minimize the price that they pay (or conversely, maximize their ownership share of the business).

Conflicting interests can be addressed through a well-executed revenue and customer health assessment using research. The approach used by Surveys & Forecasts, LLC evaluates revenue potential and customer health in a systematic and independent manner:

  • We employ a variety of tools to estimate prospects for growth, forecast volume, share, and satisfaction.
  • We profile relevant products, brands, and services to assess your competitive position.
  • We combine findings to provide an unbiased estimate of revenue potential. We can also partner with financial professionals who have the tools to assess performance both in the absolute and relative to a peer group.
  • We work with clients on a per project or ongoing retainer basis to provide guidance and market intelligence for the business.

Additional detailed information can be found here. To discuss whether a revenue optimization and customer health assessment is appropriate for an acquisition your office is considering, please get in touch at info@safllc.com or +1.203.255.0505.

Don’t Let A/B Testing Mislead You

A recent conversation on A/B testing with a client revealed an interesting perspective about messaging and positioning. The client, extolling their company’s rigorous A/B testing approach, failed to recognize a simple but scary fact: it is easy to compare multiple versions of a sub-optimal message. You end up with a “less-worse” version of an already weak message. This is not equivalent to building brand value over time — and building a moat around your brand’s essence.

What were they thinking?

The client had overlooked the obvious by ignoring the underlying reasons to buy. Instead of testing which alternative was more persuasive based on price, the more important questions they should have been asking were: What is the underlying motivation behind purchase? What segments, personas, or buyer types fall into our wheelhouse? Why should our brand be considered in this crowded category? This client, and so many others, seem to miss a simple tenet of marketing: why give away your marketing advantage so early in the game?

This client’s products have significant performance advantages over others in the category, yet they were A/B testing multiple executions built around being a lower-cost, value alternative. If they had taken just a little time to understand buyer behavior, they would have realized that price can be a relatively small factor in the buying decision when the brand looms large. 

In this case, A/B testing was fueling a race to the bottom. By choosing the “less worse” option, the client had already decided that they would primarily compete on price, pushing them deeper into a commodity mindset for the customer. 

When misused, A/B testing behaves like a cost-reduction test. There are many instructive lessons here; a well-known case is Maxwell House coffee. Over the course of many years, the company increased its use of lower-quality beans in the blend to cut its COGS. It conducted taste tests to make sure that consumers did not detect a difference when compared to the previous blend. But market share began to fall. Why? Because the company never tested the new blend against the original formula. What if there were thousands of Maxwell Houses across the globe instead of Starbucks? The lesson: test between meaningful options rather than confining your evaluation to a narrow set of sub-optimal choices.

Be smart. A/B testing works best when the strategy is well-defined and plays to your advantage. First figure out what that is. Focus on highly persuasive messages that support your brand, rather than identify the best way to discount your business into oblivion. Don’t give away the store when you don’t have to.

Surveys & Forecasts, LLC