Idea Screening in New Product Development


To keep growing, both startups and established brands need a new product pipeline. One tool toward that goal is idea screening—the process of evaluating new product ideas at an early stage to identify the most promising candidates for further development. Idea screening should not be thought of as a single siloed process: qualitative approaches are essential inputs that feed it.

But qualitative researchers should have a good understanding of what happens after their work has ended. Quantitative idea screening helps to identify the most promising early-stage opportunities and accelerate them to the next phase of new product development. This could take the form of a minimum viable product, a new formulation, or a line extension. Simply put, taking a systematic approach to idea screening helps a company align scarce marketing resources with the candidates that have the most market potential. This article describes an approach that qualitative researchers can use to help their clients build a systematic way to identify new product opportunities. We will outline a practical framework using a set of standard measures. Larger organizations and funding mechanisms (e.g., venture capital firms) use this approach—but it is not limited to companies with deep pockets. Smaller companies can use it, too.

What is the Role of Idea Screening?

Idea screening is a structured way to simultaneously evaluate many ideas and identify those with the greatest potential for success. Idea screening can be used to evaluate vastly different ideas, or ideas within a certain category. This process can also be used to identify general areas where consumer needs are not currently being met. Unlike concept testing (which attempts to evaluate more fully developed advertising concepts with detailed language, images, pricing, or other marketing mix variables), idea screening focuses on the appeal of basic ideas using a limited set of measures. This allows researchers to quickly see which ideas warrant further investment, or not.

  • For startups, idea screening offers a way to prioritize options, ensuring that those with broader appeal move forward. For example, a personal care start-up might have developed a patented eco-friendly formula. This ingredient could be used in many categories (e.g., hand lotion, hair care, personal hygiene). Screening multiple category ideas, each based on this novel ingredient, is a good use case for idea screening. The results can then be used to guide both R&D and marketing strategy before additional investment in prototypes or packaging is needed.
  • Established brands, on the other hand, might use idea screening to extend product lines or expand into adjacent markets. In these cases, alignment with an existing brand’s equity (also called “brand fit”) will be important. For example, a global beverage brand considering a low-sugar product line can screen ideas to see which flavors, packaging, or positioning best align with current brand perceptions.

How Do I Design an Effective Idea Screening Process?

As a qualitative researcher, you already have good visibility into the new product development process. After all, you conducted focus groups or in-depth interviews that identified possible new product opportunities. But what should the client do next if there are multiple competing ideas?

First and foremost, introduce the idea of screening those competing ideas with a more systematic approach—especially one that relies on consumer or buyer input—to help the company prioritize new product development efforts.

Here are four foundational elements to get your idea screening journey off the ground:

  1. Obtain Management Buy-In

Whether you are dealing with a “solopreneur,” a start-up, or a mid-to-large organization, management must agree that this approach will be used to identify new product priority areas. By “buy-in” we mean that management agrees to the process and agrees to deploy the needed resources for product development once the results come in. Absent top management support, the best ideas can be overlooked or deprioritized, leading to missed opportunities.

  2. Define Your Success Criteria

Establishing clear objectives is critical to the success of an idea screening process. Start by defining what success will look like. For example, do you want to find ideas with the highest purchase interest, or is it more important to find distinct ideas that fill a gap in the market? A startup might prioritize broad consumer appeal, while an established brand might focus on ideas that align with its image. Setting “go/no-go” criteria based on benchmarks or previously tested ideas also ensures that a winner is defined properly and clears basic performance hurdles.

  3. Standardized Stimuli

In idea screening, the stimuli must be descriptive but also very simple. At the early idea stage, ideas can be thought of as “nuggets,” with just enough information to convey the idea without excess detail. A brief written description with (or without) a visual representation will suffice. The goal is to get a consumer to focus on the product’s main attributes. All ideas, however, must include an end benefit. That is, what will the buyer receive from the product or service they just read about? Remember, we are not screening two or three ideas—we are often screening ten, twenty, or more.

Here are some very simple hypothetical examples of idea “nuggets” that could be put into an idea screening research study:

  • The Energy You Need, Naturally. Powered by 100% plant-based ingredients for a clean energy boost, this energy bar keeps you going throughout the day without the jitters or a crash.
  • The Cleaner That Does It All. This multi-surface cleaner is safe for wood, glass, and countertops, simplifying your cleaning routine with one product for every room.
  • Stay Cool Anywhere, Anytime. This portable fan has a rechargeable battery and a sleek, compact design, providing refreshing, on-the-go cooling that lasts for days.

Concise, focused stimuli will keep the evaluation clean, consistent, and reliable. Depending on the total number of ideas in your study, the descriptions can include more detail, but clarity is key: include enough detail to explain the idea, but not so much that it confuses respondents, slows things down, or skews the results. Generally, price is not included in idea screening, because the goal is to screen the basic premise of the idea before setting a price.

  4. Consistent Sample Definition & Statistical Precision

Locking in the appropriate sample definition is essential for obtaining reliable results across ideas and, importantly, over time if your approach will be used on an ongoing basis. For broader-appeal products, a general population audience sample is appropriate (i.e., balanced on males/females, ages 21-64, and geographically representative). For niche ideas, such as a high-performance running shoe or users of nutritional supplements, a “booster sample” may be needed (i.e., ensuring that there are large enough samples of important subgroups, such as runners or people who take B vitamins). The analysis will benefit from relevant feedback drawn from a large enough sample. Although there are no hard rules, we recommend feedback from a sample of at least 250 respondents per idea to strike a balance between precision and cost.
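
To see what the 250-respondent guideline buys you in statistical precision, here is a minimal sketch of the standard margin-of-error calculation for a proportion (for example, a percent top-two-box score); the sample sizes shown are illustrative.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a confidence interval for a proportion.

    p=0.5 is the conservative (worst-case) assumption; z=1.96
    corresponds to 95% confidence, z=1.645 to 90% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 250, 500):
    print(f"n={n}: ±{margin_of_error(n):.1%} at 95% confidence")
# n=100: ±9.8% at 95% confidence
# n=250: ±6.2% at 95% confidence
# n=500: ±4.4% at 95% confidence
```

At n=250 per idea, two independently sampled ideas need to differ by roughly eight to nine percentage points before the gap is statistically meaningful at 95% confidence, which is usually enough resolution to separate the strongest candidates from the rest.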

What Are Evaluative vs. Diagnostic Measures?

Including both “evaluative” and “diagnostic” measures allows for a well-rounded analysis that captures consumer appeal and areas for improvement. What are the differences between these types of measures?

Evaluative Measures

Evaluative measures assess performance and represent a person’s intended behavior. Examples include purchase intent, intended frequency of use, and expected use occasions. These measures capture what the respondent intends to do based on the stimulus you have shown them. Although evaluative measures are critical to include because they assess performance (i.e., they address “how big is the opportunity”), they don’t necessarily explain why one idea performs better than another. For that we also recommend including “diagnostic” measures.

Diagnostic Measures

Diagnostic measures explain why consumers feel a certain way about an idea and help to identify strengths and weaknesses. They can be used to enhance or refine a promising idea that may be missing some key support points. For instance, an idea with high purchase intent but low distinctiveness (uniqueness) might indicate that a large opportunity exists, but that its core selling proposition may not be defensible against competitors. Open-ended verbatim comments and exploratory questions, while certainly valuable in later stages, should generally be avoided in idea screening because of the extra time they require.

What Measures Should I Use?

Evaluating an idea’s potential appeal requires us to focus on a set of key measures that highlight interest, differentiation, and benefits. Here is a breakdown of 10 key metrics we recommend in idea screening:

  1. Purchase Intent

Purchase intent assesses whether consumers would buy the product based solely on its description. This metric is a relatively strong predictor of potential demand and in-market performance, and serves as the primary evaluative measure. This question is typically asked as “How likely would you be to buy this product assuming that it was available where you shop and sold for a reasonable price?”

  2. Distinctiveness

Distinctiveness (also sometimes asked as “uniqueness” or “new and different”) measures whether an idea stands out from competitors. For startups, distinctiveness can be a make-or-break factor: a new beverage brand entering a crowded market, for instance, needs to differentiate itself with distinctive features, such as a high caffeine content (e.g., Red Bull) or innovative packaging. This question is typically asked as “How unique or different do you consider this product to be when compared to other products currently on the market?”

  3. Relevance

This measure evaluates how well the idea aligns with consumer needs and values. A relevance question would use an agreement scale with phrasing such as “This is a brand for someone like me.” For new brands, relevance is critical in ensuring that a product dovetails with the intended buyer’s mindset or philosophy. For example, a skincare company might focus on ideas that cater to eco-conscious consumers’ desire for all-natural ingredients, aligning brand identity with buyer values.

  4. Replacement vs. Addition

This metric identifies whether the idea replaces an existing product or serves as an additional purchase. This measure also reveals the consumer’s expected usage patterns for the product. This question is typically phrased as “Would you expect to use this product to replace an existing product, or would you use it in addition to products that you currently use?” A replacement product suggests that it meets current needs but offers improved features, performance, or convenience. A product used in addition to other brands may indicate a niche product, special use occasions, or perhaps that it fulfills both existing and untapped needs.

  5. Household Members Who Could Use the Product

Broad household appeal can increase purchase potential. This question is typically asked as “Who in your household, including yourself, is most likely to use this product?” For example, a company making family-friendly meal kits would want to know who in the household sees themselves consuming the product. All things being equal, a product with broad household use would receive a higher rank than products with narrow household use.

  6. Problem-Solving Ability

Problem-solving ability assesses whether the idea effectively addresses a specific pain point. A tech startup might test if their productivity app helps users balance work and personal tasks, providing a clear, targeted solution that could enhance consumer interest. This question can be phrased as “Does this product solve a particular problem for you that other products currently do not?”

  7. Frequency of Use

This measure captures the anticipated frequency of use. This is typically asked as “How often do you think you would use the product you just read about?” followed by a frequency scale. Products that consumers expect to use more often tend to generate stronger appeal and justify their value more effectively. Breadth of use (above) is combined with frequency of use to assess overall use opportunities. A product can succeed either through broad but infrequent use or through narrow but heavier use.

  8. Anticipated Use Occasions

Anticipated use assesses the versatility of the idea across different use occasions. This is typically asked as “On which of the following occasions would you be likely to use the product you just read about?” Depending on the category, we would want to explore how consumers will use the product—for example, in a daily routine, or just for special occasions. By identifying potential applications, companies gain insight into the product’s relevance and marketability across different consumer needs and contexts.

  9. Optional: Brand Fit (Assumes a Parent Brand)

When testing new product ideas under an existing brand name, brand fit is important to know. Line extensions should not dilute or damage an existing brand’s equity. For example, a luxury fashion brand might want to explore wellness ideas to complement its premium image. Assessing brand fit means understanding how well the extension aligns with the parent brand’s image and reputation. That said, some brands can compete in multiple categories if the company’s brand equity extends that far (e.g., Honda cars, generators, and robotics).

  10. Optional: Value for the Money (Can Be Asked With or Without Price Being Shown)

Value perceptions weigh a product’s perceived benefits against its price. This is especially relevant in competitive categories or among price-sensitive consumers. A startup offering premium organic snacks, for instance, might test whether consumers find the product’s higher price justified by its perceived health benefits. Asking a value-for-the-money question without showing a price is generally not recommended, so this question would typically be included only if pricing were stated. Alternatively, a question about expected price could be considered.

Scoring of Concepts

While all of the above measures are important, the three most important are purchase intent, uniqueness (distinctiveness), and anticipated frequency of use. Together these measures capture the essence of each idea’s performance. In some situations, when many ideas are being considered, you may want to consider some form of scoring to identify those that show more relative strength. To accomplish this, weights would be assigned to each measure of interest and an aggregate score calculated. Assuming we limit this approach to the three core measures mentioned, a possible approach would be to assign a weight of 0.6 to purchase intent, 0.2 to uniqueness or distinctiveness, and 0.2 to anticipated frequency of use. The result can be calibrated to a 0-100 point scale and a relative ranking then calculated.
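
As an illustration, here is a minimal sketch of this weighted-scoring approach, assuming each measure has already been summarized on a 0-100 scale (e.g., percent top-two-box); the idea names and scores below are hypothetical.

```python
# Weights from the approach described above: purchase intent dominates,
# with distinctiveness and frequency of use as secondary measures.
WEIGHTS = {"purchase_intent": 0.6, "distinctiveness": 0.2, "frequency": 0.2}

def aggregate_score(idea):
    """Weighted sum of 0-100 measures; weights sum to 1, so the
    result also falls on a 0-100 scale."""
    return sum(WEIGHTS[m] * idea[m] for m in WEIGHTS)

# Hypothetical screening results (each measure scaled 0-100).
ideas = {
    "Plant-based energy bar": {"purchase_intent": 62, "distinctiveness": 48, "frequency": 55},
    "Multi-surface cleaner":  {"purchase_intent": 71, "distinctiveness": 35, "frequency": 60},
    "Portable fan":           {"purchase_intent": 54, "distinctiveness": 66, "frequency": 30},
}

for name, measures in sorted(ideas.items(), key=lambda kv: -aggregate_score(kv[1])):
    print(f"{aggregate_score(measures):5.1f}  {name}")
# 61.6  Multi-surface cleaner
# 57.8  Plant-based energy bar
# 51.6  Portable fan
```

Because the weights sum to 1.0, the aggregate stays on the same 0-100 scale as the inputs, which keeps the relative ranking easy to read.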

Summary

Qualitative researchers can play a critical role in helping their clients identify new product ideas by designing and conducting idea screening. When strategically designed, idea screening provides a systematic, data-driven approach to identifying promising new product ideas. By combining qualitative insights with quantitative measures and benchmarks, companies can prioritize the ideas with the highest potential. From there, they can refine their innovation pipeline and optimize resource allocation. This approach can be used by startups and established brands alike to make smarter, more efficient product development decisions.

Essential Metrics in Idea Screening

Measure | Type | Description/Purpose
1. Purchase Intent | Evaluative | Evaluates the likelihood that consumers would buy the product based solely on its description
2. Uniqueness (Distinctiveness) | Evaluative | Measures how much the idea stands out from existing products in the market
3. Relevance | Diagnostic | Evaluates how well the idea aligns with consumer needs, preferences, and values
4. Replacement vs. Addition | Diagnostic | Identifies whether the product will replace an existing item or be an additional purchase
5. Household Members Who Would Use | Diagnostic | Assesses whether various members of a household could use the product, expanding its appeal
6. Problem-Solving Ability | Diagnostic | Evaluates whether the idea effectively addresses a specific pain point or unmet need
7. Frequency of Use | Evaluative | Measures the frequency with which consumers expect to use the product, reflecting its volumetric contribution
8. Anticipated Use Occasions | Diagnostic | Evaluates how many different uses consumers foresee for the product, indicating its versatility
9. Brand Fit (Optional) | Diagnostic | Assesses whether the idea aligns with the established brand image and reputation
10. Value for the Money (Optional) | Diagnostic | Determines if consumers perceive the product’s benefits as worth its cost

Navigating Research Bias: From Problem Definition to Analysis


If people suspect bias in a research study, it leads to a lack of confidence in the results. Unfortunately, research bias exists in many forms and in more ways than we would like to admit. In research, bias refers to any factor that systematically distorts data, results, or conclusions. As researchers, our job is not only to understand bias, but also to find ways to control or offset it.

After all, it is the negative impact of bias that we most want to avoid, such as inaccurate conclusions or misleading insights. Additionally, bias is not limited to any one specific research approach. Sources of bias exist in quantitative and qualitative research and can occur at any stage of the research process, from initial problem definition to making client recommendations. Each of us brings a unique perspective based on personal experiences and beliefs about what is true. The insidious thing about bias is that we won’t always anticipate it, nor will we recognize it even as it is happening!

This article explores some of the more well-known types of bias that commonly arise in marketing research studies and offers suggestions on how you can avoid these errors. We also provide a reference table that lists each bias, summarizes common errors, and suggests ways to limit its negative impact in your research practice. Let’s take a look at some of the more common forms of bias, from initial research design to analysis and interpretation.

Bias in Problem Definition

The foundation of any research project starts with problem definition and the approach you will take to address it. This is the first point at which bias can creep in. A research problem framed too narrowly, or based on preconceived assumptions, can lead to a study that simply validates pre-existing (and potentially wrong) beliefs rather than uncovering new insights.

Incorrect assumptions about the source of a marketing problem are one form of bias. For example, a company’s sales team believed that weak sales of a flagship brand were due to a high price point. This belief led to promotional campaign testing rather than a more broadly defined brand image study. Had the appropriate research been conducted, management would have learned that the brand actually represented a great value, but that consumers had little knowledge of the value story. The company spent time and money solving the wrong problem based on an incorrect assumption.

Over-reliance on industry trends can also skew problem definition. For example, a company’s management team believed that future trends in marketing supported more spending with social media influencers, and the company spent heavily with several of them. A post-campaign analysis showed that the extra spending never reached the more broadly defined audience and did not fully capture market diversity.

To mitigate this, it is crucial to:

  • Engage diverse stakeholders who can challenge your assumptions.
  • Conduct exploratory research to broaden understanding and build hypotheses.
  • Frame your problem definition around the consumer’s mindset, not preconceived beliefs.

Bias in Question Construction

Once the problem is defined, another potential source of bias is in question construction. Subtle changes in phrasing can lead to drastically different responses, and leading questions can direct respondents towards a particular answer. Asking “Wouldn’t you agree that our customer service is exceptional?” encourages a positive and biased response and downplays negative feedback. This is often seen in satisfaction surveys that impact employee compensation, where the recipient of the feedback is coaching the respondent to provide good ratings (i.e., “Please rate us highly!”).

Loaded questions, which embed assumptions, are similarly problematic. For example, asking, “How much do you spend on luxury skincare products each month?” presumes that the respondent purchases products in the luxury skincare category, which can alienate respondents or inflate the resulting data.

Double-barreled questions (i.e., two questions in one) are also a common error. For example, “How satisfied are you with our product quality and customer service?” Mixing two separate aspects in one question can confuse respondents and lead to unclear data, as they may have different opinions of the two aspects.

To avoid these errors:

  • Use neutral language in all questions.
  • Pilot test surveys to identify unintended biases.
  • Ensure each question focuses on a single issue, avoiding double-barreled phrasing.

Sample Selection and Coverage Bias

Bias in sample selection or coverage areas can severely distort the findings of marketing research. Sampling bias occurs when certain groups are over- or underrepresented in screening, sample definition, or geographic area. For instance, focusing a survey solely on urban customers may overlook rural or suburban buyers, leading to unrepresentative results for a national campaign.

Similarly, self-selection bias can occur when individuals who choose to participate are meaningfully different from those who do not. Customers with strong positive or negative opinions may be more likely to respond, while those with neutral opinions might remain silent, skewing the results toward the extremes. Incentives may increase participation, but also have the potential to attract only those who are motivated by financial rewards.

One approach to overcoming coverage and sampling bias is to set quotas for specific subgroups and then weight those subgroups back to their proper proportions, but this carries a risk of over- or under-weighting some groups at the expense of others.
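
As a concrete example, here is a minimal sketch of simple post-stratification weighting; the subgroup names, population targets, and sample proportions are all invented for illustration.

```python
# Target population proportions (e.g., from census data) vs. the
# proportions that actually completed the survey.
population = {"urban": 0.55, "suburban": 0.30, "rural": 0.15}
sample     = {"urban": 0.70, "suburban": 0.20, "rural": 0.10}

# Each respondent's weight is the ratio of target share to achieved share.
weights = {group: population[group] / sample[group] for group in population}
print(weights)  # urban ≈ 0.79, suburban = 1.5, rural = 1.5
```

This is where the over- and under-weighting risk shows up: an underrepresented respondent can end up counting for substantially more than one person, which inflates the variance of the estimates. That is why many practitioners cap or trim extreme weights.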

Addressing these issues requires:

  • Random sampling to ensure all relevant groups are included.
  • Alternatively, stratified sampling to ensure proportional representation of key subgroups.
  • Considering demographic and geographic coverage to prevent skewed results.

Avoiding Cultural and Sexual Bias

Cultural and sexual bias is an overlooked issue in survey design. Surveys that fail to acknowledge cultural differences, or that make assumptions about gender and sexuality, can alienate respondents and result in inaccurate or incomplete data. For example, questions that assume traditional family structures or fail to recognize non-binary gender identities can alienate a significant portion of respondents and introduce bias into the research.

Common errors in this area include using language that assumes heteronormative relationships, or ignores cultural differences in lifestyle, values, or product use. A question about alcohol consumption, for example, may not be relevant in cultures where alcohol is forbidden or stigmatized. Questions about childcare or child rearing should not automatically be asked of a female in the household, just as questions about home maintenance should not always be asked only of a male.

Over-generalizing cultural experiences or stereotyping is another area to be aware of, and perhaps the one that requires us to be the most self-aware in how we think about and phrase questions. For instance, assuming that all respondents from a particular ethnic background share the same cultural preferences, buying behaviors, or brand choices is a bias that can distort our research.

To avoid cultural and sexual bias, researchers should:

  • Use inclusive, neutral language that does not assume a specific gender or sexual orientation.
  • Be sensitive to cultural differences, avoiding questions that may be irrelevant or offensive to certain groups.
  • Ensure that surveys are tested on diverse groups before widespread deployment to identify potential biases in question phrasing.

Context and Order Bias

Context bias occurs when responses to a question are influenced by earlier questions or stimuli during the interview. If a respondent is first asked how satisfied they are with customer service, their response to a later question about overall brand satisfaction may be disproportionately influenced by that prior focus on customer service. Similarly, asking an overall satisfaction question before asking about attributes or other detailed items can create a biasing effect; proper question sequencing can avoid these traps.

Order bias is a specific form of context bias in which the order of questions or response options affects how respondents answer. This can often be mitigated by rotating question blocks or specific attributes within a ratings section (a small sketch of per-respondent randomization follows the checklist below). A primacy effect occurs when respondents favor the first option presented; a recency effect occurs when respondents are swayed by the last option shown. For example, respondents who are shown too many options may disproportionately choose the first or last one, depending on how the items are presented. In the same way, concepts that are shown first are typically rated higher than those shown in other positions.

To minimize the impact of context and order bias:

  • Randomize the order of questions and response options within a ratings section, unless the options are intended to be chronological.
  • Group related questions to limit the influence of unrelated preceding questions.
  • Use balanced framing to avoid highlighting specific attributes over others.
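
As a simple illustration of the first item above, this minimal sketch randomizes attribute order independently (but reproducibly) for each respondent; the attribute names are hypothetical.

```python
import random

ATTRIBUTES = ["Taste", "Price", "Packaging", "Availability", "Brand trust"]

def rotated_order(respondent_id, attributes=ATTRIBUTES):
    """Return a per-respondent random ordering of rating attributes.

    Seeding with the respondent ID keeps each respondent's order
    stable across sessions while varying it across respondents.
    """
    rng = random.Random(respondent_id)
    order = list(attributes)
    rng.shuffle(order)
    return order

print(rotated_order(1001))
print(rotated_order(1002))  # a different, but repeatable, order
```

Averaged over the sample, each attribute lands in each position about equally often, which washes out primacy and recency effects at the aggregate level.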

Summary

Bias is everywhere and cannot be fully eliminated. But when we are aware of the forms it can take, we can implement ways to mitigate its potential negative impact. Obviously, the most profound negative impact of bias is that we simply get it wrong—in other words, we either come to an incorrect conclusion or, worse, one that is damaging to our client’s business.

As researchers, it is our responsibility to identify, and be on the lookout for, the negative effects of bias. Don’t be afraid to challenge pre-existing approaches or entrenched ways of thinking. From ensuring good research design and representative sample selection to constructing neutral, balanced questions, researchers have many tools to reduce bias at every stage of the research process!

Table of Bias Types, Common Errors, and Ways to Avoid Them

Below is a summary table of the biases discussed in this article, their definitions, common errors, and ways to avoid them.

Type of Bias | Definition | Common Errors | Ways to Avoid
Bias in Problem Definition | Occurs when research problems are framed too narrowly or based on preconceived assumptions | Relying too much on past trends or assumptions without exploring broader possibilities | Conduct exploratory research, challenge assumptions, and involve diverse stakeholders in problem framing
Bias in Question Construction | Bias introduced by leading, loaded, or double-barreled questions | Using questions that assume an answer, contain multiple parts, or suggest a response | Use neutral phrasing, test questions for bias, and focus on a single topic per question
Context Bias | When responses to a question are influenced by the context set by prior questions or stimuli | Previous questions or attributes overly influence subsequent responses | Group related questions to avoid unnecessary influence
Order Bias | When the order in which questions or options are presented influences how respondents answer | Respondents disproportionately choose options based on their position in the list | Randomize the order of options and questions within ratings sections to reduce primacy and recency effects
Sampling Bias | Occurs when certain groups are over- or underrepresented in the sample | Over-sampling or under-sampling certain populations, leading to unrepresentative data | Use random or stratified sampling to ensure the sample reflects the population
Self-Selection Bias | When individuals who choose to participate differ significantly from those who do not | Highly-opinionated individuals respond more frequently than neutral or indifferent ones | Randomly invite participants and ensure balanced representation of the population
Coverage Bias | Bias introduced when the sample doesn’t cover the target population adequately (geographically or demographically) | Over-sampling one group or geographic area while under-representing others | Ensure the sample reflects geographic and demographic diversity
Non-Response Bias | Occurs when a significant portion of the selected sample does not participate, leading to skewed results | Only highly-motivated or opinionated participants respond | Use incentives, send follow-up reminders, and apply weighting cautiously
Confirmation Bias | Focusing on data that confirms existing beliefs while ignoring contradictory data | Only analyzing data that supports the researcher’s hypothesis | Develop an analysis plan before data collection and involve multiple reviewers
Analytical Method Bias | Bias introduced when selecting analytical methods that emphasize desired outcomes | Choosing statistical methods that only show significant or favorable results | Use a variety of analytical methods and check for consistency across different approaches
Cultural and Sexual Bias | Bias introduced by using language or assumptions that exclude certain gender identities or cultural practices | Assuming heteronormative relationships or ignoring cultural diversity in lifestyle and preferences | Use inclusive language, be sensitive to cultural differences, and test surveys on diverse groups before broad deployment


Design Considerations in Quantitative Research


Welcome to QRCA VIEWS’ inaugural Quant Corner column focusing on the many different aspects of quantitative marketing research. This column is intended to provide accessible, easily-digestible explanations of quantitative methods—but geared to you, the qualitative researcher. The idea for this column originally sprang from a seminar I gave on Quantitative Methods for Qualitative Researchers at the January 2024 QRCA Annual Conference in Denver. That seminar (attended by an enthusiastic group of qualitative researchers) generated a lot of interest—and questions—about how to conceptualize and execute quantitative research properly.

Equally important, many QRCA members are also full-service research consultants for their clients. You have probably been working with many of your clients for years, and they rely on your expertise to address a wide range of marketing issues by identifying new insights. As a result, you may find yourself responsible not only for qualitative projects, but also for hybrid (i.e., “mixed methods”) research paths that involve both a qualitative and quantitative phase. Or, perhaps in the future you would like to do more of these kinds of projects to expand your skill set.

Well, fear not! You already have the analytical skills needed to design, gather, and synthesize qualitative data from both individuals and small groups. Conducting larger-scale quantitative projects simply extends your skills to larger target populations and user segments. That’s why you will benefit from understanding the principles behind determining the right design and appropriate sample sizes for a variety of quantitative research studies. With an ever-growing array of DIY, neuroscience, and AI-supported platforms, your ability to offer well-reasoned research approaches to your clients will be appreciated and valued.

In this issue, we’ll tackle some basic challenges in determining the right sample size for your quantitative projects, understand the role of statistical confidence, and touch briefly on experimental design. We will also explore the trade-offs between precision and costs and provide guidelines for determining when sample sizes are “close enough.” We’ll cover some typical use cases such as general opinion studies, attitude and usage studies, segmentation research, idea screening, and concept testing.

Qualitative or Quantitative: Do We Care?

For simplicity, our research design and sample size guidance apply primarily to what is known as “quantitative primary research.” Primary research typically means a proprietary survey commissioned by a specific client and designed to address business or marketing issues for that client. This is in contrast to “secondary” or “syndicated” research, which are audits or surveys that are executed once and then shared by multiple clients (e.g., an “omnibus study”).

Primary research can be further distilled into “exploratory” designs (i.e., smaller samples, with more flexible data collection) and “confirmatory” or “evaluative” designs (i.e., larger samples, with more structured data collection). The term “qualitative” is typically associated with exploratory research, while “quantitative” is typically associated with confirmatory research—but there are exceptions. For example, large-scale quantitative studies are often conducted to explore new market opportunities, such as to discover unmet needs or identify segments. Conversely, in product testing, there may be a small number of highly-trained sensory experts who explore and evaluate new products using comprehensive and precise numeric responses. The line between qualitative and quantitative can sometimes be blurry.

In addition to surveys on business or marketing topics, there are other types of survey research that share the same characteristics, in that they require certain levels of precision that impact sample size, statistical confidence, and cost. These include surveys of public opinion, political polling, and consumer or business sentiment (e.g., monthly surveys of consumer confidence). While these examples may fall outside a pure primary research definition, they still require the same thought process and approach needed in most proprietary studies.

Importantly, as researchers we are not in the cookie-cutter business: every client and every business or marketing problem is different. It will be up to you to recommend the path forward that makes the most sense for your client.

Setting Research Objectives to Guide Study Design

Problem definition is the most important piece in mapping out your quantitative marketing research plan. All the parameters of your quantitative design will flow from this, including the overall design, sample specifications, required sample sizes, and the precision needed for business decision-making. Pay close attention to the problem you are being asked to solve. You will typically see the historical context for the project in a “Background” or “Overview” section of the research request. Most research studies are initiated by a series of client events that lead to the opportunity at hand. The Background section should provide good context and guidance about the specific issues being faced by the client.

What Kind of Study Will You Need?

Keep an eye out for action standards. An action standard is a “go”/”no go” decision rule linked to your client’s objectives and is based on questions in the survey you ultimately execute. So, if there is an explicit “action standard” associated with the research, you’ll need to shift your focus toward a research design that assesses reactions to ideas, concepts, products, or positionings. For example, a new product idea might have to exceed a known benchmark on purchase interest for the R&D team to proceed to the next phase of product development. If your research shows that the product exceeds the benchmark, the next phase is a “go.”

Conversely, not all research has or needs an action standard. If the research being requested is largely descriptive or diagnostic in nature, an action standard is not warranted. Perhaps your client needs to identify possible opportunities in a new geographic market or discover new and untapped segments. In this case the research, while quantitative, is primarily exploratory, with the intent to understand and learn about a category, products, or users—yet a large sample may be required.

Large exploratory studies might involve understanding consumer needs, desired end-benefits, or buyer behavior (i.e., brands x users x usage situations). Some research may focus on identifying opportunities where new products might be developed (i.e., “white space” opportunities). Other studies may focus solely on whether a brand’s equity can be extended into other (closely or distantly related) categories that share the same imagery or values.

Use Cases: Strategic Research

Large strategic studies—such as market entry strategies, market structure, or brand assessment and repositioning research—aim to inform significant business decisions. These studies require high precision and reliability, typically necessitating larger sample sizes. For large, nationally-representative market studies that will be used to identify opportunities as well as key subgroups for analysis (such as identifying unique segments), you would be well-served to recommend large sample sizes—samples of 1,200-1,500 respondents are typical and may go as high as several thousand depending on the need for geographic or demographic coverage. This extends not only to demographic subgroups, but also to hypothesized segments or other groups of interest to the client. Even with large samples, analyzing many subgroups can quickly whittle a large sample down to small subgroup bases. For example, if your client assumes that there are five segments in a 1,200-respondent survey, there might only be 240 respondents per segment (assuming that they are evenly distributed).

Examples of large-scale strategic studies might include:

  • A global consumer company is considering entry into a new geographic region. Little is known about this market, and there is a significant need to understand market potential, consumer preferences, competitive landscape, and optimal entry strategies. The client hypothesizes that at least two segments might be receptive targets for its brands.
  • A well-established U.S. packaged goods company is considering repositioning one of its mature products to target a younger demographic. The company needs to understand current brand perceptions, identify new positioning opportunities, and evaluate the potential impact on sales and market share among a younger age target.
  • An awareness and usage (A&U) study is needed to explore consumer behaviors, preferences, and attitudes for a category where the company does not currently compete. The results of the A&U will be used to understand how consumers interact with competitive products and services, their satisfaction levels, and potential areas for improvement before an acquisition is made.

Use Cases: Evaluative Research to Screen Ideas or Test Concepts

Idea screening and concept testing are essential stages in product development, where potential ideas or concepts are evaluated for feasibility and appeal, so that a company’s new product pipeline can stay full. Much like market studies, these studies often require larger samples to minimize respondent burden by having respondents see only a subset of the full complement of concepts under consideration. In these scenarios, “partial factorial” designs are effective uses of sample (e.g., showing each respondent 5 out of a possible 25 ideas being evaluated). This allows a client to gather solid general demographic information from the larger sample size, while getting adequate (or “good enough”) evaluations and diagnostic feedback on each concept. A sketch of one such assignment scheme follows the list below. Use cases for idea screening and concept evaluation might include:

  • Early-Stage Screening: Initial screenings can use small-to-large sample sizes to identify promising idea “kernels” efficiently. These studies prioritize speed and cost-efficiency over depth of evaluation. They are also often followed by a qualitative phase to gather more insight.
  • Validation Stage: As ideas progress, larger and more representative samples are needed to validate findings and ensure that the concepts resonate with the target market. Samples of at least 150–250 evaluations per concept are commonly used in these designs.
  • Iterative Testing: Multiple rounds of testing may be required, with sample sizes adjusted based on the stage of development and the precision needed. In these cases, action standards are routinely used.
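
Here is a minimal sketch of how such a subset ("partial factorial") assignment might be built, preferring the least-exposed ideas so that evaluations stay balanced; the respondent count, idea labels, and subset size are illustrative.

```python
import random
from collections import Counter

def assign_ideas(n_respondents, ideas, per_respondent=5, seed=42):
    """Assign each respondent a balanced random subset of ideas."""
    rng = random.Random(seed)
    exposure = Counter({idea: 0 for idea in ideas})
    assignments = []
    for _ in range(n_respondents):
        # Shuffle first so ties break randomly, then prefer the
        # least-exposed ideas to keep evaluation counts balanced.
        pool = list(ideas)
        rng.shuffle(pool)
        pool.sort(key=lambda idea: exposure[idea])
        chosen = pool[:per_respondent]
        rng.shuffle(chosen)  # randomize presentation order, too
        exposure.update(chosen)
        assignments.append(chosen)
    return assignments, exposure

ideas = [f"Idea {i:02d}" for i in range(1, 26)]        # 25 ideas
plans, exposure = assign_ideas(1500, ideas)            # 5 shown per respondent
print(min(exposure.values()), max(exposure.values()))  # 300 300
```

With 1,500 respondents each seeing 5 of 25 ideas, every idea collects roughly 300 evaluations (1,500 x 5 / 25), comfortably within the per-idea ranges shown in Figure 1.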

Figure 1 is a handy reference table that provides guidance on common research studies and appropriate sample sizes.

Figure 1. Sample Size Comparisons

RESEARCH DESIGNS | COMMON SAMPLE SIZES | COMMON USE CASES | COMMON ANALYSES
Large Strategic Studies | 1,500 – 3,000 (or more, depending upon geography and scope) | Market entry strategies, awareness and usage, market structure, segmentation, brand repositioning | Descriptive statistics, cross-tabulations, segmentation analysis, perceptual mapping, driver analysis
Market Segmentation Studies | 1,500 – 2,500 (or more, depending upon geography and scope) | Identifying market segments, targeting strategies, new product opportunities, new segments | Segmentation analysis, perceptual mapping, driver analysis
Attitude and Usage Studies | 400 – 2,500 (depending upon the number of categories being evaluated) | Understanding consumer behavior, attitudes toward products | Descriptive statistics, cross-tabulations, regression analysis
Idea Screening | 150 – 500 evaluations per idea | Evaluating multiple concepts, initial product ideas, identifying opportunities for further development | Frequency analysis, cross-tabulations, driver analysis
Customer Satisfaction Surveys | 500 – 2,000 (either individual studies or per wave) | Measuring customer satisfaction, service quality, opportunities for improvement | Net Promoter Score (NPS), factor analysis, correlation analysis, driver analysis
Concept/Product Testing | 150 – 500 evaluations per concept | Testing new product concepts, packaging designs | Statistical tests between subgroups, perceptual mapping, conjoint analysis
Communications/Messaging Testing | 300 – 600 evaluations per concept | Evaluating advertising messages, marketing communications | A/B testing, sentiment analysis, multivariate regression

A good rule of thumb: strive for a margin of error (MoE) of ±5% or less at 90% confidence.

The Precision vs. Cost Trade-Off

Higher confidence levels require larger sample sizes to ensure that the results are reliable and projectable to the larger population. This is because larger samples reduce the margin of error (MoE). However, as can be seen in Figure 2, as the sample size increases, the precision of the estimate improves—but the incremental statistical benefit of using larger and larger sample sizes declines rapidly. For example, to double the precision from a sample of 100 (with a ±9.8% MoE at 95% confidence), we would actually have to quadruple the sample to 400 (±4.9% MoE at 95% confidence). This is why most national public opinion polls target a relatively precise ±3% margin of error, which requires a sample of approximately 1,200 respondents. To double this precision, a polling company would require 4,800 respondents—at quadruple the cost! 
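
The arithmetic behind these figures is the standard proportion formula sketched earlier in this issue; the short sketch below reproduces the numbers cited above and backs out the sample needed for a target margin of error. (The bare formula gives roughly 1,100 respondents for ±3% at 95% confidence; polls typically field closer to 1,200 to allow for design effects and data cleaning.)

```python
import math

def moe(n, p=0.5, z=1.96):
    """Margin of error for a proportion (z=1.96 -> 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

def n_required(target, p=0.5, z=1.96):
    """Sample size needed to achieve a target margin of error."""
    return math.ceil((z / target) ** 2 * p * (1 - p))

for n in (100, 400, 1200, 4800):
    print(f"n={n:>5}: ±{moe(n):.1%}")
# n=  100: ±9.8%
# n=  400: ±4.9%
# n= 1200: ±2.8%
# n= 4800: ±1.4%
print(n_required(0.03))  # 1068; quadruple it to halve the MoE
```

This inverse-square-root relationship is why precision gains flatten out: every halving of the margin of error costs roughly four times the sample, and roughly four times the fieldwork budget.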

Figure 2. Sample Size & Cost

The trade-off between sample size, statistical confidence, and costs forces researchers to balance the need for higher confidence levels and precision with the practical constraints of time and budget. Selecting a sample size that provides an acceptable level of confidence while meeting the client’s specific research objectives should always be your goal.

Sample Source & Data Quality Considerations

Excluding customer databases, the vast majority of consumer and B2B research studies today are conducted using online “opt-in” panel sources. There are many reputable providers of online respondents, including those who acquire sample from multiple sources and are effectively sample “aggregators.” While a detailed discussion of online panel data quality falls outside the scope of this article, my advice is to work with reputable companies with a proven track record in the research community.

Most sample providers are diligent about identifying duplicate or suspicious responses and perform identity verification steps—but please conduct your own data validation steps, too. You can limit suboptimal behavior by reviewing open-ended responses, asking questions to catch inattentive respondents, identifying speeders, and asking opposing attributes in grids.
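
As an illustration of two of those validation steps, here is a minimal sketch that flags “speeders” (respondents finishing far below the median completion time) and grid “straight-liners” (identical ratings across every attribute); the data and thresholds are invented and should be calibrated to your own questionnaire.

```python
from statistics import median

def flag_speeders(durations_sec, fraction=0.4):
    """Flag respondents completing in under a fraction of the median time."""
    cutoff = median(durations_sec) * fraction
    return [i for i, d in enumerate(durations_sec) if d < cutoff]

def flag_straightliners(grid_rows):
    """Flag respondents who gave the same rating to every grid attribute."""
    return [i for i, row in enumerate(grid_rows) if len(set(row)) == 1]

durations = [612, 540, 95, 480, 730, 101]   # completion times in seconds
grids = [[4, 5, 3, 4], [2, 2, 2, 2], [5, 4, 4, 3],
         [3, 3, 3, 3], [4, 2, 5, 3], [1, 2, 1, 4]]

print(flag_speeders(durations))      # [2, 5]
print(flag_straightliners(grids))    # [1, 3]
```

Flagged cases are candidates for review rather than automatic deletion; any single check can misfire, so most researchers act only when several flags coincide.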

Just remember that biases exist in all forms of research and the samples used to conduct them: some are obvious (e.g., convenience sampling) while others are hidden (e.g., non-response bias, biased language, biased context). The more you are aware of the potential sources (and impact) of bias, the better researcher you will be.

Summary

Understanding the relationship between sample size and statistical precision is crucial for qualitative researchers transitioning to quantitative projects. Sample size directly influences the accuracy and reliability of survey results, with larger samples providing greater precision and smaller margins of error. This precision is essential when making significant business decisions, such as market entry strategies or brand repositioning. However, researchers must also consider the trade-offs between costs and precision.

Larger sample sizes, while offering higher confidence levels, increase costs and resource demands. By balancing the need for precise, reliable data with practical budget constraints, researchers can design robust studies that provide actionable insights. Tailoring sample sizes to specific research objectives—whether for large strategic studies, idea screening, product testing, or messaging evaluation—ensures that the findings are both meaningful and cost-effective. This approach enables researchers to make informed decisions, leveraging quantitative data to complement and enhance their qualitative insights.


Understanding the Qualtrics Layoffs

I was sorry to see that Qualtrics recently eliminated 780 positions (October 2023), coming on the heels of 270 layoffs back in January 2023. This represents about 20% of the Qualtrics workforce. Having once gone through that painful experience in my career, I remember the anxiety and stress it caused when the floor dropped out from underneath me. I hope that everyone affected is able to find new opportunities as quickly as possible.

News articles from tech publications have explained the layoffs as a contraction following COVID-driven hiring and staffing up to meet demand – but Qualtrics is not Amazon and doesn’t compete in the direct-to-consumer space, so the comparison doesn’t quite line up. So what forces are at play that may have resulted in these layoffs? I see a few inter-related things:

  • Marketing research isn’t a high-growth business, and survey research in particular is a mature one. Big firm research growth has slowed due to a proliferation of DIY platforms, more reliance on digital evaluation (e.g., MTA, social media listening), and less need for user input at earlier stages of product development. The growth in questionnaire-based survey research is less than 2% per year.

  • The Qualtrics “experience management” strategy included horizontal expansion into other areas of the enterprise, such as human resources, that run on feedback. The growth rate of this strategy has also slowed. Small- and mid-cap organizations represent a less attractive segment because they don’t do as much research or tracking, and their projects are typically much smaller.

  • A major revenue source at Qualtrics is satisfaction tracking, especially programs built around NPS. You’ll recall that in 2003, NPS was touted as “the one number you need to grow” your business. Gartner has predicted that more than 75% of organizations will abandon NPS as a measure of success by 2025 due to a lack of correlation with metrics like sales or retention.
  • NPS programs also have great margins and are insulated (that is, once up and running, they are hard to dislodge). But with NPS programs dissolving, Qualtrics must make up that revenue with ad hoc projects and compete directly in the traditional survey space with capable lower-cost providers.
  • Qualtrics plans to spend $500 million on AI over the next four years to leverage “the world’s largest database of human sentiment”. But with AI becoming ubiquitous, this new strategy could be a major drag on earnings. And exactly whose sentiment will be used to train the models and shared with the rest of the world – the proprietary data of their clients? And by leaning hard into AI, even fewer staff may be required.
  • Perhaps the most obvious reason for the recent layoffs at Qualtrics is that Silver Lake et al. (which completed its acquisition in June 2023) needs to see a return on its $12.5 billion investment. Cutting staff is the easiest lever to pull, especially if growth has slowed. That will make the balance sheet look healthy, even if growth prospects are muted.

You might recall that back in November 2018, SAP purchased Qualtrics for a hefty $8 billion. That union was touted as a way to accelerate a new “XM category”. The goal was to combine experiential and operational data to power the “experience economy”. But “experience management” didn’t seem to gain momentum, and with a clash of cultures, SAP quickly spit out the frog it had swallowed – something I had predicted in a post back in 2018. Once these hard times have passed, I expect Qualtrics to be refloated as an IPO by 2026 or so. That will please the private equity folks.

For more information about our custom research services and new product development programs, please get in touch at info@safllc.com.

The Digital Marketing Alienation Problem

Consumer alienation can happen quickly. Does anyone remember New Coke? More often, though, it happens slowly, almost imperceptibly, because alienation is obscured by the broader sweep of our shared experience as consumers.

While consumers seek novelty and variety, there is a saturation point. Evidence indicates that buyers are increasingly detached from brands, products, and even entire categories because of the way they are digitally over-marketed.

What do we mean by “consumer alienation”? In brief, it is the process by which consumers lose interest in brands and services due to (1) the failure of marketers to communicate with buyers and prospects in a way that is trustworthy and respectful; and (2) damage to the emotional connection between buyers and brands. Digital over-marketing only compounds these problems.

When companies excessively use digital marketing approaches, we (collectively as marketers) do nothing to cement the emotional bonds we hope to establish. Rather, we put people into predictable “sales funnels” (which are quite transparent to consumers, by the way) that treat consumers very robotically.

When hundreds of companies use the same approach, consumers are overwhelmed. Consumers quickly realize that they are cogs in a much bigger marketing machine (many brands x many campaigns x many time periods). Then, without a clear message and against a backdrop of digital noise, brands lose the stickiness needed to build ROI over the long term. As prospects fall further into the sales funnel, they filter out more, and fall further away from your brand. At Surveys & Forecasts, LLC we focus on important marketing developments like these with many of our clients.

Digital Marketing Creates Consumer Alienation

I suppose that we can’t blame marketers. Digital marketing reaches a target audience pretty efficiently and can promote your business when the message is clear. However, marketers are tempted to use all available resources (e.g., personal information) to stimulate consumer buying. This morphs into “depersonalization”. Consumers then feel alienated because personal information and preferences are used excessively without clear consent. When every company and brand uses their information to market to them, trust in all brands quickly erodes.

A brilliant colleague of mine has conducted many studies to prove that marketers can achieve significant increases in ROAS when media dollars are targeted at moderately active buyers within a category (i.e., the “moveable middle”). Yet one wonders about the linkage between heavily digitally targeted (or perhaps over-conditioned) buyers and the impact on their emotional connection to the brand long-term. Does over-targeting and over-marketing create alienation?

Sales Funnels and Trust

Sales funnels are powerful tools for marketers, but they have their drawbacks. They cause consumer alienation and can reduce trust between companies and customers by aggressively encouraging buyers to take incremental steps towards a purchase decision. Typically this starts with low-cost items (i.e., freemium trials) but quickly moves to full-priced subscriptions or premium services. This process is known as “nudging”, because it nudges you into buying more than you might otherwise want. This assumes that consumers will behave robotically rather than as intelligent beings who make reasonably rational buying decisions. The approach is fundamentally cynical, and has consequences for companies, brands, and society-at-large. Increasingly, consumers are asking: am I being manipulated yet again?

Is Technology a Solution?

Tech has the potential to improve digital marketing practices. AI and machine learning can be used to target consumers in a more modulated, ethical, and dare I say emotional way(!) with more deeply personalized, yet appropriate, marketing messages. Yet tech and AI are already leading to consumers being over-served content that they don’t want or need. Social media platforms like TikTok, Facebook, and Instagram offer little transparency around ad serving or data collection practices.

Marketers have access to an unprecedented amount of personal data about consumers. Should they use all of it? While I am a free thinker, perhaps some regulation is needed to avoid manipulation by placing limits on how much information marketers can collect from users — and what strategies they are allowed to use when targeting consumers. Consumers have the power to vote with their feet and their dollars by choosing brands that respect their privacy and do not digitally abuse them.

Avoid alienating your customers with excessive digital marketing efforts by making sure that you understand what consumer alienation really is and know how it affects relationships between your buyers and your brands. Digital marketing methods can be used to create positive experiences, but only when they’re ethical, responsible, and not excessive.

To talk more about customer alienation, please reach out. For more information about our services and customer feedback programs, please visit the Surveys & Forecasts, LLC website or get in touch at info@safllc.com.

Surveys & Forecasts, LLC