Five Tips for Writing Great Concepts


If you’re a researcher, you’ve no doubt heard about “concepts”. Concepts are ideas that can come from many places: R&D, reactions to competitive activity, or “blue sky” what-if explorations. Management consultant Peter Drucker was known for saying that companies have just two functions: marketing and innovation. If so, a concept is where these two functions intersect.

The fundamental purpose of concept testing is to help companies allocate scarce new product development resources in the most effective manner possible. While not an exact science, concept testing is the best way to evaluate the merits of an individual idea. So here are some simple guidelines to keep in mind when creating and testing concepts.

#1: Stick To A Standard Format

Using a standardized format helps minimize bias caused by differences in idea presentation, letting you compare across time. Use the same format to represent new ideas, flankers, line extensions, or repositioning of existing products.

#2: Avoid Subtle Differences

As a rule, subtle differences in concepts (i.e., “tweaking”) do not matter, yet brand managers will obsess over them. Our firm has tested hundreds of concepts, and one mistake that is continually repeated is assuming that consumers either care about, or can react to, subtle differences in wording or in features and benefits. Small wording changes are meaningless and glossed over by the average consumer.

#3: Don’t Slam The Competition

Research consistently shows that consumers dislike brand comparisons, and especially those that attack a competitor directly. When creating a concept, it’s perfectly fine to focus on the benefits and positive story that your product or service offers, but avoid negative attacks on the competition.

#4: Keep It Pithy

In an age of ever-increasing distraction, consumers have neither the time nor the interest to read an exhaustive concept description. Particularly in an online format, and even when using a double opt-in consumer panel as your sample, biometric data consistently shows that most respondents simply scan rather than read.

#5: Use Images Judiciously

An image is an extremely powerful tool to support your concept or new product idea, and can serve many purposes: showing product function, conveying a persona or usage occasion, and setting tone and emotion. However, whether images belong in a company’s concept testing system, where ideas must be compared across time and studies, is open to debate. Images can overpower the factual details of a concept and make subsequent comparisons more difficult. In early-stage testing, images are best left out and introduced once the core idea has been identified (i.e., for positioning or advertising concept research).

To learn more, download our brief PDF on this topic here.

Pocket Guide Chapter: Product Testing


Product tests are designed to evaluate and diagnose product performance.

When Used

Product testing is typically performed (1) after concept screening or testing has identified a winning idea; (2) after a product development phase, in which R&D, sensory tests, or employee panels have identified a new product candidate; (3) at any point to assess consumer reactions to product variations (e.g., cost-reduced, improved performance, etc.); or (4) for competitive claims purposes.

Stimuli

The stimuli used in product testing vary widely, depending on the type of test and the number of product variations under consideration. Stimuli can range from conceptual product mock-ups (which are not handled) to fully functional, branded products that are evaluated in a real-world setting. To assess “pure performance”, products are exposed without extensive packaging graphics, branding, pricing, or other identifying information. If branding needs to be assessed (a concept-product fit test), then branded information is included. Usage, preparation, or safety instructions (if needed) are also provided.

Designs

There are two basic types of product tests: monadic tests, and comparison tests. In monadic tests, the respondent is presented with one product, much like a consumer would be in the real world. Conversely, comparison tests involve evaluating two (or more) products in either a head-to-head or sequential fashion, and are often used as screening studies.

For more information on product testing, download our free section from the Pocket Guide to Basic Marketing Research Tools here.

Pocket Guide Chapter: Concept Screening


Concept/idea screening tests are research designs that reduce (i.e., screen) a large number of conceptual ideas (e.g., 15, 20, or more) into the group worth pursuing vs. those that should be rejected.

When Used

Concept screening is typically undertaken after (1) a segmentation or market study that has identified new marketing opportunities; (2) exploratory qualitative research that reveals a consumer need; (3) group ideation or brainstorming sessions; or (4) R&D/product development has identified a significant number of possible new product ideas. However, concept screening can be conducted at any time there are enough ideas to test.

Stimuli

Because the objective in concept screening is to identify winning ideas from a large pool of candidates, the screening process and concept format must be efficient. Unlike most concept tests, screening designs expose many ideas to each respondent. The number of ideas exposed varies based on the number to be tested. Concepts can represent completely new ideas, line extensions, or new uses/repositionings of existing products. Mechanically, concepts for screening tests are more basic than those used in traditional concept research. Specifically:

  • Concepts are brief (e.g., 3-4 sentences), and factually state the problem, usage situation, or need, and then how the product meets the need or solves the problem.
  • Versus traditional concepts, the state-of-finish for concepts used in screening is basic/low. The amount of detail varies, depending on the types of ideas or the category.
  • Concepts may or may not be branded, or include a basic visual (e.g., B&W line drawing), price, quantity/size, or packaging information. Generally, these are not included.

Screening Designs

The two common designs are “pure” vs. “diagnostic” screening. Pure screening is strictly evaluative (i.e., no diagnostics). It is typically used when there are many ideas to test and they are in basic form (i.e., a few sentences and low state-of-finish), thus permitting one respondent to see them all. For each respondent, concept exposure is randomized, with each concept rated and ranked on:

  • Purchase interest
  • Expected frequency of use
  • Uniqueness, believability
  • Optional: need fulfillment, superiority, relevance
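The randomized exposure described above can be sketched in a few lines of Python. This is an illustrative sketch only: the concept names, respondent IDs, and seeding scheme are assumptions, not part of any specific fielding platform.

```python
import random

# Hypothetical pool of ideas for a pure screen (names are placeholders)
concepts = [f"Concept {i}" for i in range(1, 16)]

def exposure_order(respondent_id: int) -> list[str]:
    """Return a per-respondent randomized presentation order.

    In pure screening every respondent sees every concept, but in a
    different order; seeding by respondent ID makes each rotation
    reproducible and auditable after fieldwork.
    """
    rng = random.Random(respondent_id)
    order = concepts[:]          # copy so the master list is untouched
    rng.shuffle(order)
    return order
```

Seeding per respondent is a design choice: it lets you reconstruct exactly what any respondent saw without storing the order separately.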

In diagnostic screening, both evaluative and diagnostic measures are collected. Again, respondents see multiple concepts, but in randomized groups of 3-6, depending upon the total number of concepts (i.e., an incomplete block design). Concepts in diagnostic screening tests are in a higher state-of-finish than those used in pure screening. Each concept is rated (not ranked) on the same measures as above, plus:

  • Voluntary positives (e.g., likes, advantages)
  • Voluntary negatives (e.g., dislikes, disadvantages)
  • Attribute ratings (limited list, usually 5-8 items)
  • Optional measures, time permitting (need fulfillment, superiority, etc.)
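The block assignment behind diagnostic screening can be sketched as follows. Note the hedge in the docstring: a true incomplete block design balances blocks so every concept (ideally every pair) is seen equally often across the sample, which this simple random draw does not guarantee.

```python
import random

def assign_block(respondent_id: int, concepts: list[str],
                 block_size: int = 4) -> list[str]:
    """Draw a randomized block of concepts for one respondent.

    Sketch only: a production incomplete block design would rotate
    blocks so each concept appears equally often in total.
    """
    rng = random.Random(respondent_id)   # reproducible per respondent
    return rng.sample(concepts, block_size)

ideas = [f"Idea {i}" for i in range(1, 13)]   # e.g., 12 ideas, blocks of 4
block = assign_block(7, ideas, block_size=4)
```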

For more information on concept screening, download our free section from the Pocket Guide to Basic Marketing Research Tools here.

Pocket Guide Chapter: Focus Groups


Focus groups are perhaps the best-known marketing research technique. They leverage the dynamics of group interaction to generate qualitative feedback on marketing-related issues and to develop hypotheses for future testing; their findings are not projectable to the larger population being studied.

Focus groups are often misunderstood and frequently misused by news organizations and political operatives. A TV host that asks people to raise their hands for “yes” or “no” is not a focus group; that is theater.

Focus groups are used at many different stages of the marketing process and can be conducted among virtually any audience. Typical uses include:

  • Exploring consumer attitudes, motivations, and buying behaviors
  • Identifying insights and building consumer language
  • Gathering feedback on ideas, advertising, formulations, or packaging
  • Generating ideas internally for strategic or organizational purposes

Focus groups can be full groups or mini-groups. Full groups typically consist of 10 respondents plus a moderator, and last two hours. Full groups are well-suited to discussions that require more extensive exploration of issues or that employ group exercises, or when there are numerous stimuli. In full groups, the relatively large number of respondents requires a moderator who is skilled at managing different personalities and points of view, and who can play respondents off of one another in a collegial way.

Mini-groups are a scaled-back version of full groups, typically consisting of 4-6 respondents, plus a moderator. They are shorter, typically 1½ hours or less. Versus full groups, mini-groups are well-suited to topics that require more individualized questioning (e.g., understanding motivations), or when recruiting barriers exist (e.g., medical specialists, industrial buyers).

Pros: Focus groups are a fast, direct feedback tool in a highly adaptable format. They are excellent for hypothesis development, and getting marketing teams involved in the research process.

Cons: There is a strong tendency to “run” with focus group findings (especially when they are positive) and bypass subsequent quantitative verification. The researcher needs to manage expectations.

The above is an abbreviated excerpt. “Focus Groups” is but one of the chapters in the Pocket Guide to Basic Marketing Research Tools that covers a number of popular research methods. To get your copy of this chapter, please download here.

Pocket Guide Chapter: In-Depth Interviews


As the name implies, in-depth interviews (“in-depths”, or “one-on-ones”) use a single moderator-single respondent format, and are designed to generate highly detailed feedback at the individual respondent level. In-depth interview techniques vary, but they are grounded in social and clinical psychology.

When Used

In-depths can be used for similar reasons as focus groups, at any point in the marketing process when (1) a topic needs to be explored in great depth or detail, or (2) focus groups are inappropriate or impractical.

Most often, they are used to develop a detailed understanding of consumer attitudes, motivations, and buying behaviors. Sensitive topics (e.g., finances, relationship issues, personal hygiene) might only be approached on a one-to-one basis. In-depths are valuable in understanding the purchase decision-making process, as well as purchase influence (e.g., husband-wife “dyads”, or family “triads”).

They are used with physicians, pharmacists, attorneys, or business competitors, or in situations where focus groups among these types of professionals would create a self-conscious or adversarial reaction. The in-depth format eliminates these distractions, letting respondents focus on the questions being posed.

Materials & Stimuli

Like focus groups, the primary stimulus for in-depths is the moderator’s guide. However, the discussion guide is often much more detailed and specific. The guide may contain specific question-answer exchanges, and follow a choreographed sequence of discussion areas.

As in focus groups, the guide reflects input from the moderator and client, as well as agency researchers and external consultants. And, while the same types of stimuli used in focus groups can be used with in-depths, the following also applies:

  • With consumers, there may be use of psychological, motivational, and projective techniques to help ‘peel back’ the layers of an issue, and to get past any initial reluctance to share deeper feelings.
  • In technical categories (e.g., medical or pharmaceutical), information may need to be presented in detail and studied by the respondent: for pharmaceuticals, for example, the mode of action, indications/contraindications, uses, and dosing or administration information.
  • Depending on the category, moderators may be specialized (or trained in an area of interest), as in-depth discussions can be highly technical.

For more information on in-depth interviews, download our free section from the Pocket Guide to Basic Marketing Research Tools here.

Bob Walker & Colleagues Present to 120+ Research Professionals at The ARF’s Leadership Lab in NY


Since 1936, the Advertising Research Foundation has been the standard-bearer for unbiased quality in research on advertising, media, and marketing. Over the past 10 years, the ARF’s Foundations of Quality (FOQ) initiative has published 10 peer-reviewed papers dedicated to best practices in research and data quality. I should know: I was a co-author of two of those papers.

Now those FOQ insights are being brought to life through The ARF’s Leadership Lab, a series of workshops and lectures on key research topics. Each is designed to inform and educate those in the marketing and advertising research industry about key aspects of the research process.

Nearly 120 students attended today’s event, including a few well-known industry experts. Turns out that even they must keep their skills sharp!

So, on Wednesday, February 14th (yes, Valentine’s Day) I was pleased (for a 2nd time) to be one of five research experts presenting a half-day crash course on how to design, execute, and analyze consumer survey research so that professionals new (and not so new) to the research field can understand what “good design” looks like. This includes sampling and weighting, scales and applicability for mobile, and appropriate statistical tests. We also covered the use of emojis in survey research as possible scale replacements. The overarching goal was to help our fellow researchers understand the nuances of data quality, and to help them positively impact their organizations through superior insights and marketing success.

I am so very pleased to have been part of today’s program, which included yours truly (Bob Walker, CEO & Founder, Surveys & Forecasts LLC); John Bremer, Head of Research Science, The NPD Group; Randall Thomas, SVP, Research Methods, GfK; and Nancy Brigham, SVP, Global Head of Sampling and Research-on-Research, IPSOS Interactive Services. The event was beautifully coordinated by Chris Bacon, EVP, Global Research Quality & Innovation, from The ARF, and the wonderful ARF staff who work tirelessly behind the scenes. Kudos to all!

Not only are these individuals brilliant, published researchers, but they have become friends and are people who all respect one another and share their knowledge freely. The morning wasn’t without some amusing moments: I had too many slides, John couldn’t resist showing his dogs, Randy broke into a few rock’n roll riffs, and Chris utilized emojis and handed out Whitman Samplers and M&Ms to mollify the crowd. Nancy simply seemed bemused by it all…

We all look forward to working with the ARF in the future on research education, including the possibility of more hands-on case studies, webinars, and workshops. The industry is thirsty for real, practical knowledge, and the ARF is the best place for that to occur. It remains an anchor and a center of gravity for all things related to research.

Congratulations to all on a great day today – a job well done, and much more to come in 2018 and beyond!!

Is Your Research Platform Actively Stealing Your Clients?


Last year I attended a conference held by a research software company. It was quite an event, featuring celebrities, writers, and well-known thought leaders. The massive conference hotel was somehow able to pack 3,000 attendees into a single room for the general sessions. During breaks, the hallways were teeming with bodies, all swimming in different directions, some swarming to the coffee stations, while others were headed to a quiet corner to make a call, catch up on email, or see an old friend. It seemed that everyone was abuzz with excitement on that 1st day.

The morning session on the 2nd day was more muted for marketing research firms (like mine) who have relied on this software platform for many years. As the CEO bounced around the stage like a child, extolling the virtues of the platform’s latest interface updates, he managed to say that there really was no need to work with an outside research firm or expert — just bring the software in-house. Hand the tasks over to a staffer: there’s really nothing to it. As someone with extensive research and methods expertise, I was disheartened. I immediately asked myself: why am I here?

I had flown a thousand miles at my own expense, and fully intended to evangelize the product message upon returning home. After all, I had used this platform for nearly a decade, and had submitted many feature suggestions and bug reports that made it a better product. The goodwill that I had extended to this company, in the spirit of partnership, was obviously misguided. I never realized that the company’s ultimate goal was not to support my business, but to take my business away.

More recently, I began to notice that when I added client seats to my license (so that clients could make survey changes directly, or have access to online reporting), those clients would begin to receive marketing materials, including product-trial offers and invitations to attend conferences. This is totally unethical, violates privacy laws, and is effectively spamming. I never gave this company permission to contact, or market to, my clients directly. Many of my clients have been cultivated over years, and some are at the highest levels of their companies. But apparently, this software company believes that it owns all of the data in its system – even the names and contact info of people who never opted in. I have heard the same complaint from other users of this software platform.

Yet clearly, this is an effective strategy: the company does not need to scour for prospects. The email addresses are simply there for the taking; it’s just too tempting not to use them. And recently, after aggressive direct marketing, one of my clients decided to bring the software in-house. After all, how hard could it be? And this software company’s ever-growing consulting arm could certainly use some extra billable hours. Well done!

You might want to check with your research software vendors and ask them: are you marketing directly to my clients by accessing my account? Are you harvesting email addresses from my contact list? If so, fire them immediately.

In the world of software, you have many great alternatives. Most of them are ethical, too.

S&F Conducts Major Study for 1stdibs on Design Trends


NEW YORK, Jan. 17, 2018 /PRNewswire/ — 1stdibs, the leading global marketplace for collectors and dealers of beautiful things, today revealed the findings of its first Interior Designer Trends Survey, which focused on interior trends that will dominate in 2018, 2017 trends that will fade and common mistakes clients make when redesigning a space. Research firm Surveys & Forecasts, LLC, sampled the opinions of top designers from around the world who are part of the 1stdibs Trade Program. This program provides exclusive benefits, such as discounted trade pricing and complimentary concierge services, to interior designers and architects.

The commissioned survey looked at changes in home design that designers will be watching for this year, as well as top fads from 2017 that are losing steam. Among the most surprising findings was the turn away from minimalist styles and washed-out, mostly white interiors, which had been among the most popular looks.

“1stdibs is fortunate to have 40,000 of the most talented interior designers take part in our trade program,” said Sarah Liebel, GM of the 1stdibs Trade Program. “This group is responsible for putting together some of the most beautiful spaces throughout the world, and we are thrilled that we are able to share their predictions for interior design in 2018.”

Between December 19, 2017, and January 2, 2018, researchers with Surveys & Forecasts, LLC, a full-service strategic research consultancy based in South Norwalk, CT, conducted more than 630 online interviews with interior designers who are part of the 1stdibs Trade Program, which consists of 40,000 registered designers.

The 5th P


Have you seen the movie “The Fifth Element”? This now 20-year-old epic still intrigues me: a futuristic, campy sci-fi story about an unwitting cabdriver, Korben Dallas (Bruce Willis), and the carrot-topped heroine Leeloo (Milla Jovovich). Leeloo and Korben embark on a quest to retrieve four precious stones that will save the world from destruction. After they are retrieved, the stones are placed on a giant sundial designed to repel the death beam. The four elements are revealed: earth, wind, fire, and water. But where is the fifth element? It is, of course, Korben declaring his love for Leeloo. The elements unite and the world is saved!

I watched this movie (again) the other night, and it got me thinking about our world of marketing mix elements. Really, it’s true, my life is very dull.

We are taught in business school about the 4P’s: product, price, promotion, and place. But isn’t there a fifth element — a rather obvious one?

Of course, the 4P’s still exist, albeit in a very different and highly fragmented form from just a few years ago. Given the pace of technological change, the impact of digital, and our ability to target, the net effect of the marketing mix often seems to be the opposite of what was intended. Consumers face a fire hose of stimuli blasted through dozens of media and distribution channels. It’s no wonder that more and more research studies involve issues of SKU reduction. Do you really need to sign up for your 200th e-newsletter?

So, what does that tell us about the broader state of consumer marketing?

In marketing and research, we sometimes hear about an additional “P” that has somehow been overlooked, such as “packaging”, “personal selling”, or “process”. But aren’t these simply extensions of the existing 4P’s?

The true missing element — the fifth element — is people. I’m not sure why there weren’t 5 P’s from the start. After all, do we assume product + price + promotion + place = the instant success recipe?

Modern consumer markets are complex and competitive, with dozens of brands from which consumers can choose. What separates the winners from the losers? It is the marketer’s ability to reach through to the consumer — to people — at an emotional level, over time (let’s not forget – it takes significant investment), through acquired distinctiveness and brand salience. Product differentiation is less and less of a factor.

Reaching people, and appealing to their emotional center, their social connections, and their hopes and dreams is the key lever in marketing. While the trip to the grocery store can be described as a series of product transactions, those transactions occur in the much larger context of a living person that may just be trying to get through his or her day. That is the world in which all brands live and compete.

As marketers and researchers, let’s always remember that.

Happy holidays!

Moving From Customer Satisfaction to Customer Intervention


If you work in marketing management or marketing research you are no doubt familiar with customer satisfaction (CS) programs. Some CS programs, created in the hope of helping businesses become more customer-centric, fail to deliver against this noble objective.

As a result, “customer satisfaction” reporting systems have reached an inflection point. We must move away from rote “report card” thinking to much more nimble feedback systems that support real-time response and intervention. We need a rapid, feedback-driven interaction model based on a “customer response system”, or CRS. This approach is NOT equivalent to assessing a customer’s “experience” or “journey”: this is a problem-solution model based on continuous improvement principles. Refer to W. Edwards Deming for a deeper understanding.

Below are 10 areas to consider before building a customer response system (CRS). If you have a customer satisfaction program already in place, consider these ideas to improve its effectiveness. Or you can call us to help you build one.

#1 Management Buy-In

CRS programs that have the endorsement of senior management have the greatest chance of success. A company must be invested in a never-ending journey to improve its products and services to the benefit of the end-customer.

#2 Key Touch Points

A comprehensive assessment of all possible customer touch points needs to be made across the organization. As the number of discrete touch points increases, fewer questions should be asked per point.

#3 Link Measures To Processes

Broad measures are unusable for decision-making because they fail to provide the linkage between a problem and the process that created it. Your CRS program’s goal should be to provide granular feedback to help improve the overall system.

#4 Minimize Feedback Time Lag

Strive for immediate feedback whenever possible after every transaction or consumer touch point. In psychological experiments, memory decay occurs in a matter of minutes.

#5 Strike A Balance

Every company must find the optimal balance between respondent burden and actionability. Broad measures are insufficient to provide precise guidance. A person’s “willingness to recommend” is an abstraction, and is inappropriate in most categories.

#6 Link Touchpoint Measures

Think longitudinally and link data points together using a unique ID. Feedback must be obtained at multiple touch points, yet by minimizing the number of questions per interaction, we have a higher probability of obtaining high-quality data across the feedback chain.
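As a minimal sketch of the linking idea above, feedback from separate touchpoints can be stitched into one longitudinal record keyed on a unique customer ID. The touchpoint names, field names, and IDs here are hypothetical.

```python
# Illustrative per-touchpoint feedback, keyed by a unique customer ID
purchase = {"C001": {"ease_of_checkout": 9}, "C002": {"ease_of_checkout": 6}}
delivery = {"C001": {"on_time": 10}, "C003": {"on_time": 4}}

def link_touchpoints(*waves: dict) -> dict:
    """Merge per-touchpoint records into one longitudinal record per customer."""
    linked: dict = {}
    for wave in waves:
        for cust_id, answers in wave.items():
            # Create the customer record on first sight, then accumulate answers
            linked.setdefault(cust_id, {}).update(answers)
    return linked

records = link_touchpoints(purchase, delivery)
# records["C001"] now holds both the checkout and the delivery measures
```

Because each interaction contributes only a few questions, the merged record stays short while still spanning the full feedback chain.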

#7 Append Transactional Data

Do not ask customers questions that you already have on file. CRS programs should append descriptive data to the customer record. Consider reverse populating your data warehouse or CRM system with CRS variables to determine improvement longitudinally.

#8 Issue Immediate Alerts & Promote Recontact

Whether using a DIY survey tool or an enterprise-wide platform, use alerts and triggers to rectify problems. Alerts give you an opportunity to pick up the phone and call the customer – and in so doing, forge stronger bonds with customers.
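An alert trigger of the kind described above can be as simple as a threshold check run on each incoming response. The 10-point scale, cutoff value, and field names below are assumptions for illustration, not a prescribed standard.

```python
ALERT_THRESHOLD = 5  # assumed cutoff on a 10-point satisfaction scale

def check_alert(response: dict):
    """Return a follow-up message for a low-rated transaction, else None."""
    if response["satisfaction"] <= ALERT_THRESHOLD:
        return (f"ALERT: follow up with {response['customer_id']} "
                f"about {response['touchpoint']}")
    return None

msg = check_alert({"customer_id": "C002", "satisfaction": 3,
                   "touchpoint": "delivery"})
```

In practice the returned message would be routed to whoever owns the touchpoint, so the phone call can happen while the experience is still fresh.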

#9 Emphasize Light Users & New Customers

Break out light buyers and new customers when analyzing CRS data. Aggregate measures shift more with changes among new or light buyers than among loyal customers, simply because light buyers are the larger group.

#10 Reporting

Since you are interviewing your own customers, you are only seeing a single slice of your market. Aggregate-level reporting can lull a company into a false sense of security by masking outliers. Always report the number of alerts by discrete category for validity and impact.

Surveys & Forecasts, LLC