The New World of Reaction-Based Marketing

Those of us who have spent some time in research departments tend to think in linear terms. By that I mean that there is a “classic” sequence to follow to understand customer needs, new product opportunities, line extensions, new advertising, etc. For example, we might start with a strategic study to understand buyer needs and behavior, identify segments or personas, follow that with benefit screening or concept testing to assess interest, then move into advertising concepts, and then marry that with the product development track with R&D, address any deficiencies, move into a test market, and then a national introduction.

Not. Those days are long gone. There is no appetite for “research” or “insights” in the classic flow referred to above.

This is most obvious when you look at the revenues of major research firms, which have grown anemically over the last five years. While it is important to understand customer/buyer needs, research can’t add nearly as much value until it understands the digital landscape. Joel Rubinson and Bill Harvey have written about this eloquently. Those of us who consider ourselves insights experts or researchers must come to grips with the fact that most companies have no interest in spending much time conversing with customers, even when doing so has strategic value.

Most companies feel the need to respond or react to what is happening right now, in real time. In fact, for many companies, response time is the only thing that matters. We are in an age of data lakes, auctions, programmatic and ROI – a world of reaction-based marketing. In this world, a brand demands that for every nickel it spends, a nickel in sales should be generated. Companies are not interested in convincing you that their product is superior, or meets your needs, or fits your lifestyle unless they get paid back. Nor are they interested in the protective benefits of long-term brand building. This is a finance-centric rather than marketing-centric philosophy.

Reaction-based marketing has four primary characteristics that distinguish it from traditional marketing and brand building:

  • ROI is the primary KPI used to measure marketing success.
  • Decisions are engineering-driven, not consumer needs-driven.
  • Decisions are event-based, not strategically- or equity-driven.
  • The cost of failure is less than the cost of testing.

A great example is Amazon, which has single-handedly created these exact marketing conditions. It is a complete ecosystem for testing all elements of the marketing mix (excluding distribution, which it owns). Yet has Amazon not also created the perfect ecosystem for driving brands into commodity status? Consider alkaline batteries: Duracell currently sells 24 AAA batteries for $16, while Amazon sells 48 AAA batteries for $15. I wonder who wins that battle?

As researchers and insights experts, we add value as the missing link between the automated ecosystems competing for the consumer’s attention and how the consumer actually thinks and feels. That market is wide open.

Why are Customer Satisfaction Research Experiences so bad?

Automated surveys are everywhere, triggered by retail and online purchases, a flight you took, or your dentist. If you are like me, you probably get a wee bit cranky when you experience a badly conceptualized customer feedback interview. We seem to be swimming in an ocean of really, really bad feedback programs. Why is this happening, and why does it seem to be happening on such a massive scale?

The two primary causes of poor survey quality are CRM auto-generated surveys and poorly executed DIY research. There isn’t much you can do about either trend. What you can do is make sure that your own programs are world-class.

The most effective customer satisfaction programs embody thoughtfulness, intelligence, and even a little irreverence. Too often, programs miss the mark and companies can actually damage the relationships they are trying to nurture with poorly executed programs.

In my experience, customer satisfaction programs can implode for multiple reasons:

  • Questions assume that customers can accurately isolate all elements of their purchase experience. Marketing researchers and data scientists spend a great deal of time identifying all of the dimensions that need to be evaluated. But most customers are incapable of remembering more than a few things (even if they were experienced moments before). Expecting hyper-granular customer feedback is unrealistic.
  • The emotional state of the buyer at the time of purchase is ignored. If we know someone’s emotional state before or during a purchase experience, we can better understand how customers are interacting with us as marketers – and do something about it. If I have a surly register clerk, a rude gate agent, or am in a poor frame of mind, my ability to provide useful feedback is compromised. Conversely, if I am always greeted with a smile at Starbucks do I care if my latte doesn’t have enough whipped cream?
  • The longitudinal (holistic) relationship with your customer is overlooked. In addition to the current wave of feedback, have you bothered to look at the same customer over time? Have you acknowledged the customer relationship in your questions or survey flow? Are you nurturing or alienating? Questions that capture the longitudinal dimension can help business operations improve.
  • Questions are mistimed with product use (e.g., questions are asked before the product is used, or asked when not enough time has passed for the product to be appropriately assessed). If I am asked to complete a survey off of a cash register receipt, but the questions are about products I’ve yet to use, how am I expected to report on my level of satisfaction?
  • Questions are framed around the product or buying occasion, not the customer. It’s not about the product, it’s about your customer. Did the customer feel valued? Were they treated with dignity and respect? Staples recently sent me a feedback request labeled my “paper towel experience”. This is awful.
  • The survey assumes the purchase is in a category where repeat buying is routine. This includes recommendations for additional purchases that the retailer would like me to consider. My favorite solicitation came after I had purchased a car battery: Amazon proudly suggested other car batteries I might want to buy because, well, you know, you can never have too many car batteries.
  • The program focuses on a single score, or assumes that buyers will, unsolicited, recommend products to others. I have written about this in previous posts. Consultants continue to push this “single score” narrative. It is plain wrong. Yet companies are willing to pay for this sage guidance.

Companies should not feel compelled to collect feedback after every purchase or experience. Doing so is unnecessary and saturates the customer with far too many requests for feedback, damaging not only the company’s own data but also the data of everyone else who needs feedback. Companies are better served by collecting data on an Nth-name transaction basis and letting sampling theory do the rest.
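To make the Nth-name idea concrete, here is a minimal sketch of systematic transaction sampling in Python. The sampling interval, field names, and data are illustrative assumptions, not a prescription for any particular CRM or survey platform.

```python
# A minimal sketch of Nth-name (systematic) transaction sampling.
# Assumption: transactions arrive as an iterable of dicts; the interval (n)
# and the fields are illustrative, not a specific product's API.
import random

def nth_name_sample(transactions, n=25, random_start=True):
    """Yield roughly 1 out of every n transactions for survey invitations."""
    offset = random.randrange(n) if random_start else 0  # random start avoids periodic bias
    for i, txn in enumerate(transactions):
        if i >= offset and (i - offset) % n == 0:
            yield txn

# Example: invite ~4% of buyers (1 in 25) instead of surveying everyone.
transactions = [{"id": i, "customer": f"C{i:04d}"} for i in range(1, 501)]
invited = list(nth_name_sample(transactions, n=25))
print(len(invited), "of", len(transactions), "transactions sampled for feedback")
```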

The compulsion to collect feedback after each and every interaction harms data quality – and weakens the bonds of the buyer-seller relationship.

Customer satisfaction programs deliver important KPIs for assessing performance, and they should be conducted because they make a material difference.

But we must avoid the temptation to mindlessly automate customer satisfaction programs. The goal is to make the customer happy. The way things are going, automation is causing more harm than good.
If you’d like, let’s continue the discussion.


How Survey Research Can Drive ROAS

Marketing research – notably strategic survey research – has not always provided useful insights for informing a company’s digital strategy. This is especially true for media targeting and ad spending. Why is this?
 
Well, simply put, insights – even from large-scale survey research studies (e.g., segmentation or brand equity/perception research) – are typically developed in a silo, and researchers tend to forget about linking survey sample sources (i.e., opt-in panels) to available external variables that could be used to make smart media planning decisions. And, because there are relatively few well-known use cases that can serve as examples, survey research is overlooked as a novel way to build out a digital marketing or media plan.
The utility of survey research would be vastly improved if it could provide the granular linkage needed for more precise targeting. Fortunately, this is changing fast.
Many survey research sample sources are now onboarded with pre-existing segmentation codes from 3rd party providers. This opens up entirely new ways to leverage survey research results – and directly shape targeting and media buying decisions in digital.
 
Demographic/psychographic coding is nothing new, and the results of pioneers like Claritas’ PRIZM were mixed. They were marginally useful in categories that had clear geo-demographic skews, such as those linked to income or ZIP code (e.g., autos, high-end appliances, etc.).
 
Be aware that external 3rd-party segmentation codes alone add almost no marginal value. In the case of a well-done segmentation study, distinct segments will likely emerge and 3rd-party codes would be redundant with what can be gleaned from the study itself. But… hang in there with me for another minute.
What does add significant value is how we go about piping the insights from the survey research side to the structured coding on the other side where media decisions must be made.
External 3rd-party codes connect us to the digital world. Once the linkage is built, digital targeting and ad spend opportunities become clearer. For example, Neustar’s E1 segmentation (172 segments, gasp) gives survey research new ways to add value in the digital world. Survey respondents appended with segment codes can be profiled on key survey questions (and vice versa). For example, do we really want to target those highly promotion-sensitive people in segments 19, 72, and 113? This helps researchers identify optimal targets for digital and appropriate levels of spend. Our survey guidance helps both in targeting (e.g., LiveRamp) and in allocating digital inventory (e.g., Trade Desk).
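As a rough illustration of that linkage, the sketch below joins survey responses to appended segment codes and profiles each segment on a key survey question. All column names, segment numbers, and the 0-10 promotion-sensitivity scale are hypothetical, not any vendor’s actual schema.

```python
# A minimal sketch of profiling appended segment codes against survey answers.
# Assumptions: column names and the 0-10 promotion-sensitivity scale are illustrative.
import pandas as pd

survey = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5, 6],
    "promo_sensitivity": [9, 2, 8, 3, 7, 1],   # survey question: 0 = never buys on deal, 10 = always
})
onboarded = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5, 6],
    "segment_code": [19, 72, 19, 113, 72, 5],  # appended third-party segment codes
})

linked = survey.merge(onboarded, on="respondent_id")

# Profile each segment on the survey measure; segments scoring high may be
# prioritized (or deprioritized) in the digital media plan.
profile = linked.groupby("segment_code")["promo_sensitivity"].mean().sort_values(ascending=False)
print(profile)
```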
And media choices needn’t be entirely digital – they simply need to be addressable.
Media choices also include addressable TV, and they can account for context. Another coding scheme, from RMT, assigns motivational profiles to both addressable TV content and individual respondents. Survey research can then combine respondent-level data with these motivational profiles to identify TV shows that align with viewers’ belief systems and personal motivational outlook. Pretty cool.
 
Importantly, survey research is really the only tool that can effectively assess the issue of creative, which most experts say accounts for up to 80% of the impact of media spend.
 
For me, personally, survey research can clearly move up in prominence into a more useful advisory role. Survey research becomes a very useful tool in the marketer’s toolkit to drive brand growth and improve efficiency/performance/ROAS. One of the thought leaders in our industry, Joel Rubinson, has more to say here.
 
CPG/FMCG has had these tools available for a while now – notably retail scanner data, frequent shopper data, first party data, and customized purchase panels. They provide a steady data laboratory for CPG/FMCG experimentation and insights.
 
But I think the more interesting and exciting opportunity is in non-CPG/FMCG, especially for categories that do not have retail scanning, or 3rd-party verified POS/sales reporting.
Survey-based approaches can absolutely be used to shape digital messaging, targeting, and media planning/buying for non-CPG if done correctly.
So, whew, crazy right? Here are some thoughts/consideration factors as you begin to explore the possibility of utilizing survey research for digital targeting and media spending.
  • If you have only first-party data, onboard external segment codes if you can. This is the most immediate way to ease into the use of targeting methods, and you can begin to experiment and test. Don’t start with A/B testing – it is great for refinement later. Focus instead on targeting – that’s where the gold is!
  • Understand demographic and lifestyle identifiers (e.g., Neustar’s E1) that may already have been onboarded by your retail scanning data provider or survey research sample providers.

  • In non-CPG/FMCG, look for opt-in sample source providers with segments already on-boarded, such as Dynata or Numerator.

  • If you have a very small brand, it makes sense to find those few segments allied with your brand. Your small brand may have a large share in key segments: identify segments with higher shares and target them. This can generate vast improvements in ROAS.

  • But don’t over-segment! Some segmentation approaches are absurdly huge, such as the popular Neustar E1 scheme (172 segments!). If you can, condense and work with a more modest approach, perhaps 20 or 30, as a first step.

  • For non-CPG/FMCG, work with a reputable researcher and conduct well-designed research with very large sample sizes to get adequate segment representation (remember, the goal is to link backwards).

  • Utilize demonstrably proven survey questions, such as constant sum, replacement vs. addition, share by use occasion, and so on to assess volumetrics. Note that we approach non-CPG/FMCG in the same way as CPG but replace scanning with survey data.

  • Take your high-return segments and model them onto the larger universe of individuals, a.k.a. “look-alikes” (see the sketch after this list).

  • Utilize the improvement in ROAS to reinvest in other “look-alike” segments, or in other geographies.
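Here is the promised sketch of the look-alike step: a simple model trained on known high-return segment members and used to score a larger addressable universe. The features, sample sizes, and cut-off are illustrative assumptions, not a production recipe.

```python
# A minimal look-alike modeling sketch: train on known high-return segment members,
# then score a larger universe of individuals. Features and data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Known individuals: features (e.g., onboarded demographics/behaviors) and a flag
# for membership in a high-return segment identified from the survey work.
X_known = rng.normal(size=(500, 4))
y_known = (X_known[:, 0] + 0.5 * X_known[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X_known, y_known)

# Larger addressable universe with the same features; rank by look-alike score.
X_universe = rng.normal(size=(10_000, 4))
scores = model.predict_proba(X_universe)[:, 1]
top_lookalikes = np.argsort(scores)[::-1][:1000]   # e.g., target the top 10% first
print("Highest look-alike score:", scores[top_lookalikes[0]].round(3))
```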
 
These are new and exciting opportunities! I believe that they will lead to a resurgence of survey research as a new method for optimizing digital targeting and media planning.
Get in touch to discuss further.
The ROI of Customer Satisfaction

I have seen many CSAT programs change a company’s culture by quantifying problems and isolating their causes, thus boosting retention and profitability, and moving from reactive to proactive.
Conversely, some companies don’t think that customer satisfaction (or “CSAT”) programs can add value because “we know our customers”. This comment conveys a misunderstanding of what a well-designed CSAT program is, and the value that it can bring to an organization.
 
In the short term, maintaining the “status quo” is a cheaper alternative, but it avoids the broader discussion about total (opportunity) costs. How much revenue are you leaving on the table by assuming that you know what the customer wants?
Here’s a basic example. Let’s say you run a $50MM company. What would you be willing to spend to prevent 10% of your customers from leaving?
 
If those customers also represent 10% of your revenue, that’s $5MM at risk; at a 30% gross margin, you have saved $1.5MM in profit. A CSAT program that costs $50K a year has an ROI of 30x! Now do you get it?
 
Even if we are conservative, a 5% reduction in defection preserves $750K in profit, for an ROI of 15x – still impressive! By improving problem detection and alerting key people about problems in real time, a CSAT program cuts response time, helps mitigate customer defections, and avoids a significant amount of lost business.
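For readers who like to see the arithmetic spelled out, here is a small worked version of the retention ROI example above. The revenue, margin, and program cost simply mirror the figures in the text and are illustrative, not benchmarks.

```python
# A worked version of the retention ROI arithmetic above; figures are illustrative.
annual_revenue = 50_000_000     # $50MM company
gross_margin = 0.30             # 30% gross margin
program_cost = 50_000           # $50K/year CSAT program

def retention_roi(defection_prevented):
    """Profit preserved and ROI multiple for a given share of revenue retained."""
    revenue_saved = annual_revenue * defection_prevented
    profit_saved = revenue_saved * gross_margin
    return profit_saved, profit_saved / program_cost

for share in (0.10, 0.05):
    profit, roi = retention_roi(share)
    print(f"Prevent {share:.0%} defection: ${profit:,.0f} profit preserved, ROI = {roi:.0f}x")
# Prevent 10% defection: $1,500,000 profit preserved, ROI = 30x
# Prevent 5% defection: $750,000 profit preserved, ROI = 15x
```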
 
What are the fundamental problems with what I call a “status quo” approach? Here are a few:
  • Markets are changing. Your competitors are not standing still; they will continue to innovate, merge with others, or be acquired. Markets themselves morph from regional to national to global, and regulatory frameworks change.
  • Customers are changing. New customers replace old, and this year’s buyers are demographically, attitudinally, and behaviorally different from last year’s buyers, who will be different from next year’s. How are you planning for that?
  • Expectations are changing. Customers are constantly evaluating their choice options within categories and making both rational and emotional buying decisions.
Maintaining the “status quo” is NOT a strategy: it is a reactive footing that forces you to play defense. In a status quo culture, you are not actively problem-solving on behalf of customers, nor are you focused on meeting their future needs!
Consider a couple of scenarios in which a CSAT program could add value:
If you run a smaller company, most of the company’s employees (including management) interact with customers every day. The company also gets feedback, albeit subjective or anecdotal, every day. Corrections to sales or production processes can be made rapidly, and the customer is presumably happy. But even in smaller companies, there is limited institutional memory (i.e., no standard way to handle a problem or exception). One solution may reside with Fred in finance, another with Pat in production, or someone else entirely. There are no benchmarks against which to compare performance (other than sales). It is likely that the same problem will surface repeatedly because line staff did not communicate with each other, or it might appear in another form in another department (e.g., a parts shortage caused by an inventory error). Unless management is alerted, larger “aha” discoveries are missed. This can cost hundreds of thousands of dollars in lost revenue.
If you run a large company or a major division, the gulf between customer feedback and management grows wider. News about problems may not reach management because the problems are viewed as unremarkable. And a company doesn’t have to be huge for these dynamics to occur: by the time a company reaches just 100 employees, it can behave much like a multi-national enterprise. In a small company, it is everyone’s responsibility to fix a problem; in a large organization, it becomes someone else’s responsibility. The opportunity loss becomes even greater because there is no system in place to alert key staff or a specific department. As a result, millions of dollars in revenue can be lost.
A well-designed CSAT program that alerts the appropriate people or department can add significant value. At Surveys & Forecasts, LLC we offer basic CSAT programs (with key staff alerts, dashboards, and an annual review) for just $1,000 a month.
Get in touch to learn more! We’d love to work with you and help you improve satisfaction, retention, and save your organization some significant money.
This “T” is for Testing

Small-to-medium sized businesses (SMBs) can use a simple reference model for their marketing and customer insights efforts. By focusing on what I call the “Three T’s” (targeting, testing, and tracking), your business operations will be continually guided and improved: staying focused on your core target, supported by continuous testing of new products and services, and objectively monitoring your progress over time. Today let’s take a closer look at one of these areas: testing.

Instill a Testing Mindset

 
Testing can involve a multitude of variables, but as an SMB you need to think about two fundamental dimensions when considering a test of any kind: tests that affect your brand, and tests to determine whether your ad spend is working.
 
My colleague Joel Rubinson makes this key distinction between what he calls performance vs. brand marketing. Each supports the other; they are not in opposition to one another. In simple terms, performance marketing is focused primarily on media allocation and the optimization of your ad spending (ROAS), while brand marketing is focused primarily on finding the best ways to communicate the fundamental premise of your brand, such as your brand’s features, benefits, and desired end-user target.
 
It is in this context that I want you to think about two types of testing: what I will call brand concept testing of the brand promise (in various concept formats); and performance testing, the most commonly used form being A/B testing.
 
Brand Concept Testing
 
When we want to communicate the essence of a brand or an idea to a prospective customer, we do it with a stimulus known as a concept. A concept is an idea expressed solely for testing purposes, before it is marketed, so that we can understand consumer reactions to it. We test concepts to reduce the risk of making a mistake when launching an idea, to find the best way to describe an idea, and to learn how best to communicate with our target audience.
 
A concept needs to communicate a compelling end-benefit or a solution to a problem using language that a target consumer can not only understand, but internalize and relate to emotionally. Concepts can differ significantly in their language, layout, image content, and other characteristics. The format of your concept will vary depending on the type of information you need for your brand and the type of test you are planning.
There are concepts written specifically for screening purposes that have very little detail or descriptive information (on purpose). There are concepts that go a little further, with more descriptive information, but still short of being fully developed. And then there are concepts that are close to finished advertising, much like you would see on a landing page or in print media. Here are three types of brand concepts that you should know:
 
Kernels are ideas or benefits presented as statements.
 
Kernels are used in screening tests to efficiently identify winners vs. losers. Kernels are evaluated on just a few measures (e.g., purchase intent, uniqueness, superiority); each respondent sees all kernels, and each kernel is assessed on all measures. Kernels can be attributes, benefits, or distinct ideas. This type of test is also called a “benefit screen”.
 
If kernels are distinct ideas, the analysis focuses on the top performers, profiling them by demographics, geography, or other methods such as attitudinal scoring. If the stimuli are end-benefits or positioning statements, we can use tools to identify underlying themes that might convey an even bigger thematic idea.
 
White card concepts are simply words on a page without high-quality images or fluffy language.
 
White card concepts are typically comprised of 4-8 sentences, factually stating the problem, usage situation, or need; they also state the end-benefit, solution, or final result provided. White card concepts can be existing products, stand-alone ideas, line extensions, or new uses and repositionings. They can include price, flavors, sizes, dosing, brand name, packaging information, and even a basic visual (i.e., a B&W drawing). Because the goal is to test the waters in a bit more detail, some diagnostic questions are included – but the number of questions is limited because we are typically evaluating multiple concept ideas.
 
Full concepts are used to capture more complete reactions, and when fine-tuning your messaging or language is essential prior to a launch or ad spending.
Full concepts often have the benefit of qualitative insights used to develop the language, positioning, tone, or emotion of an idea that showed promise in previous screening work. Full concept testing can be done “monadically” (i.e., each respondent sees one idea, in its own cell of respondents) or in a sequential design.
 
Full concepts are longer, written to include everything that might be conveyed in the time an ad is exposed (e.g., 15 or 30 seconds). They can also be more elaborate in their situational set-up or premise, their use of demonstration cases, or other information.

A/B Testing

You might think that A/B testing always provides a clear choice, but there is usually more to the story than the difference between two variables.
The A vs. B variants you are testing might be affected by a series of previous decisions made long before either A or B was evaluated head-to-head. For example, if A is your current campaign that includes search, PR, and Facebook ads, and B does not leverage the campaign you are now running, your test is already biased against B. Or perhaps your objective was impressions, but one option delivered much higher conversions. Interpreting results can quickly become more complicated than it first appears.
But, for argument’s sake, let’s assume that A and B start from the same point, and neither will be biased by previous advertising or spending-level decisions. If so, A/B testing can be interpreted without bias and executed within any number of environments: CRM systems (HubSpot, Salesforce, etc.), dedicated A/B testing platforms such as Central Control or Unbounce, and even some popular web hosting platforms that offer simple A/B tests.
 
Mechanically, the ad testing company you work with will develop two (or more) landing pages (A, B, C, etc.) and visitors to your site will be randomly redirected to one of those variants. Google Analytics and other web traffic statistics can be utilized to determine which variant is most effective at, for example: lowering bounce rates, achieving conversions, increasing CTRs, or other metrics you choose. A/B test designs can also revolve around content of the landing page, the overall site experience, or changes to ad spend, placement, location, context/environment, and more (see above brand concept formats).
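As one example of how such results might be judged, the sketch below applies a standard two-proportion z-test to conversion counts for variants A and B. The visitor and conversion numbers are made up for illustration; in practice they would come from your analytics reporting.

```python
# A minimal sketch of judging an A/B conversion test with a two-proportion z-test.
# Visitor and conversion counts are illustrative, not real campaign data.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: variant B converts 3.1% of 4,000 visitors vs. A's 2.4% of 4,000.
z, p = two_proportion_z(conv_a=96, n_a=4000, conv_b=124, n_b=4000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p-value suggests the lift is unlikely to be noise
```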
 
Scratching the Surface
 
There are a multitude of different testing and design options for you to consider as an SMB. I have given you a taste, so get out there and test! Working with a marketing insights and research expert is your best guarantee that the type of concept and testing environment is designed, executed, and analyzed effectively. At Surveys & Forecasts, LLC we have worked with many different companies to help them develop optimal brand strategies and concepts, identify which execution best communicates their brand’s proposition, and which marketing program is most effective for their limited ad dollars.

Surveys & Forecasts, LLC