Customer Value vs. Valued

In our age of automation and AI, marketers risk becoming detached from how their business practices, products, and services are perceived by their customers. Marketers who focus solely on customer satisfaction or willingness to recommend have it wrong: these measures do not capture the customer’s relationship-based perspective. They are thin, shallow, “last click” based, and largely transactional.

The customer can be momentarily satisfied with their last transaction (product or service): it delivered a promised benefit. But that is, by definition, a transactional experience, and often devoid of any emotional richness.

In working through these distinctions with our clients, we have found that a company’s customer value (or customer lifetime value, aka “CLV”) can be the polar opposite of whether the customer feels valued.

From a financial or marketing perspective, a company’s approach to customer value is formulaic: extract maximum value from the customer. The metrics take many forms: ROI, ROAS, narrow targeting, and upselling, to name a few. But this “share of requirements” approach (i.e., siphoning off more revenue from the same customer) is entirely transactionally driven. There is no “emotional stickiness” created with the customer from a single transaction, or even from multiple successful transactions.

From the customer’s perspective, feeling valued creates an emotional connection which is deeply internalized and an incredibly powerful sticky magnet for retention.

As a CMO, how would you answer this question from the customer: “Does this company appreciate my business?” If you don’t know, consider path-to-purchase or other in-depth insights work. Test new programs. Test reward structures. In what ways can your company demonstrate to the customer that their business matters – that they themselves are valued?

Even highly satisfied customers can be quickly dislodged by a competitor who can, for example:

  • Offer products at a lower price
  • Offer products with more features/benefits
  • Offer products that appeal to more users
  • Offer products that have more use cases
  • Deliver products in less time
  • Leverage variety-seeking behavior or changing tastes

In email automation and drip campaigns, we have learned that with more personalization comes better response – and a greater likelihood of consumers responding to offers. But everything now comes through as personalized (except for the occasional misfire, like “Dear {FirstName}”), which slowly erodes that competitive edge. And all of it plays out in the transactional space, devoid of motivation or emotion.

In our client work on customer value, we have noticed that measures related to the customer’s perception of his or her worth to the company are far better predictors of customer retention than “shallow” measures of satisfaction or willingness to recommend. One measure, NPS, is especially weak here: in our work, its correlations with purchasing are consistently the lowest. This should come as no surprise, since it is a thin “report card”. Management teams gain some comfort from following the herd of companies that also use NPS, but it provides little actionable insight into the underlying value equation.

We urge CMOs and marketing leaders to think about longer-term results and to focus on the customer’s perception of the relationship, rather than the transactional value-extraction approach embraced by many marketing organizations today.

As Peter Drucker famously noted, the purpose of business is not to make a profit; it is to create a customer.

Relevance – The Missing Ingredient in Digital

A quick Google search of the words “relevance” and “marketing” turned up very few useful or informative hits. I found this surprising. Too much digital communication (email, banner ads, YouTube teasers, etc.) fails to connect with the consumer in a meaningful, relevant way. The failures fall into several categories:

  • Emphasis on noise over meaningful communication (“spray and pray”)
  • Failure to truly understand the decision maker’s pain points
  • Absence of clear product differentiation in communication
  • No linkage between pain points and solutions offered by the marketer
  • Missing emotional connection with the decision maker

Relevance can be a squishy term because what may be relevant to the marketer is not necessarily relevant to the consumer. Too much digital content is devoid of the connection between the product (or service) and real customer needs. Advertising language is often lifted from the marketer’s vocabulary and not from the customer. That’s because no one has bothered to speak to the customer to hear what is relevant. The approach is “Here are the facts – the consumer will obviously get it!”

In digital, we hear about “performance marketing” and “brand marketing”. These are certainly useful constructs in the business of optimizing digital spending, but more fundamentally, we are missing major opportunities to demonstrate our role as “market makers” between customers and sellers. Marketers assume that all features or characteristics are relevant, when in reality too many are not.

Many advertisements on YouTube, for example, don’t connect because the narrator or situation fails to describe the product or link to an end-benefit (even after our attention exceeds the first 5-10 seconds). The same is true for linear or embedded ads on TV or radio. The branding is often held back until the very end. At that point, the advertising has either confused the viewer or wasted their time by failing to connect any relevant branding with the story that was told in the previous 25 seconds. In many cases the storytelling or virtue signaling is more prominent than the brand itself. The consumer must process images, messages, and a story line into something personally relevant that, in turn, must somehow be linked to a brand benefit. Automobile, pharmaceutical, and health care advertising frequently wanders into these dead ends. This approach is a complete waste of ad dollars.

Conversely, some features are immediately relevant because they connect to obvious end-benefits. Amazon’s One-Click checkout or FreshDirect’s automatic re-ordering are great examples. They mimic the in-store checkout experience: I hand my credit card to the register clerk and don’t have to think again. Amazon and FreshDirect don’t have to talk about it. One-Click has multiple end-benefits: I don’t have to fumble for a credit card or enter a delivery address, and my delivery window is already known. In short, I don’t have to think at all – and can get back to the more important work I was doing before I placed my order. Amazon and FreshDirect become directly relevant because they save me time – something of great value to us all.

Industry experts, the Advertising Research Foundation, and others all generally agree that content and creative account for as much as 70% of the impact of advertising. Too many of us are focused on the shiny object of ROI and targeting, when in reality what consumers want is something that is relevant and meaningful and that makes their lives better.

Don’t forget this fundamental tenet of advertising: do your research, uncover unmet needs, and make it relevant!

Reframing Marketing Research Spending as an Option-Creating Investment

If you have been in business long enough, you know that the hard work of research is often seen as optional or discretionary by management teams because it’s hard to calculate its true ROI. But that framing is wrong. Companies should think of research as a way to separate winners from losers, and to move the winners to market as quickly as possible at the lowest possible total cost. So let’s flip it around and consider the value of research using an investment framework.

Business spending and investments fall into three broad buckets:

Infrastructure investments that include the costs of standing the business up and keeping it running at a baseline level. This includes the sunk costs of office space, utilities, computers, distribution centers, manufacturing, and support staff. The business cannot run without them, and the ROI cannot be easily calculated because it’s the paid-in capital needed to get the flywheel spinning.

Variable cost investments include all short-term spending to promote the company’s products. ROI calculations work best here because there is a beginning, a middle, and an end to both the spending and the program being run. The ROI question is: when I spend a dollar, how much do I get back (in the near term)? For example, advertisers and media companies obsessively focus on maximizing ROI through targeting (e.g., MTA), which is impressive but does little to identify promising ideas or address business strategy: it is simply optimizing ad spend.

Option-creating investments are by far the most interesting – and this is where marketing research belongs. An “option-creating investment” lets me put a little money down on the table for the option of owning something later that is worth much more. If the option isn’t going to pay off, I walk away and let it expire. Alternatively, if I have a winner, I am in the money: put $2 million down, get back $20 million, and my ROI is 10x – time to exercise my option. The product then moves over to the ROI category, supported by variable cost investments.

The alternative is, of course, launching a product you did not test and watching it fail spectacularly. Skipping the research works only if you are right every time. If you’re wrong and you’re the CEO, you might be looking for a new job!

Here’s a quick example. Let’s say we have 10 ideas, and each one costs $5K to test. Half of them move on to an R&D product development phase at $75K each, and these all move on to a product evaluation phase at $15K each. Two of these then move forward to test market at $500K each, but only one performs well enough for a regional launch costing $2MM. All in (including the losers), I have spent $3.5MM.

The launched product achieves $10MM in Year 1 sales at a gross margin of 60%, or $6MM. My ROI (including the cost of all my losers) is 171%. If I have two winners, my ROI is even better at 343%. And I am not breaking out the cost of research alone, which is much smaller – I am including all of the costs associated with the launch.
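
If you want to check my math, here’s a minimal sketch of the funnel above. The stage counts and costs come straight from the example; note that I’m using the gross-return-over-spend convention for ROI, as in the text:

```python
# Stage-gate funnel from the example: (ideas entering stage, cost per idea)
stages = [
    (10, 5_000),      # idea screening
    (5, 75_000),      # R&D product development (half move on)
    (5, 15_000),      # product evaluation (all five move on)
    (2, 500_000),     # test market
    (1, 2_000_000),   # regional launch
]

total_spend = sum(n * cost for n, cost in stages)   # $3.5MM, losers included
year1_profit = 10_000_000 * 0.60                    # $10MM sales at 60% gross margin

print(f"Total spend: ${total_spend:,.0f}")                        # $3,500,000
print(f"ROI, one winner: {year1_profit / total_spend:.0%}")       # 171%
# Per the example, a second winner doubles the return on the same spend
print(f"ROI, two winners: {2 * year1_profit / total_spend:.0%}")  # 343%
```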

Option-creating investments can also be made in customer satisfaction research to identify additional ideas to feed into your screening programs. Over time, the money you spend on research testing will be a rounding error compared to the money made by the winners.

Knowing what won’t work is as valuable as knowing what will. Well-designed research will continuously feed successful business performance and yield great ROI!

 

My thanks to Jay Kingley of Centricity for helping to shape this thinking!

Curation: The Next Wave of Marketing

Choice Overload vs. Curation 

Whenever we go to Amazon, or Netflix, or any other site, we are immediately presented with dozens, if not hundreds, of choices. Many of these choices are algorithmically selected by the retailer based on past purchase behavior across the buyer’s digital mesh. Across multiple devices, the company knows our age, sex, and geographic location, and can perhaps make some deterministic assumptions about what we like or don’t like.

But that has yet to translate into something that is presented to the customer as a reasonable choice set. It is no wonder that consumers feel bombarded by choice. They are simply overwhelmed.

We are presented, every day, in multiple contexts, with too many choices. We are presented with too many choices when we read digital publications. We are presented with too many choices when we look at the social media feeds of LinkedIn or Facebook. We are presented with too many choices when looking for a TV show or a movie. Humans are simply not capable of synthesizing hundreds, if not thousands, of choice alternatives when they are presented as a mass (mess?) of individual decisions. Our cognitive capability collapses under the weight of all of the choice decisions that must be made when presented with too much choice.

Companies generally, and advertisers and media assets in particular, have failed to make the leap from choice to curation. This is a huge opportunity for marketers: simplifying the marketing message, making the overall customer experience less burdensome and taxing, and drawing the consumer closer to the value proposition that attracted them in the first place.

As a general rule, consumers do not like other people making decisions for them. A good case in point is grocery shopping. Yet in urban environments, direct delivery makes much more sense because of the many obstacles to grocery shopping in congested cities. A grocery shopper doesn’t have to fight city traffic, load up a car, drive to and from their apartment, or enter and exit parking garages to get the week’s groceries. One of my former clients, FreshDirect, learned early on that their business model wasn’t built solely around the ability to deliver high-quality produce at reasonable prices. The secret ingredient was their drivers. The drivers knew their customers at a personal level and were able to create a curated experience by making sure that certain things were done to the customer’s exact specifications.

Why is curation so hard? E-commerce has not figured this out at all. Not long ago I ordered tires for my road bike from Amazon. On a subsequent login, Amazon suggested other road bike tires I might be interested in — for a product category purchased annually (at best). The lack of synchronization between recommendations, purchase frequency, and my likely need was stunningly dumb. Yes, Amazon is enormous, yes they make lots of money, but they still have not moved the needle on the concept of curation in any meaningful sense. What if Amazon had a viable competitor that really understood curation?

On the flip side, one of my favorite examples of curation is Spotify. One would think that Apple (iTunes) would have figured this out long ago, but Spotify is a wonder. If I want music for concentration, there is a curated playlist. If I want calming classical in the background, there is a curated playlist. Do they get it right all the time? No, but they are pretty close most of the time. And I don’t mind if they miss: an 80% hit rate is pretty good to me. At least there are humans involved in the decision-making process. OK, yes, perhaps also an algorithm, but at least it is a collaborative effort.

Marketers would be well advised to start thinking about how to anticipate the kinds of products and services that customers will be looking for in a world where choice is overly abundant. Curation is one of the ways that marketers can demonstrate that they are tuned in to what customers are seeking, rather than blindly and programmatically jamming messages at them without any thought to the choice overload that they create. Does the marketer want to convey something meaningful, or add more noise? So far it has been the latter.

I hope that more marketing and advertising initiatives will consider the notion that humans are very, very good at intuiting what other humans might like or enjoy. The concept of curation can form a much-needed bridge between the antiseptic world of algorithmic decision-making and true human connection.

Simple Ways to Start Your Analysis

Looking for a quick way to get started understanding the results of your research?

Projects that are survey research-based can be daunting. So can projects that involve the analysis of sales, promotional activity, advertising, or other marketing-related activity.

We live in a world of complexity and big data. Simple guidelines and a keen eye can reveal patterns that you might have otherwise overlooked. Here are a few tips to help you start analyzing your project:

Take a walk through your data.

Scroll through the data and see where values “pop” – that is, where are they high and where are they low? Do your tables flow in the same way that you think about your business? If so, you will begin to see numbers that imply relationships, and visual outliers can become major insights.

Compare those who are interested versus not.

In the research business, we refer to this as “acceptor-rejecter” analysis. If, for example, you have a five-point purchase scale, group the “fours” and “fives” and compare them to the “ones” and “twos”. Throw the neutrals in with the rejecters to compare positives vs. everyone else. Are there large differences? If so, what do you infer?

Mine the gap.

The benefit of acceptors vs. rejecters is that you are comparing the more extreme with the less extreme. The difference between them is valuable in identifying a compelling story. Typically, this is expressed as point gaps: a large gap between acceptors and rejecters points to an insight.

Sort your data.

If you have attribute ratings for various features or benefits, sort them from high to low and compare the acceptors and rejecters. Or compare demographic groups, such as Millennials vs. Baby Boomers. Sort them on ratings or point gaps. Larger point gaps can identify attributes that are choice drivers.

Think linearly.

Array the groups you are interested in analyzing by order of magnitude. A variable like education is easy: college educated vs. not. For income, create low-, moderate-, and high-income groups, and compare across them. The same is true for other continuous variables, like age. Be clever: use medians, not means.

Your eyes will easily see patterns, especially if interest is correlated with your dependent measures.
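
If your data lives in a data frame, the hacks above take only a few lines. Here’s a minimal pandas sketch, assuming a hypothetical survey file with a five-point purchase-intent column and a handful of attribute ratings:

```python
import pandas as pd

# Hypothetical export: one row per respondent
df = pd.read_csv("survey.csv")  # columns: purchase_intent (1-5), attr_price, attr_taste, ...

# Acceptor-rejecter split: top-two box vs. everyone else (neutrals go with rejecters)
df["group"] = df["purchase_intent"].map(lambda x: "acceptor" if x >= 4 else "rejecter")

# Mean attribute ratings by group, then the point gap between groups
attrs = [c for c in df.columns if c.startswith("attr_")]
means = df.groupby("group")[attrs].mean().T
means["gap"] = means["acceptor"] - means["rejecter"]

# Sort high to low: the largest gaps flag likely choice drivers
print(means.sort_values("gap", ascending=False))
```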

These little baby hacks will get you on your way!

The New World of Reaction-Based Marketing

Those of us who have spent some time in research departments tend to think in linear terms. By that I mean that there is a “classic” sequence to follow to understand customer needs, new product opportunities, line extensions, new advertising, etc. For example, we might start with a strategic study to understand buyer needs and behavior, identify segments or personas, follow that with benefit screening or concept testing to assess interest, then move into advertising concepts, and then marry that with the product development track with R&D, address any deficiencies, move into a test market, and then a national introduction.

Not. Those days are long gone. There is no appetite for “research” or “insights” in the classic flow referred to above.

This is most obvious when you look at the revenues of the major research firms, which have grown anemically over the last five years. While it is important to understand customer/buyer needs, research can’t add nearly as much value until it understands the digital landscape. Joel Rubinson and Bill Harvey have written about this eloquently. Those of us who consider ourselves insights experts or researchers must come to grips with the fact that most companies have no interest in spending much time conversing with customers, even when doing so has strategic value.

Most companies feel the need to respond or react to what is happening right now, in real time. In fact, for many companies, response time is the only thing that matters. We are in an age of data lakes, auctions, programmatic and ROI – a world of reaction-based marketing. In this world, a brand demands that for every nickel it spends, a nickel in sales should be generated. Companies are not interested in convincing you that their product is superior, or meets your needs, or fits your lifestyle unless they get paid back. Nor are they interested in the protective benefits of long-term brand building. This is a finance-centric rather than marketing-centric philosophy.

Reaction-based marketing has four primary characteristics that distinguish it from traditional marketing and brand building:

  • ROI is the primary KPI used to measure marketing success.
  • Decisions are engineering-driven, not consumer needs-driven.
  • Decisions are event-based, not strategically- or equity-driven.
  • The cost of failure is less than the cost of testing.

A great example is Amazon, which has single-handedly created these exact marketing conditions. It is a complete ecosystem for testing all elements of the marketing mix (excluding distribution, which it owns). Yet hasn’t Amazon also created the perfect ecosystem for driving brands into commodity status? Consider alkaline batteries: Duracell currently sells 24 AAA batteries for $16, while Amazon sells 48 AAA batteries for $15. I wonder who wins that battle?

As researchers and insights experts, the value we add is the missing link between all of the automated ecosystems competing for the consumer’s attention and how the consumer actually thinks and feels. That market is wide open.

Why are Customer Satisfaction Research Experiences so bad?

Automated surveys are everywhere, triggered by retail and online purchases, a flight you took, or a visit to your dentist. If you are like me, you probably get a wee bit cranky when you experience a badly conceptualized customer feedback interview. We seem to be swimming in an ocean of really, really bad feedback programs. Why is this happening, and why on such a massive scale?

The two primary causes of poor survey quality are CRM auto-generated surveys and poorly executed DIY research. There isn’t much you can do about the industry at large. What you can do is make sure that your own programs are world-class.

The most effective customer satisfaction programs embody thoughtfulness, intelligence, and even a little irreverence. Too often, programs miss the mark and companies can actually damage the relationships they are trying to nurture with poorly executed programs.

In my experience, customer satisfaction programs can implode for multiple reasons:

  • Questions assume that customers can accurately isolate all elements of their purchase experience. Marketing researchers and data scientists spend a great deal of time identifying all of the dimensions that need to be evaluated. But most customers are incapable of remembering more than a few things (even if they were experienced moments before). Expecting hyper-granular customer feedback is unrealistic.
  • The emotional state of the buyer at the time of purchase is ignored. If we know someone’s emotional state before or during a purchase experience, we can better understand how customers are interacting with us as marketers – and do something about it. If I have a surly register clerk, a rude gate agent, or am in a poor frame of mind, my ability to provide useful feedback is compromised. Conversely, if I am always greeted with a smile at Starbucks, do I care if my latte doesn’t have enough whipped cream?
  • The longitudinal (holistic) relationship with your customer is overlooked. In addition to the current wave of feedback, have you bothered to look at the same customer over time? Have you acknowledged the customer relationship in your questions or survey flow? Are you nurturing or alienating? Questions that capture the longitudinal dimension can help business operations improve.
  • Questions are mistimed with product use (e.g., questions are asked before the product is used, or asked when not enough time has passed for the product to be appropriately assessed). If I am asked to complete a survey off of a cash register receipt, but the questions are about products I’ve yet to use, how am I expected to report on my level of satisfaction?
  • Framing questions around the product or buying occasion and not the customer. It’s not about the product, it’s about your customer. Did the customer feel valued? Were they treated with dignity and respect? Staples recently sent me a feedback request labeled my “paper towel experience”. This is awful.
  • Assuming that a purchase is in a category where repeat buying is routine. This includes recommendations for additional purchases that the retailer would like me to consider. My favorite solicitation came after I had purchased a car battery: Amazon proudly suggested other car batteries I might want to buy because, well, you know, you can never have too many car batteries.
  • Focusing on a single score, or assuming that buyers will, on an unsolicited basis, recommend products to others. I have written about this in previous posts. Consultants continue to push the “single score” narrative. It is plainly wrong, yet companies are willing to pay for this sage guidance.

Companies should not feel compelled to collect feedback after every purchase or experience. Doing so saturates the customer with far too many requests and damages not only the company’s own data, but also response rates for everyone else seeking feedback. Companies are best served by collecting data on an Nth-name transaction basis and letting sampling theory do the rest.
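
For the curious, “Nth-name” sampling is trivial to implement. A minimal sketch, assuming a hypothetical list of transactions:

```python
import random

def nth_name_sample(transactions, n):
    """Survey every nth transaction, starting at a random offset
    so that no position in the queue is systematically favored."""
    start = random.randrange(n)
    return transactions[start::n]

# e.g., invite 1 in every 25 buyers rather than everyone
# to_survey = nth_name_sample(todays_transactions, 25)
```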

The compulsion to collect feedback after each and every interaction harms data quality – and weakens the bonds of the buyer-seller relationship.

Customer satisfaction programs deliver important KPIs for assessing performance, and they should be conducted because they make a material difference.

But we must avoid the temptation to mindlessly automate customer satisfaction programs. The goal is to make the customer happy; the way things are going, we are causing more harm than good.
If you’d like, let’s continue the discussion.

How Survey Research Can Drive ROAS

Marketing research – notably strategic survey research – has not always provided useful insights for informing a company’s digital strategy. This is especially true for media targeting and ad spending. Why is this?
 
Well, simply put, insights – even from large-scale survey research studies (e.g., segmentation or brand equity/perception research) – are typically generated in a silo, and researchers tend to forget about linking survey sample sources (i.e., opt-in panels) to available external variables that could be used to make smart media planning decisions. And because there are relatively few well-known use cases to act as examples, survey research is overlooked as a novel way to build out a digital marketing or media plan.
The utility of survey research would be vastly improved if it could provide the granular linkage needed for more precise targeting. Fortunately, this is changing fast.
Many survey research sample sources are now onboarded with pre-existing segmentation codes from 3rd party providers. This opens up entirely new ways to leverage survey research results – and directly shape targeting and media buying decisions in digital.
 
Demographic/psychographic coding is nothing new, and the results of pioneers like Claritas’ PRIZM were mixed. They were marginally useful in categories that had clear geo-demographic skews, such as those linked to income or ZIP code (e.g., autos, high-end appliances, etc.).
 
Be aware that external 3rd-party segmentation codes alone add almost no marginal value. In the case of a well-done segmentation study, distinct segments will likely emerge and 3rd-party codes would be redundant with what can be gleaned from the study itself. But… hang in there with me for another minute.
What does add significant value is how we go about piping the insights from the survey research side to the structured coding on the other side where media decisions must be made.
External 3rd-party codes connect us to the digital world. Once the linkage is built, digital targeting and ad spend opportunities become clearer. For example, Neustar’s E1 segmentation (172 segments, gasp) gives survey research new ways to add value in the digital world. Survey respondents appended with segment codes can be profiled on key survey questions (and vice versa). For example, do we really want to target those highly promotion-sensitive people in segments 19, 72, and 113? This helps researchers identify optimal targets for digital and appropriate levels of spend. Our survey guidance helps both in targeting (e.g., LiveRamp) and in allocating digital inventory (e.g., Trade Desk).
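Mechanically, the “piping” is a simple append-and-profile exercise. A minimal sketch, assuming hypothetical file and column names supplied by your sample provider:

```python
import pandas as pd

# Hypothetical inputs: survey responses plus onboarded 3rd-party segment codes,
# joined on a respondent ID supplied by the sample provider
survey = pd.read_csv("survey_responses.csv")  # respondent_id, promo_sensitivity, ...
codes = pd.read_csv("segment_codes.csv")      # respondent_id, segment (e.g., 1-172)

merged = survey.merge(codes, on="respondent_id")

# Profile each segment on a key survey measure: which segments
# over-index on promotion sensitivity (to target -- or to suppress)?
profile = merged.groupby("segment")["promo_sensitivity"].mean()
print(profile.sort_values(ascending=False).head(10))
```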
And media choices needn’t be entirely digital – they simply need to be addressable.
Media choices also include addressable TV, and they can address context. Another coding scheme, from RMT, assigns motivational profiles to both addressable TV content and individual respondents. Survey research can then combine respondent-level data with motivational profiles and use that to identify TV shows that align with viewers’ belief systems and personal motivational outlook. Pretty cool.
 
Importantly, survey research is really the only tool that can effectively assess the issue of creative, which most experts say accounts for up to 80% of the impact of media spend.
 
For me, survey research can clearly move up in prominence into a more useful advisory role. It becomes a very useful tool in the marketer’s toolkit to drive brand growth and improve efficiency/performance/ROAS. One of the thought leaders in our industry, Joel Rubinson, has more to say here.
 
CPG/FMCG has had these tools available for a while now – notably retail scanner data, frequent shopper data, first party data, and customized purchase panels. They provide a steady data laboratory for CPG/FMCG experimentation and insights.
 
But I think the more interesting and exciting opportunity is in non-CPG/FMCG, especially for categories that do not have retail scanning, or 3rd-party verified POS/sales reporting.
Survey-based approaches can absolutely be used to shape digital messaging, targeting, and media planning/buying for non-CPG if done correctly.
So, whew – crazy, right? Here are some thoughts and consideration factors as you begin to explore using survey research for digital targeting and media spending.
  • If you have only first-party data, onboard external segment codes if you can. This is the most immediate way to ease into the use of targeting methods, and you can begin to experiment and test. Don’t waste your time on A/B testing, which is great for refinement. Focus instead on targeting – that’s where the gold is!
  • Understand demographic and lifestyle identifiers (e.g., Neustar’s E1 as an example) that may have already been on-boarded by your retail scanning data provider or survey research sample providers.

  • In non-CPG/FMCG, look for opt-in sample source providers with segments already on-boarded, such as Dynata or Numerator.

  • If you have a very small brand, it makes sense to find those few segments allied with your brand. Your small brand may have a large share in key segments: identify segments with higher shares and target them. This can generate vast improvements in ROAS.

  • But don’t over-segment! Some segmentation approaches are absurdly huge, such as the popular Neustar E1 scheme (172 segments!). If you can, condense and work with a more modest approach, perhaps 20 or 30, as a first step.

  • For non-CPG/FMCG, work with a reputable researcher and conduct well-designed research with very large sample sizes to get adequate segment representation (remember, the goal is to link backwards).

  • Utilize demonstrably proven survey questions, such as constant sum, replacement vs. addition, share by use occasion, and so on to assess volumetrics. Note that we approach non-CPG/FMCG in the same way as CPG but replace scanning with survey data.

  • Take your high-return segments and model them onto the larger universe of individuals, aka “look-alikes” (see the sketch after this list).

  • Utilize the improvement in ROAS to reinvest in other “look-alike” segments, or in other geographies.
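
On that look-alike point, here is a minimal modeling sketch. It assumes a hypothetical training file of survey respondents flagged 1 if they fall into your high-return segments, plus a universe file carrying the same demographic fields; a simple classifier is just one way to get started:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training file: respondents flagged 1 if in a high-return segment
train = pd.read_csv("respondents.csv")
features = ["age", "income", "urban", "hh_size"]  # fields also known for the universe

model = LogisticRegression(max_iter=1000)
model.fit(train[features], train["high_return_segment"])

# Score the addressable universe and keep the closest look-alikes for the media buy
universe = pd.read_csv("addressable_universe.csv")
universe["lookalike_score"] = model.predict_proba(universe[features])[:, 1]
targets = universe.nlargest(100_000, "lookalike_score")
```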
 
These are new and exciting opportunities! I believe that they will lead to a resurgence of survey research as a new method for optimizing digital targeting and media planning.
Get in touch to discuss further.
The ROI of Customer Satisfaction

I have seen many CSAT programs change a company’s culture by quantifying problems and isolating their causes, boosting retention and profitability, and moving the organization from reactive to proactive.
Conversely, some companies don’t think that customer satisfaction (or “CSAT”) programs can add value because “we know our customers”. This comment conveys a misunderstanding of what a well-designed CSAT program is, and the value that it can bring to an organization.
 
In the short term, maintaining the “status quo” is a cheaper alternative, but it avoids the broader discussion about total (opportunity) costs. How much revenue are you leaving on the table by assuming that you know what the customer wants?
Here’s a basic example. Let’s say you run a $50MM company. What would you be willing to spend to prevent 10% of your customers from leaving?
 
Preventing those defections preserves $5MM in revenue; at a 30% gross margin, that’s $1.5MM in profit. A CSAT program that costs $50K a year has an ROI of 30x! Now do you get it?
 
Even if we are conservative, a 5% reduction in defection produces $750K in savings and an ROI of 15x – still impressive! By improving problem detection and alerting key people about problems in real time, thus cutting response time, we help mitigate customer defections and avoid a significant amount of lost business.
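The back-of-envelope math generalizes to any revenue base, margin, and defection rate. A minimal sketch:

```python
def csat_roi(revenue, gross_margin, defection_prevented, program_cost):
    """Profit preserved by preventing defection, and its multiple of program cost."""
    saved_profit = revenue * defection_prevented * gross_margin
    return saved_profit, saved_profit / program_cost

# $50MM company, 30% gross margin, $50K/year CSAT program
print(csat_roi(50_000_000, 0.30, 0.10, 50_000))  # (1,500,000.0, 30.0) -> 30x
print(csat_roi(50_000_000, 0.30, 0.05, 50_000))  # (750,000.0, 15.0)  -> 15x
```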
 
What are the fundamental problems with what I call a “status quo” approach? Here are a few:
  • Markets are changing. Your competitors are not standing still; they will continue to innovate, merge with others, or be acquired. Markets themselves morph from regional to national to global, and regulatory frameworks change.
  • Customers are changing. New customers replace old, and this year’s buyers are demographically, attitudinally, and behaviorally different from last year’s – who will in turn differ from next year’s. How are you planning for that?
  • Expectations are changing. Customers are constantly evaluating their choice options within categories and making both rational and emotional buying decisions.
Maintaining the “status quo” is NOT a strategy: it is a reactive footing that forces you to play defense. In a status quo culture, you are not actively problem-solving on behalf of customers, nor are you focused on meeting their future needs!
Consider a couple of scenarios in which a CSAT program could add value:
If you run a smaller company, most of the company’s employees (including management) interact with customers every day. The company also gets feedback, albeit subjective or anecdotal, every day. Corrections to sales or production processes can be made rapidly, and the customer is presumably happy. But even in smaller companies, there is limited institutional memory (i.e., no standard way to handle a problem or exception). One solution may reside with Fred in finance, another with Pat in production, or someone else entirely. There are no benchmarks to compare performance (other than sales). It is likely that the same problem will surface repeatedly because line staff did not communicate with each other, or that it will appear in another form in another department (e.g., a parts shortage caused by an inventory error). Unless management is alerted, larger “aha” discoveries are missed. This can cost hundreds of thousands of dollars in lost revenue.
If you run a large company or a major division, the gulf between customer feedback and management grows wider. News about problems may not reach management because the problems are viewed as unremarkable. And a company doesn’t have to be huge for these dynamics to occur: in our experience, by the time a company reaches just 100 employees, it behaves much like a multinational enterprise. In a small company, it is everyone’s responsibility to fix a problem; in a large organization, it becomes someone else’s responsibility. The opportunity loss grows because there is no system in place to alert key staff or a specific department. As a result, millions of dollars in revenue can be lost.
A well-designed CSAT program that alerts the appropriate people or department can add significant value. At Surveys & Forecasts, LLC we offer basic CSAT programs (with key staff alerts, dashboards, and an annual review) for just $1,000 a month.
Get in touch to learn more! We’d love to work with you and help you improve satisfaction, retention, and save your organization some significant money.
This “T” is for Testing

Small-to-medium sized businesses (SMBs) can use a simple reference model for their marketing and customer insights efforts. By focusing on what I call the “Three T’s” (targeting, testing, and tracking), your business operations will be continually guided and improved: staying focused on your core target, continuously testing new products and services, and objectively monitoring your progress over time. Today let’s take a closer look at one of these areas: testing.

Instill a Testing Mindset

 
Testing can involve a multitude of variables, but as an SMB you need to think about two fundamental dimensions when considering a test of any kind: tests that affect your brand, and tests to determine whether your ad spend is working.
 
My colleague Joel Rubinson makes this key distinction between what he calls performance vs. brand marketing. Each supports the other; they are not in opposition to one another. In simple terms, performance marketing is focused primarily on media allocation and the optimization of your ad spending (ROAS), while brand marketing is focused primarily on finding the best ways to communicate the fundamental premise of your brand, such as your brand’s features, benefits, and desired end-user target.
 
It is in this context that I want you to think about two types of testing: what I will call brand concept testing of the brand promise (in various concept formats); and performance testing, the most commonly used form being A/B testing.
 
Brand Concept Testing
 
When we want to communicate the essence of a brand or an idea to a prospective customer, we do it with a stimulus known as a concept. A concept is an idea expressed solely for testing purposes, before it is marketed, so that we can understand consumer reactions to it. We test concepts to reduce the risk of making a mistake when launching an idea, to find the best way to describe the idea, and to learn how best to communicate with our target audience.
 
A concept needs to communicate a compelling end-benefit or a solution to a problem using language that a target consumer can not only understand, but internalize and relate to emotionally. Concepts can differ significantly in their language, layout, image content, and other characteristics. The format of your concept will vary depending on the type of information you need for your brand and the type of test you are planning.
There are concepts written specifically for screening purposes that have very little detail or descriptive information (on purpose). There are concepts that go a little further, with more descriptive information, but still short of being fully developed. And then there are concepts that are close to finished advertising, much like you would see on a landing page or in print media. Here are three types of brand concepts that you should know:
 
Kernels are ideas or benefits presented as statements.
 
Kernels are used in screening tests to efficiently identify winners vs. losers. Kernels are evaluated on just a few measures (e.g., purchase intent, uniqueness, superiority), and each respondent sees all kernels, and each is assessed on all measures. Kernels can be attributes, benefits, or distinct ideas. This type of test is also called a “benefit screen”.
 
If kernels are distinct ideas, the analysis focuses on the top performers and profiling them on demographics, geographical areas, or other methods such as attitudinal scoring. If the stimuli are end-benefits or positioning statements, we can use tools to identify underlying themes that might convey an even bigger thematic idea.
 
White card concepts are simply words on a page without high-quality images or fluffy language.
 
White card concepts are typically composed of 4-8 sentences, factually stating the problem, usage situation, or need, along with the end-benefit, solution, or final result provided. White card concepts can be existing products, stand-alone ideas, line extensions, or new uses and repositionings. They can include price, flavors, sizes, dosing, brand name, packaging information, and even a basic visual (e.g., a B&W drawing). Because the goal is to test the waters in a bit more detail, some diagnostic questions are included – but the number of questions is limited because we are typically evaluating multiple concept ideas.
 
Full concepts are used to capture more complete reactions, and when fine-tuning your messaging or language is essential prior to a launch or ad spending.
Full concepts often have the benefit of qualitative insights used to develop the language, positioning, tone, or emotion of an idea that showed promise in previous screening work. Full concept testing can be done “monadically” (i.e., the respondent sees one idea at a time in its own cell of respondents) or in a sequential design.
 
Full concepts are longer, written to include all that might be conveyed in the time an ad is exposed (e.g., 15- or 30-seconds). They can also be more elaborate in their situational set-up or premise, use of demonstration cases, or other info.

A/B Testing

You might think that A/B testing always provides a clear choice, but there is usually more to the story than the difference between two variables.
The A vs. B variants you are testing might be affected by a series of previous decisions made long before either A or B was evaluated head-to-head. For example, if A is your current campaign that includes search, PR, and Facebook ads, and B does not leverage the campaign you are now running, your test is already biased against B. Or perhaps your objective was impressions, but one option delivered much higher conversions. Interpreting results can quickly become more complicated than it first appears.
But, for argument’s sake, let’s assume that A and B start from the same point, and neither is biased by previous advertising or spending decisions. If so, A/B testing can be interpreted without bias and executed within any number of environments: CRM systems (HubSpot, Salesforce, etc.), dedicated A/B testing platforms such as Central Control or Unbounce, and even some popular web hosting platforms that offer simple A/B tests.
 
Mechanically, the ad testing company you work with will develop two (or more) landing pages (A, B, C, etc.) and visitors to your site will be randomly redirected to one of those variants. Google Analytics and other web traffic statistics can be utilized to determine which variant is most effective at, for example: lowering bounce rates, achieving conversions, increasing CTRs, or other metrics you choose. A/B test designs can also revolve around content of the landing page, the overall site experience, or changes to ad spend, placement, location, context/environment, and more (see above brand concept formats).
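One caution before declaring a winner: a raw difference in conversion rates can be noise. Here is a minimal sketch of a standard two-proportion z-test (the visitor and conversion counts are made up for illustration):

```python
from math import sqrt
from scipy.stats import norm

def ab_conversion_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate really different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided
    return p_a, p_b, z, p_value

# e.g., page A: 120 conversions on 4,000 visits; page B: 150 on 4,100
print(ab_conversion_test(120, 4000, 150, 4100))
```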
 
Scratching the Surface
 
There are a multitude of different testing and design options for you to consider as an SMB. I have given you a taste, so get out there and test! Working with a marketing insights and research expert is your best guarantee that the type of concept and testing environment is designed, executed, and analyzed effectively. At Surveys & Forecasts, LLC we have worked with many different companies to help them develop optimal brand strategies and concepts, identify which execution best communicates their brand’s proposition, and which marketing program is most effective for their limited ad dollars.

Surveys & Forecasts, LLC