Customer Value vs. Valued

Customer Valued?

In our age of automation and AI, marketers risk becoming detached from how their business practices, products, and services are perceived by their customers. Marketers who focus solely on customer satisfaction or willingness to recommend have it wrong: these measures do not capture the customer’s relationship-based perspective. Satisfaction and willingness to recommend are thin, shallow, “last click” measures that are largely transactionally oriented.

The customer can be momentarily satisfied with their last transaction (product or service): it delivered a promised benefit. But that is, by definition, a transactional experience, and often devoid of any emotional richness.

In working through these distinctions with our clients, we have found that customer value (or customer lifetime value, aka “CLV”) can be the polar opposite of whether the customer feels valued.

From a financial or marketing perspective, a company’s approach to customer value is formulaic: extract maximum value from the customer. The metrics take many forms: ROI, ROAS, narrow targeting, and upselling, to name a few. But this “share of requirements” approach (i.e., siphoning off more revenue from the same customer) is entirely transactionally driven. There is no “emotional stickiness” created with the customer from a single transaction, or even from multiple successful transactions.

From the customer’s perspective, feeling valued creates an emotional connection which is deeply internalized and an incredibly powerful sticky magnet for retention.

As a CMO, how would you answer this question from the customer: “Does this company appreciate my business?” If you don’t know, consider path-to-purchase or other in-depth insights work. Test new programs. Test reward structures. In what ways can your company demonstrate to the customer that their business matters – that they themselves are valued?

Even highly satisfied customers can be quickly dislodged by a competitor who can, for example:

  • Offer products at a lower price
  • Offer products with more features/benefits
  • Offer products that appeal to more users
  • Offer products that have more use cases
  • Deliver products in less time
  • Leverage variety-seeking behavior or changing tastes

In email automation and drip campaigns, we have learned that with more personalization comes better response – and a greater likelihood of consumers responding to offers. But everything now comes through as personalized (except for the occasional misfire, like “Dear {FirstName}”), which slowly erodes that competitive edge. But all of this plays in the transactional space, devoid of motivation or emotion.

In our client work on customer value, we have noticed that measures related to the customer’s perception of his or her worth to the company are a far better predictor of customer retention than “shallow” measures of satisfaction or willingness to recommend. One measure (NPS), in particular, is especially weak in this area: its correlations with purchasing are always the lowest. This really should come as no surprise, since it is a thin ‘report card’. Management teams gain some comfort by following the herd that also uses NPS, but it provides little actionable insight into the underlying value equation.

We urge CMOs and marketing leaders to think about longer-term results and to focus on the customer’s perception of the relationship, rather than the transactional value-extraction approach embraced by many marketing organizations today.

As Peter Drucker famously noted, the purpose of business is not to make a profit; it is to create a customer.

Reframing Marketing Research Spending as an Option-Creating Investment

If you have been in business long enough, you know that the hard work of research is sometimes seen as optional or discretionary by some management teams because it’s hard to calculate the true ROI of research. But we’re thinking about this all wrong. Companies should be thinking about research as a way to separate winners from losers, and move the winners to market as quickly as possible at the lowest possible total cost. So let’s flip it around and consider the value of research using an investment framework.

Business spending and investments fall into three broad buckets, which are:

Infrastructure investments that include the costs of standing the business up and keeping it running at a baseline level. This includes the sunk costs of office space, utilities, computers, distribution centers, manufacturing, and support staff. The business cannot run without them, and the ROI cannot be easily calculated because it’s the paid-in capital needed to get the flywheel spinning.

Variable cost investments include all short-term spending to promote the company’s products. ROI calculations work best in these situations because there is a beginning, a middle, and an end to the spending and the program being run. The ROI question is: when I spend a dollar, how much will I get back (in the near term)? For example, advertisers and media companies obsessively focus on maximizing ROI through targeting (i.e., multi-touch attribution, or MTA), which is effective at optimizing ad spend but does little to identify promising ideas or address business strategy.

Option creating investments are by far the most interesting! Marketing research spending falls squarely into this bucket. An “option creating investment” lets me put a little money down on the table for the option of owning something later that is worth much more. If I spend money on an “option” that is not going to pay off, I walk away and let the option expire. Alternatively, if I have a winner, I am in the money. If I put $2 million down and get back $20 million, my ROI is 10x and it’s time to exercise my option. The product then moves over to the ROI category, supported by variable cost investments.

The alternative, of course, is launching a product that you did not test and watching it fail spectacularly. Skipping research works only if you happen to be right. But if you’re wrong and you’re the CEO, you might be looking for a new job!

Here’s a quick example. Let’s say we have 10 ideas, and each one of them costs $5K to test. Half of them move on to an R&D product development phase at $75K each, and these all move on to a product evaluation phase at $15K each. Two of these then move forward to test market at $500K, but only one of them performs well enough for a regional launch costing $2MM. All in (including the losers) I have spent $3.5MM.

The launched product achieves $10MM in Year 1 sales at a gross margin of 60%, or $6MM. My ROI (including the cost of all my losers) is 171%. If I have two winners, my ROI is even better at 343%. And I am not breaking out the cost of research alone, which is much smaller – I am including all of the costs associated with the launch.
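The funnel above can be sketched as a quick calculation. This is only an illustration: the stage names, counts, and costs simply restate the example’s numbers, assuming a single winner reaching regional launch.

```python
# Hypothetical staged-funnel ROI sketch using the article's illustrative numbers.
# Each stage: (name, number of candidates entering, cost per candidate).
stages = [
    ("idea screening",      10, 5_000),
    ("R&D development",      5, 75_000),
    ("product evaluation",   5, 15_000),
    ("test market",          2, 500_000),
    ("regional launch",      1, 2_000_000),
]

# Total spend includes the losers at every stage.
total_spend = sum(count * cost for _, count, cost in stages)

year1_sales = 10_000_000
gross_margin = 0.60
gross_profit = year1_sales * gross_margin  # $6MM on the single winner

roi = gross_profit / total_spend  # return per dollar of total (winners + losers) spend
print(f"Total spend: ${total_spend:,}")       # $3,500,000
print(f"Gross profit: ${gross_profit:,.0f}")  # $6,000,000
print(f"ROI: {roi:.0%}")                      # 171%
```

Framing the spend this way makes the screening costs look like what they are: small option premiums relative to the payoff of a winner.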

Option creating investments can also be made in customer satisfaction research to identify additional ideas to insert into your screening programs. Over the course of time, the amount of money you may spend in research testing will be rounding error compared to the amount of money made by the winners.

When research is done effectively, knowing what won’t work is as valuable as knowing what will. Well-designed research will continuously feed successful business performance and yield great ROI!

 

My thanks to Jay Kingley of Centricity for helping to shape this thinking!

Simple Ways to Start Your Analysis

Looking for a quick way to get started understanding the results of your research?

Projects that are survey research-based can be daunting. So can projects that involve the analysis of sales, promotional activity, advertising, or other marketing-related activity.

We live in a world of complexity and big data. Simple guidelines and a keen eye can reveal patterns that you might have otherwise overlooked. Here are a few tips to help you start analyzing your project:

Take a walk through your data.

Scroll through the data and see where values “pop” – that is, where are they high and where are they low? Do your tables flow in the same way that you think about your business? If so, you will begin to see numbers that imply relationships. As a result, visual outliers can become major insights.

Compare those who are interested versus not.

In the research business, we refer to this as “acceptor-rejecter” analysis. If, for example, you have a five-point purchase scale, group the “fours” and “fives” and compare them to the “ones” and “twos”. Throw the neutrals in with the rejecters to compare positives vs. everyone else. Are there large differences? If so, what do you infer?

Mine the gap.

The benefit of comparing acceptors vs. rejecters is that you are contrasting more vs. less extreme responses. The difference between them is valuable in identifying a compelling story. Typically, this is expressed in the form of point gaps. A large gap between acceptors and rejecters points to an insight.

Sort your data.

If you have attributes of various features or benefits, sort them from high to low and compare the acceptors and rejecters. Or compare demographic groups, such as Millennials vs. Baby Boomers. Sort them on ratings or point gaps. Larger point gaps can identify attributes that are choice drivers.
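As a minimal sketch of the last three tips, here is the acceptor-rejecter grouping and point-gap sort in Python. The dataset, column names, and ratings are entirely hypothetical, invented for illustration:

```python
# Acceptor-rejecter point-gap analysis on a tiny hypothetical survey dataset.
import pandas as pd

df = pd.DataFrame({
    "purchase_intent": [5, 4, 2, 1, 5, 3, 4, 1, 2, 5],  # 5-point scale
    "easy_to_use":     [9, 8, 4, 3, 9, 5, 7, 2, 4, 8],  # attribute ratings
    "good_value":      [8, 7, 5, 4, 9, 6, 8, 3, 5, 9],
})

# Acceptors = top-two box ("fours" and "fives"); neutrals go in with rejecters.
df["group"] = df["purchase_intent"].map(lambda x: "acceptor" if x >= 4 else "rejecter")

# Mean attribute ratings per group, then sort the point gaps high to low.
means = df.groupby("group")[["easy_to_use", "good_value"]].mean()
gaps = (means.loc["acceptor"] - means.loc["rejecter"]).sort_values(ascending=False)
print(gaps)  # the largest gaps suggest attributes that drive choice
```

The same few lines scale to dozens of attributes: sort the gap column and read the top of the list.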

Think linearly.

Array the groups you are interested in analyzing by order of magnitude. For example, a variable like education is easy: college educated vs. not. For income, create low, moderate, and high income groups, and compare across them. The same is true for other continuous variables, like age. Be clever: split at the median rather than the mean.

Your eyes will easily see patterns, especially if interest is correlated with your dependent measures.

These little baby hacks will get you on your way!

The New World of Reaction-Based Marketing

Those of us who have spent some time in research departments tend to think in linear terms. By that I mean that there is a “classic” sequence to follow to understand customer needs, new product opportunities, line extensions, new advertising, etc. For example, we might start with a strategic study to understand buyer needs and behavior, identify segments or personas, follow that with benefit screening or concept testing to assess interest, then move into advertising concepts, and then marry that with the product development track with R&D, address any deficiencies, move into a test market, and then a national introduction.

Not. Those days are long gone. There is no appetite for “research” or “insights” in the classic flow referred to above.

This is most obvious when you look at the revenue of major research firms, which has grown anemically over the last five years. While it is important to understand customer/buyer needs, research can’t add nearly as much value until it understands the digital landscape. Joel Rubinson and Bill Harvey have written about this eloquently. Those of us who consider ourselves insights experts or researchers must come to grips with the fact that most companies have no interest in spending much time conversing with customers, even when it has strategic value.

Most companies feel the need to respond or react to what is happening right now, in real time. In fact, for many companies, response time is the only thing that matters. We are in an age of data lakes, auctions, programmatic and ROI – a world of reaction-based marketing. In this world, a brand demands that for every nickel it spends, a nickel in sales should be generated. Companies are not interested in convincing you that their product is superior, or meets your needs, or fits your lifestyle unless they get paid back. Nor are they interested in the protective benefits of long-term brand building. This is a finance-centric rather than marketing-centric philosophy.

Reaction-based marketing has four primary characteristics which distinguish it from traditional marketing and brand building:

  • ROI is the primary KPI used to measure marketing success.
  • Decisions are engineering-driven, not consumer needs-driven.
  • Decisions are event-based, not strategically- or equity-driven.
  • The cost of failure is less than the cost of testing.

A great example is Amazon, which has single-handedly created these exact marketing conditions. It is a complete ecosystem for testing all elements of the marketing mix (excluding distribution, which it owns). Yet hasn’t Amazon also created the perfect ecosystem for driving brands into commodity status? Consider alkaline batteries: Duracell currently sells 24 AAA batteries for $16, while Amazon sells 48 AAA batteries for $15. I wonder who wins that battle?

As researchers and insights experts, the value we add lies in the missing link between the automated ecosystems competing for the consumer’s attention and how the consumer actually thinks and feels. That market is wide open.

The ROI of Customer Satisfaction

I have seen many CSAT programs change a company’s culture by quantifying problems and isolating their causes, thus boosting retention and profitability, and moving from reactive to proactive.
Conversely, some companies don’t think that customer satisfaction (or “CSAT”) programs can add value because “we know our customers”. This comment conveys a misunderstanding of what a well-designed CSAT program is, and the value that it can bring to an organization.
 
In the short term, maintaining the “status quo” is a cheaper alternative, but it avoids the broader discussion about total (opportunity) costs. How much revenue are you leaving on the table by assuming that you know what the customer wants?
Here’s a basic example. Let’s say you run a $50MM company. What would you be willing to spend to prevent 10% of your customers from leaving?
 
At a 30% gross margin, you saved $1.5MM in profit. A CSAT program that costs $50K a year has an ROI of 30x! Now do you get it?
 
Even if we are conservative, a 5% reduction in defection produces $750K in savings and an ROI of 15x – still impressive! By improving problem detection and alerting key people in real time, thus cutting response time, a CSAT program helps mitigate customer defections and avoid a significant amount of lost business.
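The arithmetic behind these figures can be sketched in a few lines; the revenue, margin, and program-cost numbers simply restate the example above:

```python
# Back-of-envelope retention ROI, using the article's illustrative figures.
revenue = 50_000_000     # annual revenue of the example company
gross_margin = 0.30      # gross margin
program_cost = 50_000    # annual cost of the CSAT program

def retention_roi(defection_prevented: float) -> float:
    """Profit saved per dollar of program cost, for a given share of
    revenue kept from defecting customers."""
    revenue_saved = revenue * defection_prevented
    profit_saved = revenue_saved * gross_margin
    return profit_saved / program_cost

print(retention_roi(0.10))  # preventing 10% defection -> ~30x
print(retention_roi(0.05))  # preventing 5% defection  -> ~15x
```

Plug in your own revenue and margin to see what a defection point is worth to your business.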
 
What are the fundamental problems with what I call a “status quo” approach? Here are a few:
  • Markets are changing. Your competitors are not standing still; they will continue to innovate, merge with others, or be acquired. Markets themselves morph from regional to national to global, and regulatory frameworks change.
  • Customers are changing. New customers replace old, and this year’s buyers are demographically, attitudinally, and behaviorally different from last year’s buyers, who in turn will differ from next year’s. How are you planning for that?
  • Expectations are changing. Customers are constantly evaluating their choice options within categories and making both rational and emotional buying decisions.
Maintaining the “status quo” is NOT a strategy: it is a reactive footing that forces you to play defense. In a status quo culture, you are not actively problem-solving on behalf of customers, nor are you focused on meeting their future needs!
Consider a couple of scenarios in which a CSAT program could add value:
If you run a smaller company, most of the company’s employees (including management) are interacting with customers every day. The company also gets feedback, albeit subjectively or anecdotally, every day. Corrections to sales or production processes can be made rapidly, and the customer is presumably happy. But even in smaller companies, there is limited institutional memory (i.e., a standard way to handle a problem or exception). One solution may reside with Fred in finance, another with Pat in production, or someone else entirely. There are no benchmarks to compare performance (other than sales). It is likely that the same problem will surface repeatedly because line staff did not communicate with each other, or it might appear in another form in another department (i.e., a parts shortage caused by an inventory error). Unless management is alerted, larger “aha” discoveries are missed. This can cost hundreds of thousands of dollars in lost revenue.
If you run a large company or a major division, the gulf between customer feedback and management grows wider. News about problems may not reach management because they are viewed as unremarkable. And a company doesn’t have to be huge for these dynamics to occur. In our experience, by the time a company reaches just 100 employees, it behaves much like a multi-national enterprise. In a small company, it is everyone’s responsibility to fix a problem; in a large organization, it becomes someone else’s responsibility. The opportunity loss becomes even greater because there is no system in place to alert key staff or a specific department. As a result, millions of dollars in revenue can be lost.
A well-designed CSAT program that alerts the appropriate people or department can add significant value. At Surveys & Forecasts, LLC we offer basic CSAT programs (with key staff alerts, dashboards, and an annual review) for just $1,000 a month.
Get in touch to learn more! We’d love to work with you and help you improve satisfaction, retention, and save your organization some significant money.
This “T” is for Testing

Small-to-medium sized businesses (SMBs) can use a simple reference model for their marketing and customer insights efforts. By focusing on what I call the “Three T’s” (targeting, testing, and tracking), your business operations will be continually guided and improved: by staying focused on your core target, supported by continuous testing of new products and services, and by objectively monitoring your progress over time. Today let’s take a closer look at one of these areas: testing.

Instill a Testing Mindset

 
Testing can involve a multitude of variables, but as an SMB you need to think about two fundamental dimensions when considering a test of any kind: tests that affect your brand, and tests to determine whether your ad spend is working.
 
My colleague Joel Rubinson makes this key distinction between what he calls performance vs. brand marketing. Each supports the other; they are not in opposition to one another. In simple terms, performance marketing is focused primarily on media allocation and the optimization of your ad spending (ROAS), while brand marketing is focused primarily on finding the best ways to communicate the fundamental premise of your brand, such as your brand’s features, benefits, and desired end-user target.
 
It is in this context that I want you to think about two types of testing: what I will call brand concept testing of the brand promise (in various concept formats); and performance testing, the most commonly used form being A/B testing.
 
Brand Concept Testing
 
When we want to communicate the essence of a brand or an idea to a prospective customer, we do it with a stimulus known as a concept. A concept is an idea expressed solely for testing purposes, before it is marketed, so that we can understand consumer reactions to it. We test concepts to reduce the risk of making a mistake when launching an idea, to find the best way to describe the idea, and to learn how best to communicate with our target audience.
 
A concept needs to communicate a compelling end-benefit or a solution to a problem using language that a target consumer can not only understand, but internalize and relate to emotionally. Concepts can differ significantly in their language, layout, image content, and other characteristics. The format of your concept will vary depending on the type of information you need for your brand and the type of test you are planning.
There are concepts written specifically for screening purposes that have very little detail or descriptive information (on purpose). There are concepts that go a little further, with more descriptive information, but still short of being fully developed. And then there are concepts that are close to finished advertising, much like you would see on a landing page or in print media. Here are three types of brand concepts that you should know:
 
Kernels are ideas or benefits presented as statements.
 
Kernels are used in screening tests to efficiently identify winners vs. losers. Kernels are evaluated on just a few measures (e.g., purchase intent, uniqueness, superiority), and each respondent sees all kernels, and each is assessed on all measures. Kernels can be attributes, benefits, or distinct ideas. This type of test is also called a “benefit screen”.
 
If kernels are distinct ideas, the analysis focuses on the top performers and profiling them on demographics, geographical areas, or other methods such as attitudinal scoring. If the stimuli are end-benefits or positioning statements, we can use tools to identify underlying themes that might convey an even bigger thematic idea.
 
White card concepts are simply words on a page without high-quality images or fluffy language.
 
White card concepts are typically composed of 4-8 sentences, factually stating the problem, usage situation, or need; they also state the end-benefit, solution, or final result provided. White card concepts can be existing products, stand-alone ideas, line extensions, or new uses and repositionings. They can include price, flavors, sizes, dosing, brand name, packaging information, and even a basic visual (i.e., a B&W drawing). Because the goal is to test the waters in a bit more detail, some diagnostic questions are included – but the number of questions is limited because we are typically evaluating multiple concept ideas.
 
Full concepts are used to capture more complete reactions, and when fine-tuning your messaging or language is essential prior to a launch or ad spending.
Full concepts often have the benefit of qualitative insights to develop the language, positioning, tone, or emotion of an idea that showed promise in previous screening work. Full concept testing can be done “monadically” (i.e., the respondent sees one idea at a time in its own cell of respondents), or in a sequential design.
 
Full concepts are longer, written to include all that might be conveyed in the time an ad is exposed (e.g., 15 or 30 seconds). They can also be more elaborate in their situational set-up or premise, use of demonstration cases, or other information.

A/B Testing

You might think that A/B testing always provides a clear choice, but there is usually more to the story than the difference between two variables.
The A vs. B variants you are testing might be affected by a series of previous decisions made long before either A or B were evaluated head-to-head. For example, if A is your current campaign that includes search, PR, and Facebook ads, and B does not leverage the campaign you are now running, your test is already biased against B. Or, perhaps your objective was impressions, but one option delivered much higher conversions. So, interpreting results can quickly become more complicated than it first appears.
But, for argument’s sake, let’s assume that A and B start from the same point, and neither is biased by previous advertising or spending-level decisions. If so, A/B tests can be interpreted without bias and executed within any number of environments: CRM systems (HubSpot, Salesforce, etc.), dedicated A/B testing environments such as Central Control or Unbounce, and even some popular web hosting platforms that offer simple A/B tests.
 
Mechanically, the ad testing company you work with will develop two (or more) landing pages (A, B, C, etc.) and visitors to your site will be randomly redirected to one of those variants. Google Analytics and other web traffic statistics can be utilized to determine which variant is most effective at, for example: lowering bounce rates, achieving conversions, increasing CTRs, or other metrics you choose. A/B test designs can also revolve around content of the landing page, the overall site experience, or changes to ad spend, placement, location, context/environment, and more (see above brand concept formats).
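To illustrate how a winner is typically called from such traffic statistics, here is a minimal two-proportion z-test sketch for comparing conversion rates between two variants. The visitor and conversion counts are hypothetical, and in practice your testing or analytics platform performs an equivalent calculation for you:

```python
# Two-proportion z-test: is variant A's conversion rate significantly
# different from variant B's? Counts below are purely illustrative.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference
    between two conversion rates, using a pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical traffic: 2,400 visitors per variant, 120 vs. 90 conversions.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=90, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # conventionally significant if p < 0.05
```

Whatever tool you use, the point is the same: the observed difference must be large relative to the noise in the traffic before you declare a winning variant.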
 
Scratching the Surface
 
There are a multitude of different testing and design options for you to consider as an SMB. I have given you a taste, so get out there and test! Working with a marketing insights and research expert is your best guarantee that the type of concept and testing environment is designed, executed, and analyzed effectively. At Surveys & Forecasts, LLC we have worked with many different companies to help them develop optimal brand strategies and concepts, identify which execution best communicates their brand’s proposition, and which marketing program is most effective for their limited ad dollars.

 

 

Surveys & Forecasts, LLC