Understanding the Qualtrics Layoffs

 

[Image: Wall Street sign]

I was sorry to see that Qualtrics recently eliminated 780 positions (October 2023), coming on the heels of 270 layoffs back in January 2023. Together this represents about 20% of the Qualtrics workforce. Having once gone through that painful experience in my career, I remember the anxiety and stress it caused when the floor dropped out from underneath me. I hope that everyone affected is able to find new opportunities as quickly as possible.

News articles from tech publications have explained the layoffs as a contraction following COVID-driven hiring and staffing up to meet demand – but Qualtrics is not Amazon and doesn’t compete in the direct-to-consumer space, so the comparison doesn’t quite line up. So what forces are at play that may have resulted in these layoffs? I see a few interrelated things:

  • Marketing research isn’t a high-growth business, and survey research in particular is a mature one. Big firm research growth has slowed due to a proliferation of DIY platforms, more reliance on digital evaluation (e.g., MTA, social media listening), and less need for user input at earlier stages of product development. The growth in questionnaire-based survey research is less than 2% per year.

  • The Qualtrics “experience management” strategy included horizontal expansion into other areas of the enterprise, such as human resources, that run on feedback. The growth rate of this strategy has also slowed. Small- and mid-cap organizations represent a less attractive segment because they don’t do as much research or tracking, and their projects are typically much smaller.

  • A major revenue source at Qualtrics is satisfaction tracking, especially programs built around NPS. You’ll recall that in 2003, NPS was touted as “the one number you need to grow” your business. Gartner has predicted that more than 75% of organizations will abandon NPS as a measure of success by 2025 due to a lack of correlation with metrics like sales or retention.
  • NPS programs also have great margins and are insulated (that is, once up and running, they are hard to dislodge). But with NPS programs dissolving, Qualtrics must make up that revenue with ad hoc projects and compete directly in the traditional survey space with capable lower-cost providers.
  • Qualtrics plans to spend $500 million on AI over the next four years to leverage “the world’s largest database of human sentiment”. But with AI increasingly ubiquitous, this new strategy could be a major drag on earnings. And exactly whose sentiment will be used to train the models and shared with the rest of the world – the proprietary data of their clients? And by leaning hard into AI, even fewer staff may be required.
  • Perhaps the most obvious reason for the recent layoffs at Qualtrics is that Silver Lake and its co-investors (which completed their acquisition in June 2023) need to see a return on their $12.5 billion investment. Cutting staff is the easiest lever to pull, especially if growth has slowed. That will make the balance sheet look healthy, even if growth prospects are muted.

You might recall that back in November 2018, SAP purchased Qualtrics for a hefty $8 billion. That union was touted as a way to accelerate a new “XM category”. The goal was to combine experiential and operational data to power the “experience economy”. But “experience management” didn’t seem to gain momentum, and with a clash of cultures, SAP quickly spit out the frog it had swallowed – something I had predicted in a post back in 2018. Once these hard times have passed, I expect Qualtrics to be refloated as an IPO by 2026 or so. That will please the private equity folks.

For more information about our custom research services and new product development programs, please get in touch at info@safllc.com.

Reframing Marketing Research Spending as an Option-Creating Investment

If you have been in business long enough, you know that the hard work of research is sometimes seen as optional or discretionary by some management teams because it’s hard to calculate the true ROI of research. But we’re thinking about this all wrong. Companies should be thinking about research as a way to separate winners from losers, and move the winners to market as quickly as possible at the lowest possible total cost. So let’s flip it around and consider the value of research using an investment framework.

Business spending and investments fall into three broad buckets:

Infrastructure investments that include the costs of standing the business up and keeping it running at a baseline level. This includes the sunk costs of office space, utilities, computers, distribution centers, manufacturing, and support staff. The business cannot run without them, and the ROI cannot be easily calculated because it’s the paid-in capital needed to get the flywheel spinning.

Variable cost investments include all short-term spending to promote the company’s products. ROI calculations work best in these situations because there is a beginning, a middle, and an end to the spending and the program that is being run. The ROI question is: when I spend a dollar, how much will I get back (in the near term)? For example, advertisers and media companies obsessively focus on maximizing ROI by targeting (i.e., MTA), which is amazing but does little to identify promising ideas or address business strategy: it’s simply optimizing ad spend.

Option creating investments are by far the most interesting – and this is the bucket where marketing research belongs. An “option creating investment” lets me put a little money down on the table to give me the option of owning something later that is worth much more. If I spend money on an “option” that isn’t going to pay off, I walk away and let the option expire. Alternatively, if I have a winner, I am in the money. If I put $2 million down and get back $20 million, my return is 10x and it’s time to exercise my option. The product then moves over to the ROI category, supported by variable cost investments.

The alternative, of course, is launching a product that you did not test and watching it fail spectacularly. All you have to be is right. But if you’re wrong and you’re the CEO, you might be looking for a new job!

Here’s a quick example. Let’s say we have 10 ideas, and each one costs $5K to test. Half of them move on to an R&D product development phase at $75K each, and all five then move on to a product evaluation phase at $15K each. Two of these move forward to test market at $500K each, but only one performs well enough for a regional launch costing $2MM. All in (including the losers), I have spent $3.5MM.

The launched product achieves $10MM in Year 1 sales at a gross margin of 60%, or $6MM. My ROI (including the cost of all my losers) is 171%. If I have two winners, my ROI is even better at 343%. And I am not breaking out the cost of research alone, which is much smaller – I am including all of the costs associated with the launch.
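The funnel arithmetic above can be sketched as a quick calculation; the stage counts, costs, and margin are the illustrative figures from the example:

```python
# Illustrative stage-gate funnel from the example above.
# Each stage: (number of candidates entering, cost per candidate).
stages = [
    (10, 5_000),      # idea screening
    (5, 75_000),      # R&D product development
    (5, 15_000),      # product evaluation
    (2, 500_000),     # test market
    (1, 2_000_000),   # regional launch
]

total_cost = sum(n * cost for n, cost in stages)
print(f"Total spend (winners and losers): ${total_cost:,}")  # $3,500,000

# One winner: $10MM Year-1 sales at a 60% gross margin.
gross_profit = 10_000_000 * 0.60
roi = gross_profit / total_cost
print(f"ROI: {roi:.0%}")  # 171%
```

The point of totaling every stage, losers included, is that the single winner must carry the cost of the whole screening funnel – and it still does, comfortably.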

Option creating investments can also be made in customer satisfaction research to identify additional ideas to feed into your screening programs. Over time, the money you spend on research testing will be a rounding error compared to the money made by the winners.

Researching effectively means that knowing what won’t work is as valuable as knowing what will. Well-designed research will continuously feed successful business performance and yield great ROI!

 

My thanks to Jay Kingley of Centricity for helping to shape this thinking!

Curation: The Next Wave of Marketing

Choice Overload vs. Curation 

Whenever we go to Amazon, Netflix, or any other site, we are immediately presented with dozens, if not hundreds, of choices. Many of these choices are selected by the retailer based on past purchase behavior across the buyer’s digital mesh. Across multiple devices, the company knows our age, sex, and geographical location, and can perhaps make some deterministic assumptions about what we like or don’t like.

But that has yet to translate into something that is presented to the customer as a reasonable choice set. It is no wonder that consumers feel bombarded by choice. They are simply overwhelmed.

We are presented, every day, in multiple contexts, with too many choices. We are presented with too many choices when we read digital publications. We are presented with too many choices when we look at the social media feeds of LinkedIn or Facebook. We are presented with too many choices when looking for a TV show or a movie. Humans are simply not capable of synthesizing hundreds, if not thousands, of choice alternatives when they are presented as a mass (mess?) of individual decisions. Our cognitive capability collapses under the weight of all of the choice decisions that must be made when presented with too much choice.

Companies generally, and advertisers and media assets in particular, have failed to make the leap from choice to curation. This is a huge opportunity for marketers: simplifying the marketing message makes the overall customer experience that much less burdensome and taxing, and draws the consumer closer to the value proposition that attracted them in the first place.

As a general rule, consumers do not like other people making decisions for them. A good case in point is grocery shopping. Yet, in urban environments, direct delivery makes much more sense due to the many obstacles to grocery shopping in congested cities. A grocery shopper doesn’t have to fight city traffic, load up a car, drive to and from their apartment, or leave and re-enter parking garages to get the week’s groceries. One of my former clients, FreshDirect, learned early on that their business model wasn’t solely built around the ability to deliver high-quality produce at reasonable prices. The secret ingredient was their drivers. The drivers knew their customers at a personal level, and were able to create a curated experience by making sure that certain things were done to the customer’s exact specifications.

Why is curation so hard? E-commerce has not figured this out at all. Not long ago I ordered tires for my road bike from Amazon. On a subsequent login, Amazon suggested other road bike tires I might be interested in — for a product category purchased annually (at best). The lack of synchronization between recommendations, purchase frequency, and my likely need was stunningly dumb. Yes, Amazon is enormous, yes they make lots of money, but they still have not moved the needle on the concept of curation in any meaningful sense. What if Amazon had a viable competitor that really understood curation?

On the flipside, one of my favorite examples of curation is Spotify. One would think that Apple (iTunes) would have figured this out long ago, but Spotify is a wonder. If I want music for concentration, there is a curated playlist. If I want calming classical in the background, there is a curated playlist. Do they get it right all the time? No, but they are pretty close most of the time, and I don’t mind if they miss. An 80% hit rate is pretty good to me. At least there are humans involved in the decision-making process – OK, yes, perhaps also an algorithm, but at least it is a collaborative effort.

Marketers would be well advised to start thinking about how to anticipate the kinds of products and services that customers will be looking for in a world where choice is overly abundant. Curation is one of the ways that marketers can demonstrate that they are tuned in to what customers are seeking, rather than blindly and programmatically jamming messages at them without any thought to the choice overload that they create. Does the marketer want to convey something meaningful, or add more noise? So far it has been the latter.

I hope that more marketing and advertising initiatives will consider the notion that humans are very, very good at intuiting what other humans might like or enjoy. The concept of curation can form a much-needed bridge between the antiseptic world of algorithmic decision-making and true human connection.

Why are Customer Satisfaction Research Experiences so bad?

Automated surveys are everywhere, triggered by retail and online purchases, a flight you took, or your dentist. If you are like me, you probably get a wee bit cranky when you experience a badly conceptualized customer feedback interview. We seem to be swimming in an ocean of really, really bad feedback programs. Why is this happening, and why does it seem to be happening on such a massive scale?

The two primary causes of poor survey quality are CRM auto-generated surveys and poorly executed DIY research. There isn’t much you can do about the surveys other companies send. What you can do is make sure that your own programs are world-class.

The most effective customer satisfaction programs embody thoughtfulness, intelligence, and even a little irreverence. Too often, programs miss the mark and companies can actually damage the relationships they are trying to nurture with poorly executed programs.

In my experience, customer satisfaction programs can implode for multiple reasons:

  • Questions assume that customers can accurately isolate all elements of their purchase experience. Marketing researchers and data scientists spend a great deal of time identifying all of the dimensions that need to be evaluated. But most customers are incapable of remembering more than a few things (even if they were experienced moments before). Expecting hyper-granular customer feedback is unrealistic.
  • The emotional state of the buyer at the time of purchase is ignored. If we know someone’s emotional state before or during a purchase experience, we can better understand how customers are interacting with us as marketers – and do something about it. If I have a surly register clerk, a rude gate agent, or am in a poor frame of mind, my ability to provide useful feedback is compromised. Conversely, if I am always greeted with a smile at Starbucks do I care if my latte doesn’t have enough whipped cream?
  • The longitudinal (holistic) relationship with your customer is overlooked. In addition to the current wave of feedback, have you bothered to look at the same customer over time? Have you acknowledged the customer relationship in your questions or survey flow? Are you nurturing or alienating? Questions that capture the longitudinal dimension can help business operations improve.
  • Questions are mistimed with product use (e.g., questions are asked before the product is used, or asked when not enough time has passed for the product to be appropriately assessed). If I am asked to complete a survey off of a cash register receipt, but the questions are about products I’ve yet to use, how am I expected to report on my level of satisfaction?
  • Framing questions around the product or buying occasion and not the customer. It’s not about the product, it’s about your customer. Did the customer feel valued? Were they treated with dignity and respect? Staples recently sent me a feedback request labeled my “paper towel experience”. This is awful.
  • It is assumed that the purchase is in a category where repeat buying is routine. This includes recommendations for additional purchases that the retailer would like me to consider. My favorite solicitation came after I had purchased a car battery: Amazon proudly suggested other car batteries I might want to buy because, well, you know, you can never have too many car batteries.
  • The program fixates on a single score, or assumes that buyers will, on an unsolicited basis, recommend products to others. I have written about this in previous posts. Consultants continue to push this “single score” narrative. It is plain wrong. Yet companies are willing to pay for this sage guidance.

Companies should not feel compelled to collect feedback after every purchase or experience. This is unnecessary, and it saturates the customer with far too many requests for feedback – damaging not only the company’s own data, but response rates for everyone else who needs feedback. Companies are best served by collecting data on an Nth-name transaction basis and letting sampling theory do the rest.
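The Nth-name idea can be sketched in a few lines. This is a minimal illustration, not a production sampler: the transaction IDs and the interval of 10 are hypothetical, and a random starting offset is used so the sample isn’t tied to a fixed position in the stream:

```python
import random

def nth_name_sample(transactions, n=10, seed=None):
    """Select every nth transaction for a survey invitation,
    starting from a random offset to avoid periodic bias."""
    rng = random.Random(seed)
    start = rng.randrange(n)
    return transactions[start::n]

# Hypothetical transaction stream: 100 purchases in a period.
transactions = [f"txn-{i:04d}" for i in range(1, 101)]
sampled = nth_name_sample(transactions, n=10, seed=42)
print(len(sampled))  # 10 of 100 customers are invited, not all 100
```

With a large enough stream, sampling theory gives you stable satisfaction estimates from that 1-in-N slice, while the other N-1 customers are left in peace.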

The compulsion to collect feedback after each and every interaction harms data quality – and weakens the bonds of the buyer-seller relationship.

Customer satisfaction programs deliver important KPIs for assessing performance, and they should be conducted because they make a material difference.

But we must avoid the temptation to mindlessly automate customer satisfaction programs. The goal is to make the customer happy. The way things are going, it’s causing more harm than good.
If you’d like, let’s continue the discussion.

The ROI of Customer Satisfaction

I have seen many CSAT programs change a company’s culture by quantifying problems and isolating their causes, thus boosting retention and profitability, and moving from reactive to proactive.
Conversely, some companies don’t think that customer satisfaction (or “CSAT”) programs can add value because “we know our customers”. This comment conveys a misunderstanding of what a well-designed CSAT program is, and the value that it can bring to an organization.
 
In the short term, maintaining the “status quo” is a cheaper alternative, but it avoids the broader discussion about total (opportunity) costs. How much revenue are you leaving on the table by assuming that you know what the customer wants?
Here’s a basic example. Let’s say you run a $50MM company. What would you be willing to spend to prevent 10% of your customers from leaving?
 
Preventing those defections preserves $5MM in revenue; at a 30% gross margin, you have saved $1.5MM in profit. A CSAT program that costs $50K a year has an ROI of 30x! Now do you get it?
 
Even if we are conservative, a 5% reduction in defection produces $750K in savings – still an impressive ROI of 15x! Through improved problem detection – alerting key people about problems in real time, thus cutting response time – we help mitigate customer defections and avoid a significant amount of lost business.
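The retention arithmetic above reduces to a one-line calculation; the revenue, margin, and program cost are the illustrative figures from the example:

```python
# Reproduces the retention ROI arithmetic from the example above.
annual_revenue = 50_000_000   # a $50MM company
gross_margin = 0.30
program_cost = 50_000         # annual cost of a basic CSAT program

def retention_roi(defection_prevented):
    """ROI multiple from preventing a given share of revenue from defecting."""
    profit_saved = annual_revenue * defection_prevented * gross_margin
    return profit_saved / program_cost

print(retention_roi(0.10))  # 30.0 -> a 30x ROI ($1.5MM in profit saved)
print(retention_roi(0.05))  # 15.0 -> a 15x ROI ($750K saved)
```

Even under pessimistic assumptions about how much defection the program actually prevents, the multiple stays well above break-even, which is the core of the argument.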
 
What are the fundamental problems with what I call a “status quo” approach? Here are a few:
  • Markets are changing. Your competitors are not standing still; they will continue to innovate, merge with others, or be acquired. Markets themselves morph from regional to national to global, and regulatory frameworks change.
  • Customers are changing. New customers replace old, and this year’s buyers are demographically, attitudinally, and behaviorally different from last year’s buyers, who will in turn be different from next year’s. How are you planning for that?
  • Expectations are changing. Customers are constantly evaluating their choice options within categories and making both rational and emotional buying decisions.
Maintaining the “status quo” is NOT a strategy: it is a reactive footing that forces you to play defense. In a status quo culture, you are not actively problem-solving on behalf of customers, nor are you focused on meeting their future needs!
Consider a couple of scenarios in which a CSAT program could add value:
If you run a smaller company, most of the company’s employees (including management) interact with customers every day. The company also gets feedback, albeit subjective or anecdotal, every day. Corrections to sales or production processes can be made rapidly, and the customer is presumably happy. But even in smaller companies, there is limited institutional memory (i.e., a standard way to handle a problem or exception). One solution may reside with Fred in finance, another with Pat in production, or someone else entirely. There are no benchmarks against which to compare performance (other than sales). It is likely that the same problem will surface repeatedly because line staff did not communicate with each other, or it might appear in another form in another department (e.g., a parts shortage caused by an inventory error). Unless management is alerted, larger “aha” discoveries are missed. This can cost hundreds of thousands of dollars in lost revenue.
If you run a large company or a major division, the gulf between customer feedback and management grows wider. News about problems may not reach management because the problems are viewed as unremarkable. And a company doesn’t have to be huge for these dynamics to occur: by the time a company reaches just 100 employees, it can behave much like a multinational enterprise. In a small company, it is everyone’s responsibility to fix a problem; in a large organization, it becomes someone else’s responsibility. The opportunity loss becomes even greater because there is no system in place to alert key staff or a specific department. As a result, millions of dollars in revenue can be lost.
A well-designed CSAT program that alerts the appropriate people or department can add significant value. At Surveys & Forecasts, LLC we offer basic CSAT programs (with key staff alerts, dashboards, and an annual review) for just $1,000 a month.
Get in touch to learn more! We’d love to work with you to improve satisfaction and retention – and save your organization some significant money.
Fail Forward

There have been some recent comments in the news about the role of marketing insights and research, most notably from Jeff Bezos of Amazon, who in his 2018 letter to shareholders said: “No customer was asking for Echo. This was definitely us wandering. Market research doesn’t help. If you had gone to a customer in 2013 and said ‘Would you like a black, always-on cylinder in your kitchen about the size of a Pringles can that you can talk to and ask questions, that also turns on your lights and plays music?’ I guarantee you they’d have looked at you strangely and said ‘No, thank you.’” Really? Give me a break.
 
Over the years, we have heard similar things from other executives. In a blog post covered by Bob Lederer, Robert Granader relayed the reaction of several analysts at his firm, which included this gem: “Amazon surely had market research indicating that customers wanted hands-free music, home connectivity, multi-functional devices, and faster/easier searching for information. It was then Amazon’s job to develop a product to serve all of those needs. Market research tells you what you need to know, but the company has to decide how to act on it. Without market research, you’re flying blind. If Amazon didn’t know that customers wanted all of these things, there’s no way the development of the Echo would have been as successful as it was.”
 
He goes on to say “Most companies don’t have the time to “wander” through their next market, acquisition, or fundraising, so they rely on market research — because sometimes you need to get it right the first time.” [my emphasis]
 
I find the hubris of charismatic types like Bezos annoying and amusing at the same time, if that’s possible. The problem with comments like these is that they are made by people with the power to persuade – and that persuasion can morph into denigration of research itself. Unfortunately, these individuals are often uninformed about what research is and how it can be used.
 
If research is not adding value, then you’re not doing the right type of research, or you’re not asking the right kinds of questions. It’s really that simple.
 
We tend to think of research as a series of steps that must be conducted in sequence before an insight or an “aha” moment occurs. Wrong – insights can come in any sequence or order, and from any source, but objective research results are the product of one or more controlled experiments. There is also a false belief that, simply by virtue of conducting research, it will always lead to some magical “aha” moment that is immediately actionable. This is also absurd. No one should go into an experiment expecting to exit with all of the tools in hand needed to build, innovate, create, or market.
 
The whole point of research is experimentation: to test, and often to fail sequentially. We conduct experiments to prove or disprove a hypothesis, an assumption, a hunch. My old marketing research professor Russell Haley used to quip, “In business, all you have to be is right.” But what if you’re not? Do you want to shoulder that risk?
 
If, in testing a hypothesis, we conclude that something doesn’t appeal to consumers, or isn’t a success, or fails to meet an action standard, that doesn’t mean the research was a useless exercise. That’s entirely the point of research!
 
We should be failing forward, failing repeatedly, and hopefully failing faster to get us to the next opportunity awaiting us!
 
If we are failing forward, then we are in a continuous future-seeking mode to identify opportunities to innovate. This could be a new product or service; new opportunities for distribution; new markets, segments, or usage occasions; or other activity focused on delivering products and services to the marketplace.
 
Peter Drucker famously said that all companies have two functions: innovation and marketing. Innovation implies experimentation, and experimentation will certainly produce some results that are undesirable or unexpected – and they will be classified as “failures” – which is, again, missing the point entirely.
 
If we do not stay objective about what it is we are trying to do (in testing, experimenting, and conducting research), then research is always burdened with an emotional weight it does not deserve. In the same way, if research is improperly designed or executed, or if the research function is improperly staffed, research itself will fail much of the time. Our goal is to immunize the research function from the results it delivers.
 
The outcome that we desire is for research to simply be the playing field by which a marketing variable can be objectively evaluated, devoid of emotion or prejudice. This is much harder than it seems. Research can be conducted to confirm a decision that has already been made, or to attempt to settle an argument. Under these constraints, research has no possible chance of succeeding. When this happens, neither side really believes in research.
 
Innovation and marketing, and by extension insights and research, are critical interpreters in all well-run organizations. So don’t be afraid to fail. Fail forward – and keep going.
Surveys & Forecasts, LLC