Qualtrics & SurveyMonkey: A Halloween IPO Tale

Happy Halloween!

As expected, Qualtrics filed its Form S-1 registration statement for an IPO on October 19, 2018, under the oh-so-sexy symbol ‘XM’. Qualtrics, backed by VC firms Sequoia and Accel, has been valued at as much as $2.5 billion; soon we’ll hear what the market says.

XM’s IPO filing was certainly hurried along by the recent public offering of SVMK (SurveyMonkey). How quickly expectations fade: SVMK began trading on September 26, 2018, traded as high as $20, and closed its first day at $17.24. Today (October 31, 2018) SVMK closed at $10.71 — a drop of nearly 40% in a month. BOO!

Still, SVMK’s market cap of $1.31B is nothing to scream at.

Interestingly, both companies are roughly the same “core” size (i.e., revenue from subscriptions), although they got there in very different ways. In 2017, SVMK subscription revenue was $219 million and Qualtrics’ was $213 million. However, Qualtrics adds nicely to its subscription business with service bureau-style programming, sample, and integration services, which together contribute $77 million (26% of total revenue) for a grand total of $290 million. SVMK, for its part, is making a significant effort to expand its team and enterprise offerings.

Qualtrics is a different story. While the operating picture is a bit ghoulish, the real story is Qualtrics’ growth rate. According to the S-1, for the six months ended June 30, 2018, Qualtrics had an operating loss of $3.4 million, slightly better than its full-year 2017 loss of $3.7 million (about 2% of revenue). Meanwhile, Qualtrics’ subscription revenue grew nearly 50% in full-year 2017, and 42% in the first six months of 2018. Total revenue grew 52% in 2017, and 40% in the first six months of 2018. The Street likes a Cinderella story (sorry, wrong genre), and Qualtrics is certainly that. But maybe I am missing something.

By comparison, SVMK revenue grew at a reasonable 6% between 2016 and 2017, and somewhat faster, at 14%, in the first six months of 2018. But this was not enough to overcome a gaping operating loss of $27 million in the first six months of 2018.

Given weak financials and a significantly depressed stock price, SVMK looks like a juicy acquisition target. Likely suitors include Facebook (limited survey capability but a vast supply of sample) and Google (whose Forms product, part of G Suite, offers only basic functionality). Are scary spirits propping up the stock price?

With rapid revenue growth, costs at Qualtrics have also grown fast. As revenue from labor-intensive “professional services” grows, the associated labor costs grow right along with it; these costs simply do not scale the way software does. Qualtrics’ “professional services” gross margin (currently about -20%) drags the overall gross margin down (subscription gross margin is roughly 80%). Sales and marketing expenses are a whopping 50% of revenue (with R&D at just 15%). License fees are not funding more R&D; instead, they largely support the sales team, marketing, and the Qualtrics Summit. That doesn’t seem like a very sustainable business model to me. Maybe I am missing something.
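
To make the margin drag concrete, here is a back-of-the-envelope sketch in Python using the rounded 2017 revenue split and the approximate margin figures quoted above; the inputs and variable names are illustrative only, not a reconstruction of Qualtrics’ reported financials.

```python
# Back-of-the-envelope blended gross margin using the rounded figures above.
# All inputs are illustrative approximations, not exact S-1 values.
subscription_revenue = 213.0   # $MM, 2017 subscription revenue
services_revenue = 77.0        # $MM, 2017 professional services revenue
subscription_margin = 0.80     # ~80% gross margin on subscriptions
services_margin = -0.20        # ~-20% gross margin on professional services

total_revenue = subscription_revenue + services_revenue
gross_profit = (subscription_revenue * subscription_margin
                + services_revenue * services_margin)
blended_margin = gross_profit / total_revenue

# Roughly 53%, versus ~80% on subscriptions alone: the services drag in action.
print(f"Blended gross margin: {blended_margin:.0%}")
```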

Qualtrics says that their “business model relies on rapidly and efficiently landing new customers and expanding our relationship with them over time.” Translation: XM’s cross-sell tentacles quickly reach across organizational boundaries wherever feedback is involved. Qualtrics has smartly leveraged their survey data collection engine across corporate silos with similar data and benchmarking needs (i.e., effectively the back-end of Research Core).

This templated approach is attractive for “large customers” (those with $100K+ in subscription revenue) who seek a familiar look and feel, a shared platform, and a tiered pricing model. Large customer growth has been very fast: +60% in the six months ending June 30, 2018, impressive by any measure. Qualtrics is also able to leverage its technology investment by modularizing its code base.

Qualtrics claims to have created a “new category of software, Experience Management, or XM™, which enables organizations to address the challenges and opportunities presented by the experience economy.” This is largely magical thinking for the IPO syndicators and future investors. In reality, there are only so many whales. The pod is fairly static and growth of 50% YOY is simply unsustainable in any industry. Maybe I am missing something.

Qualtrics has a few options: headcount and cost reduction, price increases, and perhaps packages for lower-end customers (and hope to upsell/cross-sell). These don’t seem likely. Rather, as Qualtrics continues its rapid move into consulting (and away from DIY subscriptions), they will be attractive as an acquisition target for companies in software, accounting, and management consulting. And Qualtrics fits well into a blockchain-enabled, supply-chain world.

No doubt, the IPO will make the founders very rich. But with a sweet acquisition offer, the management team may find itself working for a new employer sooner than they think. But maybe I am missing something.

BOO! That’s a scary story indeed!

Blockchain Can’t Solve Marketing Research’s Biggest Problem

Blockchain provides the basis for a dynamic shared ledger that can reduce the time needed to record transactions, lower intermediary costs, and curb fraud. In the last couple of years, I’ve seen an increasing number of presentations on the value of blockchain. In industries where digital record-keeping is lacking, an immutable ledger (guaranteeing the chain of custody between parties) can be an enormously powerful tool.


Now it is much more than a concept – blockchain is being implemented around the globe. As the “physical cloud” develops, blockchain will thrive as more processes are truly automated, presenting fewer vulnerabilities and opportunities for fraud. Supply chains will carry less buffer inventory, and more materials will be harvested just in time to feed fluctuating demand.


Yet in the marketing research industry, blockchain faces significant challenges. Current efforts, like those in other industries, are primarily focused on accounting benefits and fraud prevention. In particular, online consumer panel companies are dealing with huge amounts of fraud: they have been paying out millions in incentives for surveys with no usable data behind them! How does this happen? Bots, click farms, and illegal software can all circumvent legitimate data collection efforts. One of the worst is a program called Coby. Once installed, it hides behind a VPN. Coby brags that it can generate personal information “to protect your privacy”, complete CAPTCHA prompts, fill out surveys with just enough variability to avoid detection, and generate email addresses to fool panel companies. No wonder research companies are panicked.


Data privacy advocates say that blockchain will allow consumers to take control of their personal data “assets”, such as demographics or financial data. Online consumer panelists allow access to their anonymized personal information using tokens, which are digital permission slips with a limited lifespan. Once a token is exchanged, the anonymized information (including survey responses) is passed to the survey research company. The token then expires, and the record of the transaction is immutable. From a data privacy standpoint, these are positive developments.
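
To make the idea of a limited-lifespan permission slip more concrete, here is a minimal sketch in Python; it is a toy model of the consent-token concept described above (the class, field names, and expiry rule are all assumptions for illustration), not the API of any real blockchain or panel platform.

```python
# Toy sketch of a consent "token": a one-time, time-limited permission slip a
# panelist grants before anonymized data changes hands. Purely illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import uuid

@dataclass
class ConsentToken:
    panelist_id: str                 # anonymized panelist identifier
    scope: tuple                     # data the panelist agreed to share
    expires_at: datetime             # limited lifespan
    token_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    redeemed: bool = False

    def redeem(self) -> bool:
        """Exchange the token once, before expiry; after that it is spent."""
        if self.redeemed or datetime.now(timezone.utc) >= self.expires_at:
            return False
        self.redeemed = True         # the completed exchange is final
        return True

token = ConsentToken(
    panelist_id="anon-1234",
    scope=("demographics", "survey_responses"),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=24),
)
print(token.redeem())  # True  -> anonymized data may be released
print(token.redeem())  # False -> token already spent (or expired)
```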


However, while blockchain may be good for fattening research company profits, it does nothing to address the biggest issue in the marketing research industry: survey participation and non-response bias. Non-response needs significantly more attention, and it is a major omission in blockchain discussions. One could argue (and I would agree) that putting individuals in charge of their own information is essential (it is at the heart of GDPR). In doing so, we may reduce the number of inappropriate requests for survey participation, and perhaps increase the likelihood that individuals will participate in the future.


For the short term, blockchain may solve part of the data quality problem. Can blockchain restore trust, and foster greater cooperation? Time will tell – and it will take a lot of time.

Survey Research in the Shadow of Big Data

Data that’s really, really big
No one goes around saying, “Gee, look at how big that data is!” Well, maybe some people, but they’re weird. At present, there is no unified definition of ‘big data’. Various stakeholders have diverse or self-serving definitions. A major software company defines big data as “a process” in which we “apply computing power to massive and highly complex datasets”. This definition implies a need for computer software – gosh, how convenient.

A more thoughtful approach was recently taken by two British researchers. For data to be considered “big”, they argue, it must have two of the following three characteristics:

  • Size: big data is massively large in volume.
  • Complexity: big data has highly complex multi-dimensional data structures.
  • Technology: big data requires advanced analysis and reporting tools.

OK, not bad. Note that in this definition there are no relative or absolute thresholds (e.g., that a dataset must be bigger than ‘x’). But we also know intuitively that the data housed at Amazon, Walmart, or Google probably meets these requirements. I like this approach because it does not reflexively imply a large number of subjects. Big data could be miles deep, yet just one respondent wide (think of continuous measures of emotion or cognition). Biometric data, for example, easily fits.

Reg Baker (Market Strategies International) has said that ‘big data’ is a term that describes “datasets so large and complex that they cannot be processed or analyzed with conventional software systems”. Rather than size, complexity, or technology, he focused on sources:

  • Transactions (sales, web traffic/search)
  • Social media (likes, endorsements, comments, reviews)
  • Internet of Things (IoT) (sensors or servo-mechanisms)

Perhaps so. If we think only of sources, I would add biometric/observational data to this list. These data are inherently different: they are narrow yet still complex. Observational data might include experiences, ethnography, weather/celestial data, or other sources that involve continuously measured activity. Biometric data includes all manner of physiological (sensor) measurement that is then analyzed using highly sophisticated software. In biometric research, the number of observations (subjects) is often fewer than 30, yet the number of data elements captured per subject is enormous.

So, when is data “big”?
A layperson would say that big data implies “massiveness”, and while that is not wrong, ‘big data’ is somewhat of a misnomer: sheer volume is only part of the story. We need to think of big data in a three-dimensional way. Big data requires “massiveness” in three areas:

  • Data elements (i.e., variables)
  • Observation units (i.e., subjects)
  • Longitudinal units (i.e., time)

Big data typically has a longitudinal aspect (i.e., data collected continuously or over multiple time periods) with frequent updates (e.g., repeat purchases). Additionally, the tools needed to analyze big data (e.g., neural networks, SEM, time series) are significantly different from those used for less complex datasets (e.g., spreadsheets). Much like the word “fast”, the word “big” will evolve, too.
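
As a toy illustration of these three dimensions, the sketch below treats each record as a (subject, time, variable) observation, so a dataset can be “massive” along any axis; it is purely conceptual (the names and values are invented), not a recommendation for how big data is actually stored.

```python
# Conceptual sketch: "big" along three axes - variables, subjects, and time.
# A biometric study might be one subject "wide" yet millions of rows "deep".
from collections import namedtuple
from datetime import datetime, timedelta

Observation = namedtuple("Observation", ["subject_id", "timestamp", "variable", "value"])

start = datetime(2018, 10, 31, 9, 0, 0)
records = [
    Observation("S01", start + timedelta(milliseconds=10 * i), "heart_rate", 60 + i % 5)
    for i in range(1_000)   # continuous measurement: deep in time, narrow in subjects
]

subjects = {r.subject_id for r in records}
print(f"{len(records)} observations across {len(subjects)} subject(s)")
```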

Better, cheaper, faster – or just bigger?
In the last 5-10 years we have seen a shift away from reliance on survey research data and analysis, towards a greater belief that ‘big data’ will tell us everything we need to know about trends, customers, and brands. This is reflected in the following data and analysis trends:

  • From data/analytics that are scarce and costly, to ubiquitous and cheap. When data is everywhere, and basically free, we assume that there must be more we can do with it.
  • From testing specific hypotheses, to relationships “discovered” by data mining (the million-monkey-typewriter hypothesis).
  • From seeking feedback directly, to presuming needs from data patterns. This implies more weight on correlations and modeling than conversation.
  • From a foundation of sampling theory and statistical testing, to a presumption of normality (and that every observed difference is meaningful simply because the data is ‘big’).
  • From data gathered by design, to data “found” in other processes (so, for example, GPS data in a transaction record).

The above is not “wrong” per se; rather, it represents a shift away from critical thinking. ‘Big data’ is shrouded in hype and over-promise. Marketing management’s dreams of never-ending insights from big data are just that: dreams. Dashboards, visualizations, and marketing mix models are alluring representations of ‘big data’ – some are beautiful and artistic. Yet isn’t the goal to use ‘big data’ to drive profitable decision-making?

‘Big data’ and survey research – BFFs, like, forever
Survey research must share the bed with ‘big data’, though the two will continue to fight over the sheets. Big data can free the survey researcher from having to spend time collecting merely descriptive data (for which human memory is notoriously foggy, and which often already resides in transactional databases). This lets survey research do what it does best: gather opinions and reactions to stimuli.

Over time we will find that ‘big data’ does a wonderful job of recording behavior, but is less adept at predicting it. In the near term, there will be a redeployment of resources away from primary and survey research. Some companies will rely on big data too heavily and, by avoiding direct conversations with customers, will suffer for it.

Opportunity awaits companies that actively listen to customers rather than rely on a purely modeled approach. Bridging the gap between self-reported, biometric, and observed behaviors is likely to become the next really “big” thing. Happy Halloween!!

Impact of Great Creative in Sales Forecasts

Let’s review some standard sales forecast inputs…
When conducting a sales forecasting exercise, there are a number of fairly standard inputs to feed into a forecasting model. In addition to identifying the target population (and, obviously, its size), we have to estimate initial trial, an initial repeat rate, a secondary repeat rate, and the anticipated purchase cycle, along with an assumption about packs bought per buying occasion. We also have to make assumptions about the distribution and awareness build in year one (either entered directly, or estimated from a detailed media plan and separately modeled). And there are other factors, such as sampling and couponing, which don’t need further explanation here.

These are all fairly standard inputs in all sales forecasting models currently available, including those available for “rent” by consultancies. But perhaps the most powerful, yet overlooked, variable in the entire equation is advertising impact. Great creative generates a multiplier effect that is significant – and enough to overcome many of the mathematical constraints inherent in other marketing mix variables.

A simple example will demonstrate this clearly. Let’s assume that we have a sales forecast for a new snack product. The target population, based on consumers within the household, is estimated at 35MM, and from a concept-product test we have estimated trial of 20% and first-year repeat of 65% (i.e., about two-thirds of triers will make one repeat purchase in the launch year). We’ll assume, for simplicity, that the average number of packs per purchase occasion (for both trial and repeat) is 1.0. Similarly, taking a typical distribution and awareness curve from an average marketing plan, we might assume, say, 50% distribution by the end of year one, and an average of 40% brand awareness achieved over the marketing plan.

The base forecast
So, the simple math would be something like: (35MM x 50% x 40% x 20% = 1.4MM trial purchases) + (1.4MM x 65% = 0.9MM repeat purchases), for a total volume of around 2.3MM units. But that also assumes average creative (i.e., performing at the norm in a copy test). But what if we have an advertising agency that was able to create a great campaign and wonderful creative?
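
Here is the base-forecast arithmetic as a minimal Python sketch, using the illustrative assumptions above; the variable names are mine, and this is just the simple multiplicative model described here, not any commercial forecasting system.

```python
# Minimal sketch of the simple multiplicative launch-year forecast above.
# All inputs are the illustrative assumptions from the example, not real data.
target_population = 35_000_000   # consumers in the target population
distribution = 0.50              # distribution by end of year one
awareness = 0.40                 # average year-one brand awareness
trial_rate = 0.20                # estimated trial from the concept-product test
repeat_rate = 0.65               # share of triers making one repeat purchase
packs_per_occasion = 1.0         # packs bought per purchase occasion

trial_buyers = target_population * distribution * awareness * trial_rate
trial_units = trial_buyers * packs_per_occasion
repeat_units = trial_buyers * repeat_rate * packs_per_occasion
total_units = trial_units + repeat_units

# Trial: 1.4MM, repeat: 0.9MM, total: ~2.3MM units
print(f"Trial: {trial_units/1e6:.1f}MM, repeat: {repeat_units/1e6:.1f}MM, "
      f"total: {total_units/1e6:.1f}MM")
```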

In the context of an average sales forecast, the difference between 40% and 50% awareness, while significant, is 10 percentage points. Given the asymptotic nature of advertising awareness, the curve flattens quickly and the marginal return on each additional advertising dollar shrinks rapidly. The same is true for distribution, albeit with a more linear relationship between spending and added points of sale. Going from, say, 50% to 70% distribution is still going to be significantly more costly, because the large distribution outlets have already been captured, and what remains is the difficult blocking and tackling of smaller regional players, which are, almost by definition, more expensive to gain.

But let’s get back to the impact of good creative on the same marketing plan. It is not unreasonable, nor is it uncommon, to get great creative from your agency partners for the same amount of money that it might cost you for mediocre, or even lousy, creative. Yet the multiplier effect of great creative is far greater than what might be achieved through nominal increases in awareness or distribution.

Great creative is the most powerful multiplier
Assuming that the product proposition is sound, a 40% improvement in ad impact would race through the entire sales modeling exercise like a freight train. Let’s assume that we can get truly superior creative from our wonderful agency friends, and are able to generate an ad impact score of 1.4 (where the norm would be 1.0). That takes our 2.3MM unit forecast in year one to about 3.2MM – a far greater impact than a marginal 10% increase in awareness or distribution (which would only get us to around 2.5MM, roughly 20% less). Yet we always think that awareness, distribution, and use-up rates are the holy grail in a sales forecast.
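
Continuing the sketch, the comparison below applies the 1.4 ad-impact multiplier versus a marginal 10% awareness bump under the same toy multiplicative model (packs per occasion assumed to be 1.0); the function and parameter names are illustrative assumptions, not a copy-test norm or a vendor model.

```python
# Toy comparison: great creative (1.4x ad impact) vs. a marginal 10% awareness bump,
# using the same simple multiplicative model as the base forecast above.
def launch_volume(population, distribution, awareness, trial_rate, repeat_rate, ad_impact=1.0):
    trial_units = population * distribution * awareness * trial_rate * ad_impact
    return trial_units * (1 + repeat_rate)   # trial plus one repeat purchase per repeater

base = launch_volume(35_000_000, 0.50, 0.40, 0.20, 0.65)                 # ~2.3MM units
great_creative = launch_volume(35_000_000, 0.50, 0.40, 0.20, 0.65, 1.4)  # ~3.2MM units
more_awareness = launch_volume(35_000_000, 0.50, 0.44, 0.20, 0.65)       # ~2.5MM units

for label, units in [("base plan", base),
                     ("1.4x ad impact", great_creative),
                     ("+10% awareness", more_awareness)]:
    print(f"{label:>15}: {units / 1e6:.1f}MM units")
```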

Of course, the fundamental inputs of a sales forecast are truly important, but the ability to connect with the consumer through great creative can often turn a mediocre story into one that truly resonates in the marketplace and captures consumers’ imagination.

So, when assessing the performance of your creative, especially in the context of a new product launch, give your agencies room to run! They may just come back to you with the kind of creative that produces the multiplier effect we just saw. And that can make the difference between a winning formula and playing for a tie.
More appreciation is needed for the pieces of a sales forecast that really matter: great creative, with a great product behind it, is almost an unstoppable winning formula.

Is it “Market Research” or “Marketing Research”?

Yes, Virginia, “marketing research” is different
With school back in session, my thoughts naturally turn to… etymology! No, not the study of bugs (that’s entomology), but the study of words and their meanings. Every so often, I find myself revisiting a question that seems to shadow those of us in the research industry: the distinction that people make (or do not make) between the terms “market research” and “marketing research”. The terms are often used interchangeably and without much thought. I often see industry publications, web sites, and top-tier consultancies doing this, which is disconcerting, because the two terms mean different things.
In the most literal sense, “market research” implies research about markets — how big they are, how fast they are growing, who has more market share, and other descriptive information.

That is not to say that ‘market research’ does not deal with diagnostics, or with extremely large data sets, or with the incredible and ever-expanding set of tools available for targeting. This is especially true in the world of media measurement, sales analysis, and transaction-level analysis. These work products are largely based on passively collected or observational data; rather than attempting to understand behavior, attitudes, or drivers of brand choice through proactive investigation, we infer what we can from the data at hand.

“Marketing” research refers to the marketing process and, by definition, subsumes all market research activities. Marketing research looks forward and asks why: why has something happened, why are markets changing, and where are consumers headed next? It is the active nature of the research and its focus on the marketing process that distinguishes marketing research from virtually every other business discipline. Market research looks in the rear-view mirror: at what has happened, which is presumably predictive of what will happen, because that is what happened last time. That is helpful, but only in a general sense.

Marketing research activities are designed to answer key marketing questions so that a business decision can be made: for example, go or no go, expand or sell off, segment A or B, what message. It is also the relentless nature of this exploration that makes it so valuable to every organization. If a company does not have its eyes looking up and outward, towards the ever evolving sea of consumer change, it is destined to become a captive of its own narrow metrics, unable to look outside its own boundaries for opportunities that can breathe new life into the organization.

Innovation and marketing
Peter Drucker, esteemed management professor, said that organizations have just two functions: innovation and marketing. Marketing research is integral to both: in the identification of new business opportunities, in assessing the market, in configuring the product, and in marketing communications. ‘Market research’ measures and reports on performance but fails to address the most basic question of all, which is: why?

Perhaps the most rewarding aspect of “marketing research” is that it not only works alongside marketing at virtually every step, but in fact guides where marketing needs to go, because research is the closest to the customer and the information he or she provides. The very best organizations know this, welcoming an active, assertive marketing research function to the table as partners in brand management.

The confusion in terminology may also rest on the fact that it is difficult to explain to others what it is we actually do. I sometimes resort to telling people that I conduct customer satisfaction studies or analyze survey data, because these are a bit more tangible. But, in reality, they are simply activities and tasks. The tasks fail to convey the rich nature of uncovering consumer insights, guidance for marketing strategy, or communications refinement that marketing researchers perform. Using “insights” is equally vague, since we don’t “manage” insights, but we do use research to uncover them. So, perhaps we need a better, more descriptive term for what we do, although exactly what that is eludes me at the moment!

Manufacturer vs. supplier roles differ
Corporate marketing researchers carry burdens that external consultants do not, because they live with the brand 24/7 (not just for the lifespan of an engagement or a project). They must make sure that any and all research studies are appropriate for each stage of the brand’s support plan or life cycle.

Corporate marketing researchers must also always act as the conscience of the brand, and resist persuasive yet inexperienced marketing managers who can make reactionary decisions (OMG, our “likes” are declining!) that significantly dilute a brand’s equity.

Every organization needs to have a strong, well-respected marketing research function (or work with thoughtful marketing research consultants) to support innovation and marketing. And, marketing researchers must have the courage of their convictions, and not be afraid to challenge old assumptions.

Check one only
So, which type of researcher are you? One who reports facts, figures, and delivers data, or one who understands brand objectives, collaboratively challenges assumptions, and designs the appropriate research to support key business decisions? If it’s the latter, I’d call you a marketing researcher.

Surveys & Forecasts, LLC