by Bob Walker | Nov 11, 2019 | Data quality, DIY software, Marketing research, Research design, Survey research
Happy Birthday to SurveyMonkey, who turned 20 this week.
In 1999, when SurveyMonkey burst onto the scene, there were virtually no cloud-based (SaaS) DIY survey platforms in existence. Looking back, we can see that SurveyMonkey was the original “disruptor” in the online survey space: it democratized the process of gathering feedback for companies of all sizes.
Unlike its far more expensive brethren (Qualtrics and Confirmit come to mind), which use a “turnstile” model (pay per complete), SurveyMonkey charges a flat rate. This lets researchers leverage the platform’s power at almost limitless scale. I can think of no other software platform that is as economical and feature-rich (see a short list of hacks below).
As with disruption in other areas, everyone instinctively knew that online research would change everything. And so it was. Costs were driven lower. Project timing was vastly compressed. But there was a trade-off: the true identity of respondents was often unknown.
This opened the door to a “professional respondent” problem, automated (“bot”) survey taking, and even outright fraud.
In the past 20 years, progress has been made. De-duplication technology (e.g., RelevantID) and identity technologies (e.g., Veriglif, blockchain) are creating positive disruption with solutions to improve data quality. Ultimately, newer technologies and reward structures will put more power in the hands of those who choose to participate in survey research. Data breaches have added to the pressure for more comprehensive solutions. Greater oversight and government regulation are already playing an increasingly powerful role in shaping the future of research and data collection.
SurveyMonkey completely changed the “price of entry” for marketing researchers and data scientists. Many tasks can be handled within an environment like SurveyMonkey. But trained professionals in marketing research understand experimental design, buyer psychology, questionnaire construction, and sources of bias that can completely invalidate a research study.
The question companies must ask themselves is: do we have the skill set to grasp these issues and to leverage the full power of this great platform?
As an example, here are 12 powerful SurveyMonkey hacks you should master if you want to hang with the pros (you’ll need a Premier or Professional plan, but they are quite affordable):
- Block rotation: control order bias by creating identical blocks and then randomizing them. This is extremely helpful for concept screening or conjoint designs.
- Skip logic: use choice responses to redirect respondents to other questions (individual conditions by response).
- Advanced logic: show/hide questions/pages using multiple conditions or complex criteria.
- Modules: cross-link entire questionnaires by passing system variables.
- Stimuli: obtain reaction to concepts or full-motion video, which is easily embedded.
- Alerts: use the API, or services like Zapier, to send alerts and feed CRM systems or visualization tools like Tableau (see the sketch after this list).
- Incentives: integrate external rewards (e.g., virtual Visa or Mastercard codes) with services like Rybbon.
- Scoring: use algorithms to assign respondents to segments and route them through the survey.
- A/B Testing: test different language for introducing a question to determine whether the wording has a biasing effect. This is especially helpful in academic work.
- Quotas: set quotas based on specific question completion, or quotas based on total responses.
- Export: grab your raw data as SPSS or comma-delimited files for use in analysis packages like WinCross or visualization tools like Tableau or PowerBI.
- Show Off: create a custom URL for your survey to give it a more professional image, build “white label” surveys for your company, or use CSS to create an entire look and feel for your brand.
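As a quick illustration of the Alerts hack, here is a minimal sketch of registering a webhook through the REST API so that each completed response pings an outside service. It assumes SurveyMonkey’s v3 webhooks endpoint; the access token, survey ID, and subscription URL below are placeholders, so check the current API documentation before building on this.

```python
# Minimal sketch: register a webhook so each completed response
# POSTs to an external URL (e.g., a Zapier catch hook feeding a CRM
# or a Tableau pipeline). Token, survey ID, and hook URL are placeholders.
import requests

ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"   # from your SurveyMonkey app settings
SURVEY_ID = "123456789"             # the survey to watch (placeholder)
HOOK_URL = "https://hooks.zapier.com/hooks/catch/XXXX/YYYY"  # placeholder

resp = requests.post(
    "https://api.surveymonkey.com/v3/webhooks",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "name": "New response alert",
        "event_type": "response_completed",  # fire on each completed response
        "object_type": "survey",
        "object_ids": [SURVEY_ID],
        "subscription_url": HOOK_URL,        # where the alert is delivered
    },
)
resp.raise_for_status()
print(resp.json())  # details of the webhook that was just created
```

Once registered, every completed response triggers a POST to the subscription URL, which Zapier (or your own endpoint) can route into a CRM or a visualization tool.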
But maybe you don’t care about these geeky details. What does this mean for you as a Research/Insights Director, Director of Analytics, Marketing VP, or a CMO?
It means that you can get world-class customer feedback for a fraction of what you are probably paying now — without paying any penalty in data quality.
Give us a call to discuss how we can work together to provide you an affordable customer satisfaction or feedback system that really works.
by Bob Walker | Sep 17, 2018 | Marketing and strategy, Marketing research, Survey research
On August 29, 2018, SurveyMonkey filed an initial registration statement with the SEC (symbol “SVMK”) to float an IPO; the offering is now expected in late September. A recent update to its IPO filing includes a first pass at pricing: a price range of $9-$11, which implies a valuation of roughly $1.29 billion at the midpoint (lower than originally estimated).
In 2017, SurveyMonkey had revenues of $219MM, up 5.5% from 2016, and appears to be on track for around $240MM in 2018. However, the company is losing money: the loss of $24MM in 2017 has already been exceeded in the first six months of 2018 ($27MM). The company attributes this to increased R&D spending, but R&D accounts for only $15MM of that figure.
In the research space, there are other possible IPO candidates, e.g., Qualtrics (we expect an IPO eventually), Decipher (part of FocusVision), and Confirmit (already listed on the Oslo exchange). Of all of the SaaS offerings, SurveyMonkey has perhaps the most to gain as it contemplates expansion – or sets itself up to be acquired. The list of potential suitors could include social media companies, e.g., Facebook (Sheryl Sandberg owns 5% of SVMK) or Google; on the research/data science side, candidates include ResearchNow/SSI, IBM (SPSS), or even Microsoft.
Yet the opposite might be true: SVMK notes that its large user base, offerings, extensive data set, and integrations provide opportunities to make acquisitions of its own: remember Zoomerang?
The S1 statement is interesting from a trends standpoint, as SVMK makes the following observations (paraphrased) about the survey research industry:
- The nature of engagement between organizations and their key constituents is fundamentally changing by becoming more open, bi-directional and frequent. Internet-enabled business models, together with rapidly evolving societal changes have revolutionized constituent expectations for service, speed and experience. Organizations that ignore, misinterpret or react too slowly to feedback risk falling behind the competition.
- “Big data” alone is insufficient to optimize decision making. To make good decisions, organizations need to marry “big data” with “people powered data” so that organizations can see beyond basic trends and better understand issues affecting key constituents.
- Employees are increasingly empowered to make decisions, and decision making within organizations has become decentralized. Employees throughout organizations are directly collecting and analyzing feedback. Access to information enables more decisions to be made at more levels across the organization. This accelerates the operating speed of the organization and increases accountability for decision making at all levels. As this data set is aggregated, organizational leadership is also using these insights to improve organization-wide decision making.
- Technology adoption is changing: IT solutions are now shaped by decentralized use. As organizations let employees become more empowered, technology becomes accessible to more individuals with varying levels of skill. IT departments then must step in and impose enterprise-grade security, customized company branding, and integration with software applications.
SVMK bolsters its IPO case by noting that quality research requires design, analysis time, and expertise that many companies do not have; the implication is that, with its platform, individuals with absolutely no research expertise can gather and analyze data like a pro. As a long-time marketing research consultant, I find this assertion to be silly. Believe what you want; a planned additional layer of AI technology is envisioned to support this naive conceptual model.
Of note, a study conducted by SVMK in 2017 showed that 45% of business users who utilize online survey software considered SurveyMonkey to be their survey platform of choice. This makes perfect sense to me: SurveyMonkey fits the needs of individuals and small teams who need answers to basic questions. The design tool and integrations are good, the online reporting is solid (better than several enterprise platforms), and the mobile app is very good.
In the right hands, SurveyMonkey can work as well as enterprise platforms, giving SVMK much more runway to grow. Conversely, growth in enterprise platforms like Qualtrics is flattening, as more revenue must come from consulting services, which steals business from full-service research firms. And, unlike many enterprise platforms, SVMK has developed a huge stable of free integrations to expand its functionality, while other companies charge ridiculous amounts for the same thing.
There is no question that the impact of SurveyMonkey on the survey research industry has been vast: there are 60 million registered users, of which 16 million are active. While most accounts are non-revenue generating (i.e., free), there are still 600K paying customers across 300K organizations.
by Bob Walker | Aug 28, 2018 | Customer satisfaction, Marketing and strategy, Survey research
Many companies have established ongoing customer satisfaction programs: your department or company may be one of them. If not, you probably see customer satisfaction surveys whenever you finish a purchase transaction with a company. The airline and lodging industries are particularly good at sending out requests for feedback shortly after every flight or stay. Yet a recent conversation with a client gave me pause: he rather confidently indicated that customer satisfaction research was the primary tool used for strategic insight into the performance of his business and the minds of consumers. Um, no.
Customer satisfaction research is not strategic research, and it never will be because it was never intended to be. Customer satisfaction research results cannot identify areas for new product development, a new advertising or communications strategy, or possible new market opportunities. Importantly, customer satisfaction research cannot tell anyone if the business is expanding or contracting, or effectively meeting customer needs, since it is restricted to recent customers.
At its best, customer satisfaction research is a process control and exception reporting tool. But even these goals are sometimes elusive, especially when the measures being used are general and nonspecific. Customer satisfaction research can be very useful when trying to determine whether specific performance criteria are within acceptable limits. However, the research ‘container’ (i.e., the areas of investigation, questions, scales, and metrics used) is generally naïve about whether the dimensions themselves are relevant.
One can hypothesize that, in a number of cases, some of the measures being asked probably have little to do with the characteristics of the transaction that matter, or with where the business is going – or where it has been. As an exception reporting tool, customer satisfaction is useful; as a business guidance and strategy development tool, it is of limited use.
But where does that leave us if most marketing managers and researchers don’t recognize the essential distinction between a process control tool and research designed to help grow the business?
The function of strategic research is to help an organization look out the window and navigate the uncertain and constantly changing road ahead. It is both quantitative and qualitative in nature. Strategic research helps the management team understand their customers’ attitudes and behaviors regarding the products they are using – and also those of their competitors. Additionally, strategic research helps identify the direction in which category users feel the market is going. It is dynamic research that is always listening to customers’ general feelings and more detailed perceptions of your brand or service, rather than restricting responses to the measures predetermined in a customer satisfaction study.
Don’t be lulled into complacency by positive customer satisfaction research results that indicate your business is doing well among your existing customers. You are only getting part of the story, and strategic research (which can take many forms, and should be conducted routinely) involves actively listening and responding to the ever-changing needs of today’s customers.
by Bob Walker | Aug 22, 2018 | Marketing research, Research design, Survey research
If you’re a researcher, you’ve no doubt heard about “concepts”. Concepts are ideas that can come from many places: from R&D, in reaction to competitive activity, or as “blue sky” what-if explorations. Management consultant Peter Drucker was known for saying that companies have just two functions: marketing and innovation. If so, a concept is where these two functions intersect.
The fundamental purpose of concept testing is to help companies allocate scarce new product development resources in the most effective manner possible. While not an exact science, concept testing is the best way to evaluate the merits of an individual idea. So here are some simple guidelines to keep in mind when creating and testing concepts.
#1: Stick To A Standard Format
Using a standardized format helps minimize bias caused by differences in idea presentation, letting you compare across time. Use the same format to represent new ideas, flankers, line extensions, or repositioning of existing products.
#2: Avoid Subtle Differences
As a rule, subtle differences in concepts (i.e., “tweaking”) do not matter, yet brand managers will obsess over them. Our firm has tested hundreds of concepts, and a continually repeated mistake is assuming that consumers either care about, or can react to, subtle differences in wording, features, or benefits. Small wording changes are meaningless and glossed over by the average consumer.
#3: Don’t Slam The Competition
Research consistently shows that consumers dislike brand comparisons, and especially those that attack a competitor directly. When creating a concept, it’s perfectly fine to focus on the benefits and positive story that your product or service offers, but avoid negative attacks on the competition.
#4: Keep It Pithy
In an age of ever-increasing distraction, consumers do not have the time or the interest to read an exhaustive concept description. Particularly in an online format, and even when using a double opt-in consumer panel as your sample, biometric data consistently shows that most respondents simply scan rather than read.
#5: Use Images Carefully
An image is an extremely powerful tool to support your concept or new product idea, and can serve a multitude of purposes: showing product function, conveying a persona, illustrating usage occasions, setting the tone, and evoking emotion. However, the use of images in a concept testing system, where ideas must be compared across time and studies, is open for debate. Images can overpower the factual details of a concept and make subsequent comparisons more difficult. In early stage testing, images are best left out and introduced once the core idea has been identified (i.e., for positioning or advertising concept research).
To learn more, download our brief pdf on this topic here.
by Bob Walker | Jun 7, 2018 | Marketing research, Research design, Survey research
Product tests are designed to evaluate and diagnose product performance.
When Used
Product testing is typically performed (1) after concept screening or testing has identified a winning idea; (2) after a product development phase, in which R&D, sensory tests, or employee panels have identified a new product candidate; (3) at any point to assess consumer reactions to product variations (e.g., cost-reduced, improved performance, etc.); or (4) for competitive claims purposes.
Stimuli
The stimuli used in product testing vary widely, depending on the type of test and the number of product variations under consideration. Stimuli can range from conceptual product mock-ups (which are not handled) to fully functional, branded products that are evaluated in a real-world setting. To assess “pure performance”, products are exposed without extensive packaging graphics, branding, pricing, or other identifying information. If branding needs to be assessed (a concept-product fit test), then branded information is included. Usage, preparation, or safety instructions (if needed) are also provided.
Designs
There are two basic types of product tests: monadic tests, and comparison tests. In monadic tests, the respondent is presented with one product, much like a consumer would be in the real world. Conversely, comparison tests involve evaluating two (or more) products in either a head-to-head or sequential fashion, and are often used as screening studies.
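To make the distinction concrete, here is a minimal analysis sketch in Python; the file and column names are illustrative placeholders. The key point is that monadic cells are independent groups, while a sequential comparison yields paired ratings from the same respondent, so the significance test must match the design.

```python
# Minimal sketch: the two product test designs call for different tests.
# File and column names ("cell", "rating", "rating_a", "rating_b") are
# illustrative; assume ratings were exported from your survey platform.
import pandas as pd
from scipy import stats

# Monadic test: each respondent rates ONE product, so the two cells
# are independent groups -> independent-samples t-test.
monadic = pd.read_csv("monadic_test.csv")            # columns: cell, rating
cell_a = monadic.loc[monadic["cell"] == "A", "rating"]
cell_b = monadic.loc[monadic["cell"] == "B", "rating"]
t_ind, p_ind = stats.ttest_ind(cell_a, cell_b)

# Sequential comparison: each respondent rates BOTH products, so the
# ratings are paired within respondent -> paired t-test.
paired = pd.read_csv("sequential_test.csv")          # columns: rating_a, rating_b
t_rel, p_rel = stats.ttest_rel(paired["rating_a"], paired["rating_b"])

print(f"Monadic (independent groups): t={t_ind:.2f}, p={p_ind:.3f}")
print(f"Sequential (paired ratings):  t={t_rel:.2f}, p={p_rel:.3f}")
```

Applying an independent-samples test to paired data (or vice versa) will misstate the significance of any product difference.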
For more information on product testing, download our free section from the Pocket Guide to Basic Marketing Research Tools here.
by Bob Walker | Feb 14, 2018 | Marketing research, Research design, Survey research
Since 1936, the Advertising Research Foundation has been the standard-bearer for unbiased quality in research on advertising, media, and marketing. Over the past 10 years, the ARF’s Foundations of Quality (FOQ) initiative has published 10 peer-reviewed papers dedicated to best practices in research and data quality. I should know: I was a co-author of two of those papers.
Now those FOQ insights are being brought to life through The ARF’s Leadership Lab series of workshops and lectures on key research topics. Each is designed to inform and educate those in the marketing and advertising research industry about key aspects of the research process.
Nearly 120 students attended today’s event, including a few well-known industry experts. Turns out that even they must keep their skills sharp!
So, on Wednesday, February 14th (yes, Valentine’s Day) I was pleased (for a second time) to be one of five research experts presenting a half-day crash course on how to design, execute, and analyze consumer survey research, so that professionals new (and not so new) to the research field can understand what “good design” looks like. The course covered sampling and weighting, scales and their applicability on mobile, and appropriate statistical tests. We also covered the use of emojis in survey research as possible scale replacements. The overarching goal was to help our fellow researchers understand the nuances of data quality, and to help them positively impact their organizations through superior insights and marketing success.
I am so very pleased to have been part of today’s program, which included yours truly (Bob Walker, CEO & Founder, Surveys & Forecasts LLC); John Bremer, Head of Research Science, The NPD Group; Randall Thomas, SVP, Research Methods, GfK; and Nancy Brigham, SVP, Global Head of Sampling and Research-on-Research, IPSOS Interactive Services. The event was beautifully coordinated by Chris Bacon, EVP, Global Research Quality & Innovation, from The ARF, and the wonderful ARF staff who work tirelessly behind the scenes. Kudos to all!
Not only are these individuals brilliant, published researchers, but they have become friends who respect one another and share their knowledge freely. The morning wasn’t without some amusing moments: I had too many slides, John couldn’t resist showing his dogs, Randy broke into a few rock ’n’ roll riffs, and Chris utilized emojis and handed out Whitman Samplers and M&Ms to mollify the crowd. Nancy simply seemed bemused by it all…
We all look forward to working with the ARF in the future on research education, including the possibility of more hands-on case studies, webinars, and workshops. The industry is thirsty for real, practical knowledge, and the ARF is the best place for that to occur. It remains an anchor and a center of gravity for all things related to research.
Congratulations to all on a great day today – a job well done, and much more to come in 2018 and beyond!!