by Bob Walker | Apr 1, 2023 | Analytics, Conjoint analysis, Marketing and strategy, New product development

In today’s crowded marketplace, it’s more important than ever to understand what drives consumer preferences. That’s where conjoint analysis comes in. Conjoint analysis is a research method used to understand how consumers value different attributes of a product or service.
At its core, conjoint analysis involves presenting consumers with a series of hypothetical product or service profiles, each containing different combinations of attributes. Respondents must make choices that require them to “trade off” between options. For example, if you were conducting conjoint analysis for a new car, you might present participants with combinations of attributes such as price, fuel efficiency, horsepower, and style. By analyzing the choices consumers make between these profiles, researchers can determine the relative importance of each attribute in driving consumer preference.
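To make the mechanics concrete, here is a minimal sketch of how part-worth utilities and attribute importance might be estimated from a full-profile ratings design. The attributes, levels, and rating formula are entirely hypothetical, and a balanced full-factorial design is assumed so that simple mean differences recover the part-worths.

```python
import itertools

# Hypothetical design: 3 two-level attributes for a new car
attrs = ["price_high", "better_mpg", "more_hp"]
profiles = list(itertools.product([0, 1], repeat=3))  # full factorial, 8 profiles

# Hypothetical respondent ratings: dislike high price, like mpg and horsepower
def rating(p, m, h):
    return 5 - 3*p + 2*m + 1*h

# Part-worth of each attribute = mean rating at its high level minus mean
# rating at its low level (valid here because the design is balanced)
partworths = {}
for i, attr in enumerate(attrs):
    high = [rating(*x) for x in profiles if x[i] == 1]
    low  = [rating(*x) for x in profiles if x[i] == 0]
    partworths[attr] = sum(high) / len(high) - sum(low) / len(low)

# Relative importance = an attribute's part-worth range / sum of all ranges
ranges = {a: abs(w) for a, w in partworths.items()}
total = sum(ranges.values())
importance = {a: r / total for a, r in ranges.items()}
```

In this toy data, price accounts for half of the total part-worth range, so it dominates the choice, exactly the kind of relative-importance finding conjoint analysis is designed to produce.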
There are two main types of conjoint analysis: traditional full-profile conjoint analysis and adaptive conjoint analysis. Traditional full-profile conjoint analysis presents participants with a fixed set of product profiles, while adaptive conjoint analysis uses a computer algorithm to adjust the profiles presented to each participant based on their previous choices.
Conjoint analysis has many applications in marketing research. It can be used to optimize product design, pricing, and marketing messaging. For example, if a company is considering launching a new product with different feature sets at different price points, conjoint analysis can help determine the optimal combination of features and price to maximize sales.
Conjoint analysis can also be used to understand how different customer segments value different attributes. This can include emotional drivers, such as attitudes and beliefs when building a brand’s core positioning. By analyzing the choices of different demographic groups, researchers can gain insight into how different segments of the market value different features.
While conjoint analysis can be a powerful tool for understanding consumer preferences, it’s important to keep a few limitations in mind. For example, conjoint analysis assumes that consumers make rational choices based on the features presented to them. In reality, consumers may be influenced by factors outside the product attributes themselves, such as brand loyalty, emotional appeal, or social issues that affect their choice criteria. Price is also highly biasing, so options must be presented in a realistic context. Additionally, conjoint analysis is only as good as the attributes and levels included in the analysis, so it’s important to carefully consider which attributes to include.
Overall, conjoint analysis can be a valuable tool for marketers looking to understand consumer preferences and optimize their product offerings. By presenting hypothetical product profiles and analyzing consumer choices, conjoint analysis can provide insights into how different attributes drive consumer preference, helping companies make data-driven decisions about product design, pricing, and marketing.
Buyers of your product or service will consider many factors when deciding what or how to buy, and as a company you must decide on what to offer, and at what price, to maximize profit. If your product development roadmap has stalled, conjoint analysis is definitely something to consider. For more information about conjoint analysis or other research services, please visit the Surveys & Forecasts, LLC website or get in touch at info@safllc.com.
by Bob Walker | Oct 24, 2022 | Analytics, Communications, Marketing and strategy, Marketing research
A recent conversation on A/B testing with a client revealed an interesting perspective about messaging and positioning. The client, extolling their company’s rigorous A/B testing approach, failed to recognize a simple but scary fact: it is easy to compare multiple versions of a sub-optimal message. In the end, you end up with a “less-worse” version of an already weak message. This is not equivalent to building brand value over time — and building a moat around your brand’s essence.
What were they thinking?
The client had overlooked the obvious by ignoring underlying reasons to buy. Instead of testing which alternative was more persuasive based on price, the more important questions they should have been asking were: what is the underlying motivation behind purchase? What segments, personas, or buyer types fall into our wheelhouse? Why should our brand be considered in this crowded category? This client, and so many others, seem to miss a simple tenet of marketing: why give away your marketing advantage so early in the game?
This client’s products have significant performance advantages over others in the category, yet they were A/B testing multiple executions built around being a lower-cost, value alternative. If they had taken just a little time to understand buyer behavior, they would have realized that price can be a relatively small factor in the buying decision when the brand looms large.
In this case, A/B testing was fueling a race to the bottom. By choosing the “less worse” option, the client had already decided that they would primarily compete on price, pushing them deeper into a commodity mindset for the customer.
When misused, A/B testing behaves like a cost-reduction test. There are many instructive lessons here; a well-known case is Maxwell House coffee. Over the course of many years, the company increased its use of lower-quality beans in the blend to cut its COGS. It conducted taste tests to make sure that consumers did not detect a difference when compared to the previous blend. But market share began to fall. Why? Because they never tested the new blend against the original formula. What if there were thousands of Maxwell Houses across the globe instead of Starbucks? In the same way, test between meaningful options, rather than confining your evaluation to a narrow set of sub-optimal choices.
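A toy simulation makes the Maxwell House trap explicit. The detection threshold and per-step quality drop below are assumed numbers purely for illustration: each reformulation passes its test against the previous version, while the cumulative drift against the original goes unmeasured.

```python
# Assumed numbers for illustration only
DETECTION_THRESHOLD = 3.0  # smallest difference a taste test reliably detects
STEP_DROP = 2.0            # quality lost with each cheaper blend

original = 100.0
quality = original
for generation in range(10):
    new_quality = quality - STEP_DROP
    step_diff = quality - new_quality        # what each A/B taste test sees
    assert step_diff < DETECTION_THRESHOLD   # "no detectable difference" every time
    quality = new_quality

total_drift = original - quality  # drift vs the original formula, never tested
```

Every pairwise test says “no difference,” yet after ten generations the blend is 20 points worse than the original, which is the comparison that actually mattered to the market.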
Be smart. A/B testing works best when the strategy is well-defined and plays to your advantage. First figure out what that is. Focus on highly persuasive messages that support your brand, rather than identify the best way to discount your business into oblivion. Don’t give away the store when you don’t have to.
by Bob Walker | Apr 28, 2021 | Analytics, Marketing research, Quantitative research, Survey research
Looking for a quick way to get started understanding the results of your research?
Survey research projects can be daunting. So can projects that involve the analysis of sales, promotional activity, advertising, or other marketing-related activity.
We live in a world of complexity and big data. Simple guidelines and a keen eye can reveal patterns that you might have otherwise overlooked. Here are a few tips to help you start analyzing your project:
Take a walk through your data.
Scroll through the data and see where values “pop” – that is, where are they high and where are they low? Do your tables flow in the same way that you think about your business? If so, you will begin to see numbers that imply relationships. As a result, visual outliers can become major insights.
Compare those who are interested versus not.
In the research business, we refer to this as “acceptor-rejecter” analysis. If, for example, you have a five-point purchase scale, group the “fours” and “fives” and compare them to the “ones” and “twos”. Throw the neutrals in with the rejecters to compare positives vs. everyone else. Are there large differences? If so, what do you infer?
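As a sketch, with entirely hypothetical responses on a five-point purchase scale, the grouping is a one-liner per segment:

```python
# Hypothetical five-point purchase-intent responses
scores = [5, 4, 2, 3, 1, 5, 4, 4, 2, 3]

acceptors = [s for s in scores if s >= 4]               # top-two box
rejecters = [s for s in scores if s <= 2]               # bottom-two box
rejecters_plus_neutral = [s for s in scores if s <= 3]  # fold neutrals into rejecters
```

From here, every other variable in the survey can be cross-tabbed against these two (or three) groups to see where they differ.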
Mine the gap.
The benefit of acceptors vs. rejecters is that you are comparing more vs. less extreme groups. The difference between them is valuable in identifying a compelling story. Typically, this is expressed in the form of point gaps. A large gap between acceptors and rejecters points to an insight.
Sort your data.
If you have ratings of various features or benefits, sort them from high to low and compare the acceptors and rejecters. Or compare demographic groups, such as Millennials vs. Baby Boomers, sorting them on ratings or point gaps. Larger point gaps can identify attributes that are choice drivers.
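A small sketch, using hypothetical attribute ratings, shows how sorting by point gap surfaces the likely choice drivers:

```python
# Hypothetical attribute ratings (% agreeing) among acceptors vs. rejecters
ratings = {
    "easy to use":    (82, 61),
    "good value":     (74, 70),
    "trusted brand":  (88, 52),
    "wide selection": (65, 63),
}

# Point gap = acceptor % minus rejecter %, sorted high to low
gaps = {attr: a - r for attr, (a, r) in ratings.items()}
ranked = sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)
for attr, gap in ranked:
    print(f"{attr:15s} {gap:+d} pts")
```

In this toy data “trusted brand” tops the list by a wide margin, while “wide selection” barely separates the groups, exactly the kind of hierarchy the point-gap sort is meant to reveal.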
Think linearly.
Array the groups you are interested in analyzing by order of magnitude. For example, a variable like education is easy: college educated vs. not. For income, create low, moderate, and high income groups, and compare across them. The same is true for other continuous variables, like age. Be clever: use medians, not means.
Your eyes will easily see patterns, especially if interest is correlated with your dependent measures.
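The low/moderate/high banding above can be sketched in a few lines. The incomes below are hypothetical, and tercile cut points are one simple, median-style (rank-based) way to split a continuous variable into ordered groups:

```python
# Hypothetical household incomes in $000s
incomes = [32, 41, 48, 55, 60, 68, 75, 90, 120, 150]

# Rank-based tercile cut points (robust to extreme values, unlike means)
ordered = sorted(incomes)
n = len(ordered)
low_cut = ordered[n // 3]        # ~33rd percentile
high_cut = ordered[2 * n // 3]   # ~67th percentile

def income_group(x):
    if x < low_cut:
        return "low"
    return "moderate" if x < high_cut else "high"

groups = [income_group(x) for x in incomes]
```

Because the cuts are based on ranks rather than averages, a few very high incomes cannot drag the cut points upward, which is the point of preferring medians over means.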
These little baby hacks will get you on your way!
by Bob Walker | Mar 25, 2021 | Analytics, Marketing and strategy, Marketing research
Those of us who have spent some time in research departments tend to think in linear terms. By that I mean that there is a “classic” sequence to follow to understand customer needs, new product opportunities, line extensions, new advertising, etc. For example, we might start with a strategic study to understand buyer needs and behavior, identify segments or personas, follow that with benefit screening or concept testing to assess interest, then move into advertising concepts, and then marry that with the product development track with R&D, address any deficiencies, move into a test market, and then a national introduction.
Not. Those days are long gone. There is no appetite for “research” or “insights” in the classic flow referred to above.
This is most obvious when you look at the revenue of major research firms, which has grown anemically over the last five years. While it is important to understand customer/buyer needs, research can’t add nearly as much value until it understands the digital landscape. Joel Rubinson and Bill Harvey have written about this eloquently. Those of us who consider ourselves insights experts or researchers must come to grips with the fact that most companies have no interest in spending much time conversing with customers, even when it has strategic value.
Most companies feel the need to respond or react to what is happening right now, in real time. In fact, for many companies, response time is the only thing that matters. We are in an age of data lakes, auctions, programmatic and ROI – a world of reaction-based marketing. In this world, a brand demands that for every nickel it spends, a nickel in sales should be generated. Companies are not interested in convincing you that their product is superior, or meets your needs, or fits your lifestyle unless they get paid back. Nor are they interested in the protective benefits of long-term brand building. This is a finance-centric rather than marketing-centric philosophy.
Reaction-based marketing has four primary characteristics that distinguish it from traditional marketing and brand building:
- ROI is the primary KPI used to measure marketing success.
- Decisions are engineering-driven, not consumer needs-driven.
- Decisions are event-based, not strategically- or equity-driven.
- The cost of failure is less than the cost of testing.
A great example is Amazon, which has single-handedly created these exact marketing conditions. It is a complete ecosystem for testing all elements of the marketing mix (excluding distribution, which it owns). Yet has Amazon not created the perfect ecosystem for driving brands into commodity status? Consider alkaline batteries: Duracell currently sells 24 AAA batteries for $16, while Amazon sells 48 AAA batteries for $15. I wonder who wins that battle?
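The arithmetic behind that battle is simple unit pricing, using the figures from the example above:

```python
# Unit-price comparison from the battery example in the text
duracell_per_battery = 16 / 24   # 24 AAA batteries for $16
amazon_per_battery = 15 / 48     # 48 AAA batteries for $15

# How many times more expensive the brand is per battery
price_ratio = duracell_per_battery / amazon_per_battery
```

At roughly $0.67 vs. $0.31 per battery, the brand costs more than twice as much per unit, which is exactly the commodity comparison the platform puts in front of every shopper.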
As researchers and insights experts, where we add value is the missing link between all of the automated ecosystems that are competing for the consumer’s attention, and how the consumer thinks and feels. That market is wide open.
by Bob Walker | Apr 9, 2020 | Analytics, Customer satisfaction
Determining whether your customers are happy shouldn’t be a complicated, mind-numbing exercise. Too many companies believe that they cannot afford to conduct customer satisfaction programs because such programs will be too complex or too expensive, or feel that they don’t possess the skills to analyze the data when it comes in. I’d like to put this misconception to rest.
In my many conversations with small- to medium-sized business owners this seems especially true. SMBs often have smaller marketing departments or lack a reasonably well-developed research and analytics function. It shouldn’t be that complicated, but the huge global management and research consultancies make it that way. They offer various customer satisfaction benchmarking or scoring systems that are complex or based on simulation and modeling. But customer satisfaction research should be a basic function.
I have conducted many customer satisfaction studies for some of the nation’s biggest companies. I’ve concluded that most companies (or, for that matter, business units or divisions) only need a core set of key measures to help them understand what customers think and feel about their business.
The nice thing about this “core set” is that each question is clear, obvious, and generally self-leveling. This means that you needn’t rely on external benchmarks, nor hire a marquee-name company to feel good about the data you are collecting. It’s unfortunate that many customer satisfaction consultancies try to make prospective clients feel that they have inside knowledge (i.e., without their brilliant insight, experience, or benchmarks, other customer satisfaction data is invalid). It’s just not true.
Below are five simple questions that will help you understand what your customers think and feel about your business. They provide solid diagnostics to help you focus in on areas that need improvement. Optionally, you can start with this core set and modify it to suit your particular business needs – but the incremental value is likely to be marginal. You might add ratings on brand features, or separate product performance from service and support, but with common sense and good judgment, these five questions will likely answer 90%+ of what you need to know about how your business is performing. These questions are:
- How satisfied were you with the product or service we provided you today? Satisfaction has been proven to be the best overall measure to assess whether customers are pleased with your product or service. Keep in mind that satisfaction does not predict loyalty: it is a temporal assessment – i.e., a general barometer of product or service performance. Loyalty is best measured by purchase behavior. Alternatively, you can replace the word “satisfied” with “happy” and get the same result. Note that satisfaction is also an excellent dependent measure when correlating with other metrics used by your organization.
- If we could change one thing about how we specifically did business with you today, what would it be? We have used this question in multiple studies and have found that it is especially effective at identifying pain points and informing the customer journey (including the analysis of “moments of truth”). The benefit of this question is that it produces a hierarchy, similar to a ranking, by focusing the respondent on one thing. Note that this question is also focused on the most recent transaction. Answers about specific issues typically require a solution closer to the front line, such as a manager, director, business unit head, or head of operations.
- If we could change one thing about how we generally do business with you, or our products or services, what would it be? In contrast to #2 above, this question focuses on broader business processes, service issues, and interactions. Use this question in contrast to what customers experience transactionally. General business issues that are out of alignment require senior management involvement.
- What one positive thing stood out to you that we should do more of, or tell other customers about? Once you have cleared out the constructive criticism (above), look for areas where your business is performing well. Use this feedback to improve overall product or service performance, and communicate it back up through the organization as motivational feedback. Leverage and communicate these strengths so that prospects are aware of what you offer and the great value you add.
- Aside from the product or service we provided, what was the personal benefit to you? This is an optional question that we often include, because it is helpful at identifying “end benefits” – i.e., the key human benefit derived from your product or service. Note that we are not seeking product or service features, but rather downstream benefits that the customer receives from you. For example, your product or service may let a mom regain control over her day, such as freeing up her morning or spending time with her spouse or kids. The answers to this question are especially helpful in messaging, communication, and brand tonality (i.e., the character or feeling of what your business or brand is all about).
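Question 1 above notes that satisfaction is an excellent dependent measure when correlated with other metrics. A minimal sketch, with entirely hypothetical data, of computing that correlation by hand (Pearson’s r) against one other internal metric:

```python
# Hypothetical 5-point satisfaction scores and a hypothetical internal
# metric (e.g., a delivery-speed rating) for the same ten customers
satisfaction   = [5, 4, 3, 5, 2, 4, 1, 3, 4, 5]
delivery_speed = [5, 4, 2, 5, 2, 3, 1, 3, 4, 4]

# Pearson correlation computed from definitions
n = len(satisfaction)
mx = sum(satisfaction) / n
my = sum(delivery_speed) / n
cov = sum((x - mx) * (y - my) for x, y in zip(satisfaction, delivery_speed))
sx = sum((x - mx) ** 2 for x in satisfaction) ** 0.5
sy = sum((y - my) ** 2 for y in delivery_speed) ** 0.5
r = cov / (sx * sy)
```

A strong positive r would suggest the internal metric moves with customer satisfaction and is therefore worth managing; a weak one suggests the metric isn’t what customers actually notice.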
Again, the five questions above form a “core set”: there is nothing preventing you from asking other questions, such as brand awareness, usage, behavior, or attribute ratings on the product or service you provide. But companies often fall into a trap of asking exhaustive questions that produce flat results with little variability over time. Our advice here is simple: less is more. If the questions that you want to add are not actionable, trust your instincts and exclude them.
Asking fewer, simpler questions engages the customer in a conversation with you, rather than subjecting them to a relentless barrage of questions.
This core set of questions is especially useful because it forces business owners and managers to review and listen to the comments that customers provide – and offers huge opportunities to gain real insight and make continuous mid-flight improvements. And there are many software platforms that can let you ask your questions for little, if any, cost.
The challenge for you is to read their responses: it is in the nuance of their answers that real improvements in customer satisfaction often hide.
by Bob Walker | Dec 7, 2017 | Analytics, Marketing research, Survey research
If you work in marketing management or marketing research you are no doubt familiar with customer satisfaction (CS) programs. Some CS programs, created in the hope of helping businesses become more customer-centric, fail to deliver against this noble objective.
As a result, “customer satisfaction” reporting systems have reached an inflection point. We must move away from rote “report card” thinking to much more nimble feedback systems that support real-time response and intervention. We need a rapid, feedback-driven interaction model based on a “customer response system”, or CRS. This approach is NOT equivalent to assessing a customer’s “experience” or “journey”: this is a problem-solution model based on continuous improvement principles. Refer to W. Edwards Deming for a deeper understanding.
Below are 10 areas to consider before building a customer response system (CRS). If you have a customer satisfaction program already in place, consider these ideas to improve the effectiveness of your company’s program. Or, you also can call us to help you build one.
#1 Management Buy-In
CRS programs that have the endorsement of senior management have the greatest chance of success. A company must be invested in a never-ending journey to improve its products and services to the benefit of the end-customer.
#2 Key Touch Points
A comprehensive assessment of all possible customer touch points needs to be made across the organization. As the number of discrete touch points increases, fewer questions should be asked per point.
#3 Link Measures To Processes
Broad measures are unusable for decision-making because they fail to provide the linkage between a problem and the process that created it. Your CRS program’s goal should be to provide granular feedback to help improve the overall system.
#4 Minimize Feedback Time Lag
Strive for immediate feedback whenever possible after every transaction or consumer touch point. In psychological experiments, memory decay occurs in a matter of minutes.
#5 Strike A Balance
Every company must find the optimal balance between respondent burden and actionability. Broad measures are insufficient to provide precise guidance. A person’s “willingness to recommend” is an abstraction that is inappropriate in most categories.
#6 Link Touchpoint Measures
Think longitudinally and link data points together using a unique ID. Feedback must be obtained at multiple touch points, yet by minimizing the number of questions per interaction, we have a higher probability of obtaining high-quality data across the feedback chain.
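A minimal sketch of the linking idea: two short per-touchpoint surveys joined on a unique customer ID. The IDs, field names, and scores are all hypothetical.

```python
# Hypothetical feedback from two touch points, keyed by a unique customer ID
checkout = {"c101": {"checkout_sat": 5}, "c102": {"checkout_sat": 3}}
support  = {"c101": {"support_sat": 4}, "c103": {"support_sat": 2}}

# Merge answers per customer so short surveys still build a longitudinal record
linked = {}
for touchpoint in (checkout, support):
    for cid, answers in touchpoint.items():
        linked.setdefault(cid, {}).update(answers)
```

Customer `c101` now has both touch points on one record, while `c102` and `c103` each carry what is known so far, ready for more touch points to be appended under the same ID.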
#7 Append Transactional Data
Do not ask customers questions that you already have on file. CRS programs should append descriptive data to the customer record. Consider reverse populating your data warehouse or CRM system with CRS variables to determine improvement longitudinally.
#8 Issue Immediate Alerts & Promote Recontact
Whether using a DIY survey tool or an enterprise-wide platform, use alerts and triggers to rectify problems. Alerts give you an opportunity to pick up the phone and call the customer – and in so doing, forge stronger bonds with customers.
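A sketch of what such a trigger might look like; the threshold, field names, and follow-up action are assumptions for illustration:

```python
ALERT_THRESHOLD = 2  # assumed cutoff on a 5-point satisfaction scale

def check_response(customer_id, satisfaction):
    """Return an alert record when a response warrants immediate follow-up."""
    if satisfaction <= ALERT_THRESHOLD:
        return {"customer": customer_id, "score": satisfaction, "action": "call back"}
    return None

# Hypothetical incoming responses; low scores generate alerts immediately
responses = [("c101", 5), ("c102", 1), ("c103", 2)]
alerts = [a for a in (check_response(c, s) for c, s in responses) if a]
```

Each alert is a concrete prompt to pick up the phone while the experience is still fresh, rather than a data point buried in a monthly report.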
#9 Emphasize Light Users & New Customers
Break out light buyers and new customers when analyzing CRS data. Measures are affected more by changes in new or lighter buyers than by loyal customers because there are more of them.
#10 Reporting
Since you are interviewing your own customers, you are only seeing a single slice of your market. Aggregate-level reporting can lull a company into a false sense of security by masking outliers. Always report the number of alerts by discrete category for validity and impact.