Sort of like an Eskimo tracker
“Tracking” studies have been a staple of marketing research departments for decades, yet they still mean different things to different researchers and companies. Conventional thinking holds that tracking studies are surveys conducted on an ongoing or wave basis and intended to gather marketing intelligence on brands, advertising, messaging, users, or usage. It is their repeated nature (the same questions over time) that distinguishes tracking from other types of research. Tracking can certainly include diagnostic measures, but such measures tend not to be extensive.
The primary emphasis of tracking is, or should be, on being an efficient barometer of brand performance. The focus is on key performance indicators (KPIs), which typically include unaided and aided brand and advertising awareness, brand purchasing, usage occasions, performance measures (attributes), and basic classification questions. Extending much beyond these basic measures moves us into the realm of strategic research, which tracking most certainly is not, although tracking can point to areas that need to be explored in greater depth. Knowing where that line is crossed is a judgment call, so each client must make his or her own decision regarding the “sweet spot” of breadth and depth.
With a steady stream of data now flowing from multiple sources, from customer service to social media, tracking research has fallen somewhat out of favor. While many marketing research departments continue to conduct large-scale trackers, their utility and actionability are increasingly called into question given the multitude of seemingly substitutable sources. Some clients appear quite comfortable relying solely on sales data and on tools that scrape social sites and web commentary.
If tracking hasn’t proven to be useful, you’re doing it wrong
No one will argue that (syndicated) sales data or site transaction data isn’t critical, but these sources are not very useful forward-looking predictors. Social listening, while improving, produces data that is event-specific and volatile; relying on its spiky output forces marketing into knee-jerk reactions and diverts attention and resources away from more serious strategic management decisions. Hence the question: what types of feedback should be institutionalized in the decision-making process? This leaves a fairly wide-open playing field for more thoughtfully designed and executed feedback tools, which tracking research most certainly can be. The recent (June 2017) decision by P&G to cut $125MM+ in digital ad spending was most certainly informed by smart tracking. Tracking research can and does impact significant business decisions.
The most tired, predictable criticisms of tracking research are that it is expensive, slow and, as a result, not particularly actionable. Let’s knock these straw men down one at a time.
If you are conducting customer satisfaction research, sample acquisition costs are near zero (more on that later). Of course, costs are incurred if your tracking program requires external sample, and they rise further in narrowly defined categories, in limited geographies, or where incidence is low and screening costs are high. In most widely penetrated categories, however, incidence is not a huge factor, and the benefits of sample quality are a strong counterargument against erratic sources with little quality control. Reliable data is rarely free, so the better case is a value story, not a rebuttal on cost alone: spending $100K on a smart tracking program to assess the impact of a $20MM ad budget seems compelling to me.
Management has to listen
The slowness critique is also a red herring: most online tracking programs run continuously, and results can be aggregated in any number of increments to reflect category dynamism or rate of change. Faster-moving categories, or the need for more extensive subgroup analysis, imply more frequent reporting. Weekly, monthly, and quarterly aggregations are typical; some experimentation is needed to find the right reporting frequency for your business.
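As a minimal sketch of what this looks like in practice (the file and column names below are assumptions, not a standard), a continuous stream of interview-level data can be rolled up to any of these frequencies in a few lines of pandas:

```python
import pandas as pd

# Hypothetical continuous tracking file: one row per completed interview,
# with an interview date and a 0/1 unaided-awareness flag.
df = pd.read_csv("tracker_interviews.csv", parse_dates=["interview_date"])
df = df.set_index("interview_date").sort_index()

# The same stream rolled up at three reporting frequencies; pick
# whichever matches the category's rate of change.
weekly = df["unaided_awareness"].resample("W").mean()
monthly = df["unaided_awareness"].resample("MS").mean()
quarterly = df["unaided_awareness"].resample("QS").mean()
```

Because the reporting cadence is just a re-aggregation of the same stream, rather than a separate study per frequency, changing it later is a cheap design decision.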
Last, the actionability of tracking research is always a function of both design and execution. With (a) the correct sample frame, (b) questions clearly linked to business processes, and (c) smart reporting (including exception reports and alerts), tracking becomes your primary radar screen and course-correction tool for the business. As you likely know, customer satisfaction research gives us the added advantage of a key field, such as a customer or company ID, which lets us link survey results to transaction history. Once connected, we can unpack the relationship between perceptions and behavior at the respondent level, develop predictive models, and inform and monitor specific marketing actions. We’ve had much success in this area.
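To make the linkage step concrete, here is a hedged sketch (the file, column, and key names are assumptions): once each survey record carries the customer ID, joining perceptions to behavior is a single merge, and respondent-level modeling can start from the joined table:

```python
import pandas as pd

# Hypothetical extracts: survey responses keyed by customer_id, plus a
# transaction history rolled up to one row per customer.
surveys = pd.read_csv("survey_responses.csv")    # customer_id, overall_sat, ...
txns = (pd.read_csv("transactions.csv")          # customer_id, order_id, amount
          .groupby("customer_id")
          .agg(orders=("order_id", "count"), revenue=("amount", "sum"))
          .reset_index())

# The linkage step: join perceptions to behavior on the key field.
linked = surveys.merge(txns, on="customer_id", how="left")

# A first, crude read of the perception-behavior relationship.
print(linked.groupby("overall_sat")["revenue"].mean())
```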
To many of you, especially more senior researchers, most of this is not particularly newsworthy. But it does raise the question of why tracking is not more widely used, more fully mined, and better embedded within organizations. There are probably many reasons; the most obvious include:
- Not re-examining your tracking program frequently enough (e.g., quarterly) to make sure that category, brand, attribute, and diagnostic measures are up to date. Failing to keep tracking programs current lets them lapse into obscurity.
- Not seeking input from multiple people or departments to identify the consensus areas most important to the workflow, and to flag when changes are needed.
- Not framing tracking research internally as a continuous improvement tool (Deming); it ends up treated as just another form of sales data, which it is not.
- Not developing control bands on key measures to identify data points that fall outside normative ranges and require corrective action (see the sketch after this list). Without such bands, marketing is left uncertain about which movements are, or are not, meaningful.
- Missing opportunities to push reporting deeper into the organization using dashboards, visualizations, or exceptions/alerts routed to specific individuals or teams.
- Not adding value with swappable modules on hot topics that can rotate in or out depending on current market conditions (while keeping survey length brief).
- Failing to connect the sample frame or key questions to specific business decisions.
- Failing to leverage the linkage between survey responses and transactional data.
- Taking tracking research too far. Most buying decisions are emotional, so tracking can only get us part of the way there. Other forms of research (qualitative-only, hybrid quant-qual, strategic) are still needed to keep learning about consumer behavior.
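On the control-band item above: a minimal sketch of one common approach, borrowed from statistical process control, that sets the bands at the baseline mean plus or minus two standard deviations (all file and column names are hypothetical):

```python
import pandas as pd

# 'weekly_awareness.csv' is a hypothetical weekly KPI series, e.g. the
# weekly roll-up built earlier in the piece.
weekly = pd.read_csv("weekly_awareness.csv",
                     index_col="week", parse_dates=["week"])["unaided_awareness"]

# Estimate normative bands from a baseline window (first 52 weeks here),
# using mean +/- 2 standard deviations as the control limits.
baseline = weekly.iloc[:52]
center, sd = baseline.mean(), baseline.std()
upper, lower = center + 2 * sd, center - 2 * sd

# Points outside the bands are the ones worth a corrective-action review;
# everything inside is ordinary noise and should not trigger a fire drill.
exceptions = weekly[(weekly > upper) | (weekly < lower)]
print(exceptions)
```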
Let’s improve the measures
One last (loosely related) thought: given all of the customer satisfaction work being done today, one metric we need to develop more fully is a prospect-to-conversion score. Tracking is a perfect vehicle for this, yet customer satisfaction studies rarely look outside the world of the brand itself. Much like NPS, a BPS (brand potential score) may prove far more useful than knowing how satisfied current customers are. Over time, all customers defect or die; they must be replaced with new customers predisposed to the brand. Understanding this potential will be much more useful to the long-term health of the business. Tracking can help get you there.
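BPS is proposed here without a formula, so purely as a hypothetical illustration: if it were operationalized NPS-style, on a 0-10 predisposition question asked of category prospects rather than current customers, the arithmetic might look like this (all names are assumptions):

```python
import pandas as pd

# Hypothetical tracker extract: category buyers flagged as customers or
# not, with a 0-10 "likelihood to consider this brand" question.
df = pd.read_csv("tracker_interviews.csv")
prospects = df[df["is_customer"] == 0]

# NPS-style spread, computed among prospects only: % highly predisposed
# (9-10) minus % resistant (0-6).
predisposed = (prospects["consideration_0_10"] >= 9).mean()
resistant = (prospects["consideration_0_10"] <= 6).mean()
bps = 100 * (predisposed - resistant)
print(f"Hypothetical brand potential score: {bps:.0f}")
```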