During the past few years, there’s been a great deal of talk within the market research industry about online panels and sample quality. I’ve been in online sampling since ’99, when my business partner and I started our first sampling firm, goZing.com, which we sold to Greenfield Online in 2005. I’m currently co-founder and CEO of uSamp (www.uSamp.com), a technology company providing panel and sampling solutions to market researchers worldwide.
As someone with a vested interest in the long-term viability of quantitative research online, I want to share my thoughts about areas that need attention. My critique of what can and should be done to preserve the field’s integrity is intended to be constructive, and it is informed by more than a decade of observing both vendor/client and consumer behavior.
Addressing sample burn
Panelists are people. Over the past several years, brands across the globe have become increasingly invested in collecting, interpreting, and monetizing data. To many, data is a means to an end, quickly forgotten as results become more important than processes. We often refer to panelists as “sample,” not “people,” but to market research professionals working in an industry founded on such data, panelists should be regarded as living and breathing entities. They are our neighbors, our friends, our family members. These panelists eat and sleep just like us, and understand the concepts of time management and reward motivations.
Participating in an online research panel can be a tedious experience. Panelists attempt surveys with the best intentions and spend a great deal of time trying to qualify inside narrow quota segments, only to be frequently terminated or screened out with little or no compensation for their time. Many opt out and stop taking surveys altogether.
Sampling firms do their best to manage this panel burn, but due to complex business requirements and certain persistent gaps in technology between sample suppliers and research survey software, it’s impossible for sample companies to know exactly what quotas market research firms require. Sample firms are mostly blind to the real-time needs of survey quotas, largely because industry processes are heavily manual and lack full transparency.
Imagine that survey software were able to communicate with sampling databases and, in real time, deliver exactly the right people at the right time. Panelists wouldn’t waste time and sample companies wouldn’t disappoint panelists (in other words, burn sample).
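To make the idea concrete, here is a minimal sketch of what such real-time quota sharing might look like. Everything here is hypothetical — the quota cells, the figures and the routing function — since no standard interface of this kind exists today, which is precisely the point:

```python
# Hypothetical sketch: survey software exposes live quota counts so a
# sample firm routes only panelists whose quota cell is still open.

quotas = {  # remaining completes needed per cell (invented figures)
    ("female", "18-34"): 0,   # cell already full
    ("female", "35-54"): 12,
    ("male", "18-34"): 7,
}

def should_invite(gender: str, age_band: str) -> bool:
    """Invite a panelist only if his or her quota cell is still open."""
    return quotas.get((gender, age_band), 0) > 0

print(should_invite("female", "18-34"))  # full cell: don't burn this panelist
print(should_invite("male", "18-34"))    # open cell: invite
```

With live counts like these, a sample firm would never send a panelist into a cell that has already closed — the largest single source of wasted attempts.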
When panelists stop taking surveys, sample firms need to refresh the panel with new people – and there are real costs associated with managing this attrition. These costs are passed on indirectly through the CPI (Cost-per-interview)-based pricing model. The fewer panelists used in a survey, the lower the price. Higher incidence (and better targeting) likewise means lower pricing.
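A simplified, entirely hypothetical illustration of why incidence drives CPI pricing (every rate and cost below is invented for the example): when fewer panelists qualify, more attempts are needed per complete, and each screened-out attempt still carries panel-management cost.

```python
# Hypothetical figures only: how incidence affects a break-even CPI.
COST_PER_ATTEMPT = 0.30      # assumed cost to field one survey attempt
REWARD_PER_COMPLETE = 1.50   # assumed panelist reward for a complete

def breakeven_cpi(incidence: float, margin: float = 0.25) -> float:
    """Minimum CPI a sample firm can charge at a given incidence rate."""
    attempts_per_complete = 1 / incidence
    cost = attempts_per_complete * COST_PER_ATTEMPT + REWARD_PER_COMPLETE
    return cost * (1 + margin)

for inc in (0.80, 0.40, 0.10):
    print(f"incidence {inc:.0%}: break-even CPI ${breakeven_cpi(inc):.2f}")
```

Dropping incidence from 80 percent to 10 percent multiplies the attempts needed per complete eightfold, which is why higher incidence and better targeting translate directly into lower prices.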
As it gets harder and harder for sample companies to retain panelists, the industry has been placing additional constraints on them. Many initiatives require address-validated panelists. Ask a family member if he or she is willing to give personally identifiable information to a sample company simply to earn $25 a year for taking surveys. Does this mean that panelists who are unwilling to share personally identifiable information should be left out of online sampling methodology? What does this do to the scalability of online quantitative research? Will we reach a ceiling where companies can no longer fill quotas?
Rethinking pricing models
Today, as in the early days of online research, the CPI (Cost-per-interview) pricing model is the industry standard, meaning that a researcher pays only for those completed interviews that he or she finds satisfactory. In the early days, this probably made sense because expectations were limited to the types of people who could be found and interviewed online.
But as the research industry has matured, this model has started to break down.
CPI-based sample pricing, in which sample companies assume almost all the risk, is analogous to CPA (Cost-per-action) pricing in online advertising, in which advertisers reassign their risk to web publishers. In the online advertising industry, a few different types of pricing models are in play. These models include (in order of quality from highest to lowest) CPM (Cost-per-thousand impressions), CPC (Cost-per-click), CPS (Cost-per-sale) and CPA (Cost-per-action).
To the extent that poor sample quality is an issue, the blame often falls on the sampling company’s recruitment and delivery methods, with survey design considered a close second. Pricing models are often an overlooked but critical component of ensuring quality results.
It is easy to be critical about the state of the industry, but harder to put forth actionable items. What can we do to preserve integrity?
- The industry should push survey tool companies to enable sample companies to electronically read quotas in real-time through APIs (Application Programming Interfaces).
- The industry should move away from CPI-based pricing to CPF (Cost-per-finish) pricing. A “finish” would be defined as a requested person starting a survey and reaching a complete, disqualified or over-quota status. Clients wouldn’t be charged for a panelist who closes his or her browser and doesn’t finish. This would require both precise targeting and an agreement between the sample and research firms, so that sample companies can deliver exactly the right person at the right time, with little room for sample burn. If a project requires untargeted gen-pop sample, then every panelist should still be rewarded: clients would pay a CPI for completes plus a reward fee for fails and over-quotas.
- Every person who finishes a survey gets rewarded. Imagine what would happen to email response rates if respondents knew that they’d always be rewarded for their time. Our friends, family members and neighbors would come back to taking surveys online.
- There should be a higher cost for blind targeting of panelists so that panel companies can pay every person who attempts the survey, regardless of termination rules and quota controls.
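The CPF billing proposed above can be sketched in a few lines. This is an illustration only; the statuses and rate are hypothetical, not an existing system:

```python
# Hypothetical CPF (cost-per-finish) billing: a "finish" is any attempt
# reaching complete, disqualified or over-quota status. Abandoned attempts
# (browser closed mid-survey) are not billed.

FINISH_STATUSES = {"complete", "disqualified", "over_quota"}

def cpf_invoice(attempts: list[str], rate_per_finish: float) -> float:
    """Total charge: every finish is billed, so every finisher can be paid."""
    finishes = sum(1 for status in attempts if status in FINISH_STATUSES)
    return finishes * rate_per_finish

attempts = ["complete", "abandoned", "over_quota", "disqualified", "complete"]
print(cpf_invoice(attempts, 0.50))  # 4 of 5 attempts are finishes
```

Because disqualified and over-quota attempts are billed alongside completes, the model funds rewarding every panelist who finishes, which is the behavioral change the proposal is after.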
We have to be willing to make hard choices, and be open to new ideas by integrating and innovating technologically.
It’s 2011, not 1995. Every research company should have IT staffers charged with improving automation and efficiency between research companies and suppliers. Clients want higher quality at lower prices; that tension exists in every industry. And it’s possible to deliver both, but only if we work together.
uSamp is ready and willing to take on this challenge to build a better future for online surveying, market research in general, and the organizations that rely on market research for their success.