by Ben Leet, Sales Director, uSamp
It is the summer of 2012. As I sit on a busy train during my morning commute, fellow passengers are glued to their mobile devices – almost all smartphones. Most people are not talking on these phones. They are pinching and scrolling, browsing and thumbing. They are streaming information as fast as it’s released – a visible sign of the future of human behaviour. In fact, scrap “future” – it’s here, right now, and is only expected to proliferate. So I wonder why we continue to question whether our industry should adopt mobile research as a core methodology.
Morgan Stanley predicts that the number of searches done on mobile handsets will overtake those done by PC within the next year. Others posit that mobile internet usage will overtake desktop internet usage within three years. Alarming though this sounds, it is simply the consumer’s everyday reality – they, we, have the internet in our pockets 24 hours a day, seven days a week. Why would we open up a PC or notebook to do our browsing, surfing, buying, networking and organising when we can do it all with a few taps, without moving from our chairs?
So what does this all mean for the MR industry?
During the past few years, there’s been a great deal of talk within the market research industry about online panels and sample quality. I’ve been in online sampling since ’99, when my business partner and I started our first sampling firm, goZing.com, which we sold to Greenfield Online in 2005. I’m currently co-founder and CEO of uSamp (www.uSamp.com), a technology company providing panel and sampling solutions to market researchers worldwide.
As someone with a vested interest in the long-term viability of quantitative research online, I want to share my thoughts about the areas that need attention. My critique of what can and should be done to preserve the field’s integrity is intended to be constructive, and is informed by more than a decade of observing both vendor/client and consumer behaviour.
Addressing sample burn
Panelists are people. Over the past several years, brands across the globe have become increasingly invested in collecting, interpreting and monetizing data. To many, data is a means to an end, quickly forgotten as results take precedence over process. We often refer to panelists as “sample”, not “people”, but to market research professionals working in an industry founded on such data, panelists should be regarded as living, breathing human beings. They are our neighbours, our friends, our family members. Like us, they value their time and are motivated by the rewards on offer.
Participating in an online research panel can be a tedious experience: panelists attempt surveys with the best of intentions and spend a great deal of time trying to qualify for narrow quota segments – only to be frequently terminated or screened out with little or no compensation for their time. Many opt out and stop taking surveys altogether.
Sampling firms do their best to manage this panel burn, but due to complex business requirements and certain persistent gaps in technology between sample suppliers and research survey software, it’s impossible for sample companies to know exactly what quotas market research firms require. Sample firms are mostly blind to the real-time needs of survey quotas, largely because industry processes are heavily manual and lack full transparency.
Imagine if survey software were able to communicate with sampling databases and, in real time, deliver exactly the right people at the right time. Panelists wouldn’t waste time, and sample companies wouldn’t disappoint panelists (in other words, burn sample).
When panelists stop taking surveys, sample firms need to refresh the panel with new people – and there are real costs associated with managing this attrition. These costs are passed on indirectly through cost-per-interview (CPI) pricing. The fewer panelists a survey has to touch, the lower the price; higher incidence (and better targeting) likewise means lower pricing.
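The link between incidence and price can be sketched with some simple arithmetic. The figures and functions below are purely illustrative assumptions for exposition, not uSamp’s actual pricing model:

```python
def panelists_needed(completes, incidence):
    """Panelists who must attempt the survey to yield the target number
    of completes, given the fraction expected to qualify (incidence)."""
    return completes / incidence

def cpi(cost_per_attempt, incidence):
    """Hypothetical CPI: the supplier's cost of fielding one attempt,
    spread over the attempts needed per complete. Higher incidence
    means fewer wasted attempts, hence a lower price per interview."""
    return cost_per_attempt / incidence

# A study needing 500 completes at 10% incidence touches far more
# panelists (and burns far more goodwill) than one at 50% incidence:
print(panelists_needed(500, 0.10))  # 5000.0 attempts
print(panelists_needed(500, 0.50))  # 1000.0 attempts

# At an assumed $0.40 cost per attempt, the implied CPI falls as
# incidence rises:
print(round(cpi(0.40, 0.10), 2))  # 4.0
print(round(cpi(0.40, 0.50), 2))  # 0.8
```

The same arithmetic shows why every screened-out panelist matters: low-incidence studies multiply the attempts, and therefore the attrition, behind each completed interview.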
Even as it gets harder and harder to retain panelists, the industry keeps placing new constraints on sample companies. Many initiatives require address-validated panelists. Ask a family member whether he or she is willing to hand personally identifiable information to a sample company simply to earn $25 a year for taking surveys. Does this mean that panelists unwilling to give personally identifiable information should be left out of online sampling methodology? What does this do to the scalability of online quantitative research? Will we reach a ceiling where companies can no longer fill quotas?