Archive for the ‘market research’ tag
By Joe Jordan, Vice President of Panel Operations
Many times I have discovered that the sample vendor I chose for my high-priority, top-secret, critical study was merely a middleman for other sample vendors, ones I specifically did not choose because they had wronged me in the past. Much like elephants and the IRS, researchers never forget vendors who have failed them at the final hour. They have nightmares of that 4 a.m. email the day a study should close, saying “We are reaching out to other partners” on a “best efforts basis.” It makes finding a sample provider for your next project all the more daunting.
As you send out your RFPs to preferred sample vendors with your exact demographic profiles and 18 nested quotas of left-handed grape soda drinkers who bought a laptop and puppy in the past 30 days, think through the reality of how likely that vendor can deliver your exact needs. Is it really possible they have this group of consumers anxiously waiting in front of their computers for the riveting subject line “A New Survey Just for You” to come flashing across the screen so they can sit for 45 minutes and give their entire purchase decision criteria in glorious detail?
So how can researchers separate those who only talk from those who also walk? Do your own research! Here are three suggestions for how to find the best sample provider for your next study.
- Ask what percentage of their projects require partners. All sample companies use partners on some percentage of their projects — the real question is what percentage. While the complexity of the audience is always the driving factor in these studies, it is important to understand how often they outsource, because doing so usually limits a provider’s control of timing and feasibility and causes you anxiety about making the deadline. Part of why we brand uSamp as a technology company first is that technology is what allows us to grow and manage our own, diverse panel. When I joined uSamp, I was impressed to learn that 95 percent of the projects completed in 2013 were sourced exclusively from our own suite of proprietary panels. This includes a wide variety of consumer segments and business decision-makers that are part of a network of websites and publisher partners. This vast network ensures the uSamp panel has the breadth to provide unique individuals from all customer segments willing to give their feedback on products and brands.
- Take the registration survey. As you evaluate your sample vendor, take a look through their registration page and review the type and number of questions required to sign up. Go through the double opt-in process to find out what the second wave of criteria is asking. Ideally, you should time this process and think through how long it takes to sign up and become an active member. Will a new panelist be willing to take this much time before even knowing if they can participate in the studies? Many of these registration forms are longer than mortgage applications and are about as exciting. To improve panelist engagement and reduce tedium and burnout, uSamp has launched Adaptive Profiling™, a profiling system that asks respondents targeted questions in short bursts and then utilizes predictive analytics and complex statistical analysis to identify other tendencies about panelists and connect them to the appropriate studies. This also allows uSamp to quickly assemble an audience that is custom-suited to clients’ specific needs while offering panelists more opportunities to qualify for studies without the cumbersome registration form.
- Ask how many unique panelists register daily. You’ve heard it before: “bigger is better.” At least, that’s what every sample vendor says when they proudly promote their panel as the largest on the market. But as a researcher, you care most about how they can target your specific audience quickly and accurately with unique and meaningful data. A raw count of millions of panelists does not mean they are engaged, active or applicable to your needs. You need respondents who are making decisions now, using smartphones, and interested in offering their opinions in the moment, rather than the heritage panelists who have been taking surveys for income for six years and registered their profile details on an eight-pound laptop. uSamp consistently signs up 18,000 new panelists a day who are fresh, engaged in the moment and ready to offer insights on your products and brands.
Before you dive into your next relationship with a dubious sample vendor, remember to ask about other partners, play the role of the panelist, and find out about their new daily signups.
I hope you find a deep and reliable partner, at least until the next complex project comes along.
By The Editors
How do your customers view your products and services? In a marketplace where constant change is the new normal, being able to see the world through your customers’ eyes is essential to growing your business, attracting new customers, and retaining existing ones. In the video below, “Mobile Research Communities: An Agile Approach to Customer Context,” Allen Vartazarian, VP of product at uSamp, and Julie Vogel, VP of Communities at Morpace, discuss the following:
- How new mobile research capabilities let you interact with your customers in-the-moment
- How online research communities can help you build customer partnerships that strengthen and deepen your understanding of customer context
- Why one Fortune 500 company changed its approach to a target audience based on a combination of these research approaches
By Joe DiGregorio, Senior Director, Global Programming
As is the case with any trend in market research, large or small, the rapid growth of data collection on mobile devices has brought with it countless new tools and methodologies.
Having started my career at the dawn of the transition from computer-assisted telephone interviewing (CATI) to online as a method for data collection, I’ve lived through many of the challenges associated with this type of transition before. There’s a game-changing medium in town, and (almost) everyone wants a part of it. Clients are told they need it, but not all of them know why or how to use it. Research methodologists brainstorm how to transition the old methods to the new without impacting historical data, and they invent brand new methods never before feasible with the old research methods. Developers race to create every new application they can think of, hoping enough people can be convinced they are useful. Some of them stick and become part of the new way of doing research. Some of them gather dust as they are replaced or fail to prove their worth.
While all this goes on, your operations team is acting and reacting, drawing, erasing and redrawing the line between what is possible and what is not possible. It often falls to them to be the bearer of bad news when a request is made for something that isn’t quite feasible, regardless of any upstream promises. This unfortunate position, however, could have been avoided.
With that in mind, and without further ado, here are five tips about mobile programming to help you develop a better mobile study:
- Keep in mind that mobile devices have small screens.
I know what you’re saying: “I already know that mobile devices have small screens!” However, this influences survey design in many ways. Having programmed some detail-rich conjoint designs in my time, I’ve witnessed firsthand how much content we all try to cram onto one screen. Screen real estate is at an even bigger premium on hand-held devices. Keep your questions short and sweet and avoid horizontal scrolling.
- Test all questions on all devices – and then test again.
It goes without saying that some question types will render differently on mobile devices vs. desktop/laptop devices. If the survey platform being used for your project is worth its salt, it will have optimized rendering for mobile devices. For some question types – grids in particular – the layout of the question will be significantly different. Many platforms will display grid questions as a vertically scrolling series of single or multi-select questions on a mobile device instead of the default matrix style display. This goes back to the size of the typical mobile screen that will not allow the horizontal space necessary for more than a few columns without horizontal scrolling. Be sure to test your surveys on both types of devices so you know exactly what your respondents will be seeing.
- Specify on which devices you want your survey to be available.
Related to the above point, you may want to control what types of devices can be used to take the survey. Many survey platforms will detect the device type at a general level. This detected information could then be used to alert respondents to use a different device and/or screen them before continuing the survey. At a minimum, you should track the device type in case there are significant differences in responses between the two groups.
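As a minimal sketch of what this detection looks like under the hood, the snippet below classifies a respondent by inspecting the HTTP User-Agent string. The patterns and function name are illustrative assumptions, not any survey platform’s actual API; real platforms expose device detection natively and with far more nuance.

```python
import re

# Simple illustrative patterns; real detection libraries cover many
# more devices and edge cases than this sketch does.
MOBILE_PATTERN = re.compile(r"Mobile|Android|iPhone|iPad", re.IGNORECASE)

def classify_device(user_agent: str) -> str:
    """Return 'mobile' or 'desktop' based on a basic User-Agent check."""
    return "mobile" if MOBILE_PATTERN.search(user_agent) else "desktop"

# Record the device type alongside each completed response, so the two
# groups can be compared for mode effects during analysis.
print(classify_device("Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)"))  # mobile
print(classify_device("Mozilla/5.0 (Windows NT 6.1; WOW64)"))  # desktop
```

The same check can drive a screener at survey entry (politely redirecting respondents to another device) or, at a minimum, populate a hidden variable in the dataset.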
- Keep it short.
Yeah, you’ve heard this one before. Still, on mobile devices it is even more crucial that you limit your survey length. Your survey faces much more competition for the respondent’s attention on a mobile device than it would on a desktop or laptop. Mobile surveys work best when they are quick transactions.
- Take advantage of the unique capabilities of the mobile platform, but be prepared for the results.
Some of the most commonly used features unique to mobile surveys are multimedia uploads. Being able to ask respondents to take a picture of what they are seeing or doing, record a video of the same, or provide an audio response instead of typing an open-ended answer in a text box can yield rich results. It can also yield some unexpected and surprising results. If you have any of these question types, make sure you and your project manager allow time for at least one preliminary review of the uploads before the end of data collection. You may need to replace some respondents you remove from the data based on this review, and possibly reconsider or reword your question(s).
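A preliminary review like the one above can be partly automated. The sketch below, built on assumptions (responses exported to a local folder of files, image-only uploads), flags uploads that are empty or not images so the project manager can decide which respondents to remove and replace while fielding is still open. The folder layout, extension list, and function name are hypothetical, not any platform’s export format.

```python
from pathlib import Path

# Extensions we assume the photo question should produce; adjust to
# match whatever the survey platform actually exports.
VALID_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}

def flag_bad_uploads(upload_dir: str) -> list[str]:
    """Return filenames that are zero bytes or have a non-image extension,
    i.e., uploads that warrant a manual look before data collection closes."""
    flagged = []
    for path in Path(upload_dir).iterdir():
        if path.suffix.lower() not in VALID_EXTENSIONS or path.stat().st_size == 0:
            flagged.append(path.name)
    return sorted(flagged)
```

Running this midway through fielding gives an early count of unusable media, which is exactly the information you need to decide whether to re-quota or reword the question.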
While this is by no means an exhaustive list (and some of these items may even sound familiar to those who lived through the transition to online research), keeping these in mind the next time you design your mobile study will go a long way towards efficient, high-quality project execution.
By The Editors
At MRMW this year, Justin Wheeler shared fascinating research in a presentation that probed one seemingly simple question: Are mobile respondents more honest? Wheeler’s research is trying to get at the twin problems of social desirability bias and consumer satisficing. The former describes the phenomenon of respondents providing answers that they think researchers will want to hear or that they think will make them appear in a more positive light in researchers’ eyes. The latter describes the mental shortcuts or paths of least resistance consumers will unconsciously take when asked to recall specifics of advertisements or products in an online survey. Wheeler’s research indicates that mobile could be an antidote to both of these problems. How? In-context mobile surveys remove interviewers from the equation, mitigating the influence of social desirability, and also eliminate the need for consumer recall.
See below for a video of Wheeler’s entire presentation at MRMW:
By Jacob Tucker, Senior Analyst of Insights and Strategy
The Market Research in the Mobile World conference in Chicago was filled with emerging technologies, new capabilities, and aspirations to push the limits on the type of data we can collect. Be it simply adapting online surveys to mobile, using geolocation technology to intercept shoppers during purchase decisions, or experiencing personal moments with consumers through wearable computers like Google Glass, it is clear that many organizations in the market research industry are trying to pull us forward into the future. As I took in presentation after presentation, a few common themes emerged.
1. Researchers are increasingly tasked with understanding the “why” in addition to the “what.”
Knowing that 74% of shoppers are likely to try Product A while just 48% are likely to try Product B can only take us so far. What is it about Product A that speaks to consumers more than Product B? We think that mobile methods are better equipped to give us these “why’s.”
2. Segmenting data by standard demographics is diminishing in favor of behavioral characteristics.
We’re less interested in the differences between men and women, for example, than we are in the differences between someone who is on five social networks and someone who is on only one. These behavioral characteristics have a more significant reach in the marketplace, and mobile opens the door to discovering more behaviors that can help us understand just how far that reach is.
3. Consumer intimacy is the underlying concept that researchers seem to be dancing around as it pertains to mobile.
We’re trying these new methods in order to get closer to the consumer. It makes sense that if we can feel what the consumer feels, we can market better experiences for them.
4. Mobile is here to stay, now let’s prove its value.
The next necessary step I see for mobile is evidence that it actually works. Now that we’ve been exposed to its potential, we need to find out if companies are indeed making better business decisions because of it. What information are we gathering from mobile that we couldn’t get from other methods? Would the best business decisions be out of reach without this information? Those of us diving into the waters of mobile believe it to hold some uncharted answers, and now it’s time to prove it.
By The Editors
Designing an effective market research questionnaire is all about approach – a backwards one, that is. Before you can delve into the question-writing process, you need to conceptualize your ideal answers in order to derive the appropriate measurements. Check out these 9 tips for improving your questionnaire.
By The Editors
Are mobile respondents more honest? We know that mobile lends itself to in-context surveying, but mobile devices themselves may have an enormous impact on how honestly consumers answer questions. Why? Research shows that respondents will often take the path of least resistance when answering difficult survey questions, a phenomenon called satisficing. Additionally, traditional in-store market research requires positioning researchers on-location to ask consumers questions, and that face-to-face interaction causes consumers to want to give the answers they expect researchers want to hear – versus what they really think, a problem known as social desirability bias.
The good news? Mobile helps us get answers while consumers remain in the store, in front of the products, but adds the level of privacy that at-home research provides. Additionally, with the way smartphones have become a natural part of our everyday lives – studies show that we look at our smartphones once every three minutes – consumers don’t find mobile surveys to be disruptive. Therefore, mobile may just provide the key to unlocking two of market researchers’ thorniest data quality problems.
In this video, Justin Wheeler explains more about how mobile may be poised to solve these problems.
To learn more about this study, visit uSamp at the Market Research in the Mobile World event May 27-30th in Chicago, or stay tuned for a post-event recap.
Creating Your Questionnaire Part II: If You Don’t Know What You’re Aiming At, You Won’t Hit Anything
By Scott Worthge, VP, Research Solutions
In my first post about creating effective questionnaires, I started with the broad premise that most survey writers follow the wrong process in crafting what they think will be a “good” survey. Oftentimes, survey writers jump into the question-writing process too early. Had they followed my advice, they would have a better understanding of what the client needs, drawn from those up-front conversations one must have to start the research ball rolling, and would use that as an immediate check on how the survey should be structured.
In the second stage of my three-part survey-writing process, I’m still not ready to delve into question writing just yet. Instead, you must define your measurements first. “Really?”, you may ask, “Why start with the answers?” Because the answers will define the questions you need to use, not the other way around!
Questions are so easy to write when you know what form the answers will take. There are two main types of measurements you will find in surveys; I call them “black and white,” or, the easy stuff, and “color,” a bit more nuanced.
Black and White
This phrase is used all the time to describe something that is unambiguous and easily understood. It’s no different in surveys: The black and white measurements are those that are determined by facts, not feelings or interpretations, and can be further broken down into state of being and state of behavior measurements. State of being includes demographics–age, gender, income, education, ethnicity, employment status, etc. State of behavior is past behavior–what someone has done, when, how often, where (think shopping or vacationing). With these types of questions, the facts are the facts, and the questions are relatively easy to craft.
On the opposite side of the spectrum are “color” measurements – those topics that are necessary in every survey but are difficult to pin down properly. Color involves state of mind and state of intention measurements – that vast, slippery slope of thoughts, feelings, perceptions, expectations and the like. These are critical to measure, but can be interpreted in so many ways.
Consider the concept of customer service. How can a consistent, systematic process be developed to create appropriate “color” measurements, since such a topic can take many shapes? How do you build questions that your client will agree are the “right” ways to gain information for their goals and objectives for the research?
Here’s a simple system for deciding on measurements that I have developed through the years working with clients…
The Diamond Method: A 5-Step Process For Developing “Color” Survey Measurements
- Pick a starting point and look for color first. Add in the black and white measurements later, after the heavy lifting. For this example, let’s stick with customer service, and see how one could decide how to address this for a survey questionnaire.
- Expand your concept to its widest definition. List all of the potential ways to measure your concept (moving from the tip of the diamond to its widest point). For customer service, I would list things like “time to respond to my inquiry,” “how well did they fix my problem,” “how long did I sit on hold,” “did I get shuffled off to another department,” and so on. The goal is to come up with an exhaustive list of reasonable choices, any of which could be relevant to how customer service could be measured. I also always want input from my team at this stage, as other perspectives will help widen that diamond faster.
- Evaluate your list against the client’s goals—and then edit. Here’s the key! Immediately go back to what you’re being hired to do and start filtering the measurements you’ve listed. It’s so easy for researchers and laymen alike to fall in love with their own ideas and focus on that instead of what the client wants. Drill down to those that are their priorities, not yours (and narrow your diamond back toward a point).
- Determine how much of your survey should be dedicated to that concept. Take your short(er) list of measurements and see how much of your survey should be occupied by customer service metrics. If customer service is one of a few topics in this survey, keep your measurements (and resulting questions) few. If it is the sole focus of the survey, then you can allocate much more of your questionnaire real estate using the list of possibilities you’ve developed.
- Update your client. Lay it all out. Show them what you considered, how you narrowed it down, how it fits within the overall survey length and focus, and what you think the key measurements are – for each topic to be included. Watch for the vigorous head nodding, then document their approval in an email. If they disagree, repeat the above steps until you pin down the measurements they agree are priorities.
Once approved, you can move forward in confidence. A big bonus of the diamond method is that you can defend, at any time, how you considered an ambiguous topic like customer service, looked at a range of possible measurements and got approval from the client for the most important ones to include in this survey at this time. I’ve been in enough presentations to know that being able to point back to the diamond process is golden for answering a skeptic whose favorite measurement didn’t make the cut.
And voilà – you have all the pieces in place to write thought-out questions that focus on the client’s agreed priorities. Which, as it happens, is the next topic in this series: writing questions according to best practices. Stay tuned.
By The Editors
If you missed ARF Re:Think 2014, you only missed one of the biggest market research and advertising events of the year. Go ahead, #facepalm.
From March 23rd through 26th, more than 2,500 top advertisers, market research companies, ad agencies and more gathered in NYC to “Inspire Intelligent Growth” and push our industry toward making smarter, faster, and better business decisions. There were a lot of interesting talks given and exciting news announced. But if you missed it, don’t beat yourself up. We have you #covered with this quick recap of the most important happenings and news shared on Twitter.
Talks by Keith Reinhard of DDB Worldwide and James Burke and Euan MacKay of Kantar Media captivated audiences.
From our booth, we spread love, not war–in the form of creamy chocolate hazelnut spreads, that is. Our live demo on mobile IHUTs featured results from a recent study on spreadables from Hershey’s, Jif, and Nutella (complete with samples!). Who did consumers crown as king nut? Check out the results here.
While meeting with hundreds of attendees and attending presentations, we definitely noticed more chatter over the importance of tracking and analyzing mobile data. Here’s what people were saying:
And in the mind-boggling-facts-department, presenters did not disappoint:
We also had a blast scooping Ben and Jerry’s ice cream and chatting with attendees at our booth. Plus, shirts!
All in all, the show was a great success, so much so that it prompted a few post-event responses from Huff Post and Greenbook. If we missed you this time, be sure to come out and visit next year. We’ll be there, ice cream scoops at the ready.
By Justin Wheeler, VP Product Innovation & Business Development
In my first two posts in our data privacy series, we learned that Americans are strongly in favor of personal data protection and want an amendment that explicitly makes data privacy a guaranteed right. From a political perspective, this seems like an easy lob for someone to step up and knock right out of the park, or at the very least use to mobilize a national conversation. We polled our respondents to find out if Americans already have someone in mind to lead this charge. So who’s at the top of the ballot? That’s still a big question mark.
No Heroes Here, Only Survivors
Respondents were asked to identify which current political figure “best represents” their own views about appropriate protections for data privacy. As Richard Pryor championed in Brewster’s Millions, we got an answer that few politicians are going to like: “None of the Above” currently carries a double-digit lead over every challenger on our list:
Political Figure Who Best Represents My Views on Data Privacy
| Response | % |
| --- | --- |
| None of the Above | 38% |
| Other (Write In) | 5% |
It’s worth noting, of course, that Democrats were much more likely to indicate Barack Obama or Hillary Clinton here, and Republicans were more divided among several players. Also of note: Nearly 1/3 of “write-in” votes were for Ron Paul (retired), and there were a few “Edward Snowdens” thrown in for good measure.
The following charts further break down these rankings by the respondent’s political party:
| Democrats | % | Republicans | % | Independents | % |
| --- | --- | --- | --- | --- | --- |
| Barack Obama | 31% | None of the Above | 37% | None of the Above | 43% |
| None of the Above | 30% | Rand Paul | 15% | Barack Obama | 13% |
| Hillary Clinton | 27% | Chris Christie | 13% | Rand Paul | 12% |
| Joe Biden | 4% | Ted Cruz | 9% | Hillary Clinton | 10% |
| Other | 3% | Marco Rubio | 7% | Chris Christie | 7% |
| Rand Paul | 1% | Hillary Clinton | 3% | Marco Rubio | 3% |
As we head into the 2014 election year, one thing is clear: Protecting data privacy is a key issue among voters, and a strong bipartisan majority supports the cause enough to want to amend the U.S. Constitution. Although Americans still have mixed feelings about who should lead the charge, rest assured change is on the way. In fact, this morning CNN reported that Sen. Rand Paul will file a class-action lawsuit against the NSA over its surveillance programs. Paul is filing the suit with former Virginia attorney general Ken Cuccinelli and Matt Kibbe, president of the political group FreedomWorks.
The next couple of years will prove whether Paul or another from this list is up for the challenge. Then again, the 38% “None of the Above” response suggests that the people could be looking for a newcomer to fill that void.