uSamp Blog

The Answer Network

Choosing a Sample Partner: Three Tips
for Vetting Your Vendors

By Joe Jordan, Vice President of Panel Operations

In my many years of managing custom market research projects across all study types, one issue constantly lurked beneath every sample provider: Where is this sample being sourced?

Many times I discovered that the sample vendor I chose for my high-priority, top-secret, critical study was merely a middleman for various other sample vendors I specifically did not choose because they had wronged me in the past. Much like elephants and the IRS, researchers never forget vendors who have failed them at the final hour. They have nightmares of that 4 a.m. email the day a study should close, saying “We are reaching out to other partners” on a “best efforts basis.” It makes finding a sample provider for your next project all the more daunting.

As you send out your RFPs to preferred sample vendors with your exact demographic profiles and 18 nested quotas of left-handed grape soda drinkers who bought a laptop and puppy in the past 30 days, think through how likely it is that the vendor can deliver on your exact needs. Is it really possible they have this group of consumers anxiously waiting in front of their computers for the riveting subject line “A New Survey Just for You” to come flashing across the screen so they can sit for 45 minutes and give their entire purchase decision criteria in glorious detail?

So how can researchers separate those who only talk from those who also walk? Do your own research! Here are three suggestions for how to find the best sample provider for your next study.

  1. Ask what percentage of their projects requires partners. All sample companies use partners to assist in some percentage of their projects — the real question is what percentage. While the complexity of the audiences is always the driving factor in these studies, it is important to understand how often they outsource. Doing so usually limits a provider’s control of timing and feasibility and causes you anxiety about making the deadline. Part of why we brand uSamp as a technology company first is because it is technology that allows us to grow and manage our own, diverse panel. When I joined uSamp, I was impressed to learn that 95 percent of the projects completed in 2013 were sourced exclusively from our own suite of proprietary panels. This includes a wide variety of consumer segments and business decision-makers that are part of a network of websites and publisher partners. This vast network ensures the uSamp panel has the breadth to provide unique individuals from all customer segments willing to give their feedback on products and brands.
  2. Take the registration survey. As you evaluate your sample vendor, take a look through their registration page and review the type and number of questions required to sign up. Go through the double opt-in process to find out what the second wave of criteria is asking. Ideally, you should time this process and think through how long it takes to sign up and become an active member. Will a new panelist be willing to take this much time before even knowing if they can participate in the studies? Many of these registration forms are longer than mortgage applications and are about as exciting. To improve panelist engagement and reduce tedium and burnout, uSamp has launched Adaptive Profiling™, a profiling system that asks respondents targeted questions in short bursts and then utilizes predictive analytics and complex statistical analysis to identify other tendencies about panelists and connect them to the appropriate studies. This also allows uSamp to quickly assemble an audience that is custom-suited to clients’ specific needs while offering panelists more opportunities to qualify for studies without the cumbersome registration form.
  3. Ask how many unique panelists register daily. You’ve heard it before: “bigger is better.” At least, that’s what every sample vendor says when they proudly promote their panel as the largest on the market. But as a researcher, you care most about how they can target your specific audience quickly and accurately with unique and meaningful data. A raw count of millions of panelists does not mean they are engaged, active or applicable to your needs. You need respondents who are making decisions now, using smartphones, and interested in offering their opinions in the moment, rather than the heritage panelists who have been taking surveys for income for six years and registered their profile details on an eight-pound laptop. uSamp consistently signs up 18,000 new panelists a day who are fresh, engaged in the moment and ready to offer insights on your products and brands.

Before you dive into your next relationship with a dubious sample vendor, remember to ask about other partners, play the role of the panelist, and find out about their new daily signups.

I hope you find a deep and reliable partner, at least until the next complex project comes along.

Joe Jordan is a senior market research executive with 20 years of experience working with Fortune 500 companies. With a background in international panel development and management, Joe is tasked with expanding and maintaining the health of uSamp’s online panel, as well as scaling the mobile panel to increase the lifetime value of iPoll. His most recent position was as the strategic account director for Networked Insights, and he also previously served as director of client services and research operations at Troy Research. Joe graduated from the University of Illinois at Chicago with a B.S. in marketing.


Leveraging Mobile and Online
Communities to Gauge Customer Context

By The Editors

How do your customers view your products and services? In a marketplace where constant change is the new normal, being able to see the world through your customers’ eyes is essential to growing your business, finding new customers and retaining existing ones. In the video below, “Mobile Research Communities: An Agile Approach to Customer Context,” Allen Vartazarian, VP of product at uSamp, and Julie Vogel, VP of Communities at Morpace, discuss the following:

  • How new mobile research capabilities let you interact with your customers in-the-moment
  • How online research communities can help you build customer partnerships that strengthen and deepen your understanding of customer context
  • Why one Fortune 500 company changed its approach to a target audience based on a combination of these research approaches

Mobile Research Communities Webinar: An Agile Approach to Customer Context from uSamp on Vimeo.


Written by adrien

June 13th, 2014 at 10:11 pm

5 Things Every Survey Programmer
Wishes You Knew About Mobile Research

By Joe DiGregorio, Senior Director, Global Programming

As is the case with any trend in market research, large or small, the rapid growth of data collection on mobile devices has brought with it countless new tools and methodologies.

Having started my career at the dawn of the transition from computer-assisted telephone interviewing (CATI) to online as a method for data collection, I’ve lived through many of the challenges associated with this type of transition before. There’s a game-changing medium in town, and (almost) everyone wants a part of it. Clients are told they need it but not all of them know why or how to use it. Research methodologists brainstorm how to transition the old methods to the new without impacting historical data, and they invent brand new methods never before feasible with the old research methods. Developers race to create every new application they can think of, hoping enough people can be convinced they are useful. Some of them stick and become part of a new way of doing research. Some of them gather dust as they are replaced or fail to prove their worth.

While all this goes on, your operations team is acting and reacting, drawing, erasing and redrawing the line between what is possible and what is not possible. It often falls to them to be the bearer of bad news when a request is made for something that isn’t quite feasible, regardless of any upstream promises. This unfortunate position, however, could have been avoided.

With that in mind, and without further ado, here are five tips about mobile programming to help you develop a better mobile study:

  1. Keep in mind that mobile devices have small screens.
    I know what you’re saying: “I already know that mobile devices have small screens!” However, this influences survey design in many ways. Having programmed some detail-rich conjoint designs in my time, I’ve witnessed firsthand how much content we all try to cram onto one screen. Screen real estate is at an even bigger premium on hand-held devices. Keep your questions short and sweet and avoid horizontal scrolling.
  2. Test all questions on all devices – and then test again.
    It goes without saying that some question types will render differently on mobile devices vs. desktop/laptop devices. If the survey platform being used for your project is worth its salt, it will have optimized rendering for mobile devices. For some question types – grids in particular – the layout of the question will be significantly different. Many platforms will display grid questions as a vertically scrolling series of single or multi-select questions on a mobile device instead of the default matrix style display. This goes back to the size of the typical mobile screen that will not allow the horizontal space necessary for more than a few columns without horizontal scrolling. Be sure to test your surveys on both types of devices so you know exactly what your respondents will be seeing.
  3. Specify on which devices you want your survey to be available.
    Related to the above point, you may want to control what types of devices can be used to take the survey. Many survey platforms will detect the device type at a general level. This detected information could then be used to alert respondents to use a different device and/or screen them before continuing the survey. At a minimum, you should track the device type in case there are significant differences in responses between the two groups.
  4. Keep it short.
    Yeah, you’ve heard this one before. Still, on mobile devices it is even more crucial that you limit your survey length. Your survey faces much more competition for the respondent’s attention on a mobile device than it would on a desktop or laptop. Mobile surveys work best when they are quick transactions.
  5. Take advantage of the unique capabilities of the mobile platform, but
    be prepared for the results.
    Some of the most commonly used features unique to mobile surveys are multi-media uploads. Being able to ask respondents to take a picture of what they are seeing or doing, record a video of the same or provide an audio response instead of typing an open-ended answer in a text box can provide rich results. They can also provide some unexpected and surprising results. If you have any of these question types, make sure you and your project manager build in time for at least one preliminary review of the uploads before the end of data collection. You may need to replace some respondents you remove from the data based on this review, and possibly reconsider or reword your question(s).
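The device detection and tracking described in tip 3 can be sketched with a coarse User-Agent check. This is a hypothetical illustration, not any particular survey platform's code; production systems rely on far more detailed detection libraries, but the idea is simply to capture the device type with each response:

```python
def detect_device(user_agent: str) -> str:
    """Coarsely classify a respondent's device from the HTTP User-Agent.

    A sketch only: real platforms use dedicated detection libraries
    with far more granular rules.
    """
    ua = user_agent.lower()
    if "ipad" in ua or "tablet" in ua:
        return "tablet"
    if "mobile" in ua or "iphone" in ua or "android" in ua:
        return "mobile"
    return "desktop"

# Store the result alongside each response so mobile vs. desktop
# answers can be screened up front or compared during analysis.
ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X) AppleWebKit/537.51"
print(detect_device(ua))  # → mobile
```

Even this crude a classification is enough to route respondents to a device-appropriate survey or to flag the group splits mentioned above.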

While this is by no means an exhaustive list (and some of these items may even sound familiar to those who lived through the transition to online research), keeping these in mind the next time you design your mobile study will go a long way towards efficient, high-quality project execution.

Joe DiGregorio is responsible for all operational aspects of uSamp’s programming and hosting team. He helps define and implement efficient, quality processes for the team, identifies opportunities for enhancing uSamp’s programming offering and works with other internal teams to devise creative solutions for meeting our clients’ needs. He joined uSamp in November 2012, with 14 years of market research operations experience. Joe holds a BA in Mathematics from Nazareth College in Rochester and resides in Rochester, NY.


Removing Respondent Bias from
the Research Equation:
Why Mobile Makes Us More “Honest”

By The Editors

At MRMW this year, Justin Wheeler shared fascinating research in a presentation that probed one seemingly simple question: Are mobile respondents more honest? Wheeler’s research is trying to get at the twin problems of social desirability bias and consumer satisficing. The former describes the phenomenon of respondents providing answers that they think researchers will want to hear or that they think will make them appear in a more positive light in researchers’ eyes. The latter describes the mental shortcuts or paths of least resistance consumers will unconsciously take when asked to recall specifics of advertisements or products in an online survey. Wheeler’s research indicates that mobile could be an antidote to both of these problems. How? In-context mobile surveys remove interviewers from the equation, mitigating the influence of social desirability, and also eliminate the need for consumer recall.

See below for a video of Wheeler’s entire presentation at MRMW:

uSamp’s Justin Wheeler discusses why mobile research is better from uSamp on Vimeo.



June 2nd, 2014 at 6:50 pm

Being Pulled into the Future:
A Review of MRMW

By Jacob Tucker, Senior Analyst of Insights and Strategy

The Market Research in the Mobile World conference in Chicago was filled with emerging technologies, new capabilities, and aspirations to push the limits on the type of data we can collect. Be it simply adapting online surveys to mobile, using geolocation technology to intercept shoppers during purchase decisions, or experiencing personal moments with consumers through wearable computers like Google Glass, it is clear that many organizations in the market research industry are trying to pull us forward into the future. As I took in presentation after presentation, a few common themes emerged.

1. Researchers are increasingly tasked with understanding the “why”
in addition to the “what.”

Knowing that 74% of shoppers are likely to try Product A while just 48% are likely to try Product B can only take us so far. What is it about Product A that speaks to consumers more than Product B? We think that mobile methods are better equipped to give us these “whys.”

2. Segmenting data by standard demographics is diminishing in favor
of behavioral characteristics.

We’re less interested in the differences between men and women, for example, than in the differences between someone who is on five social networks and someone who is on only one. These behavioral characteristics have a more significant reach in the marketplace, and mobile opens the door to discovering more behaviors which can help us understand just how far that reach is.

3. Consumer intimacy is the underlying concept that researchers seem to be dancing around as it pertains to mobile.

We’re trying these new methods in order to get closer to the consumer. It makes sense that if we can feel what the consumer feels, we can market better experiences for them.

4. Mobile is here to stay, now let’s prove its value.

The next necessary step I see for mobile is evidence that it actually works. Now that we’ve been exposed to its potential, we need to find out if companies are indeed making better business decisions because of it. What information are we gathering from mobile that we couldn’t get from other methods? Would the best business decisions be out of reach without this information? Those of us diving into the waters of mobile believe it holds some uncharted answers, and now it’s time to prove it.

Jacob Tucker is senior analyst of insights and strategy at uSamp. In this role he supports all aspects of mobile research projects. Prior to joining uSamp, Jacob worked as a research assistant in the Department of Kinesiology at the University of North Texas. He also served as an independent researcher for the Cooper Institute in Dallas. Jacob received his B.A. in Psychology from Howard Payne University and his M.S. in Kinesiology from the University of North Texas.



June 2nd, 2014 at 4:49 pm

9 Tips for Designing Better Research Questionnaires

By The Editors

Designing an effective market research questionnaire is all about approach – a backwards one, that is. Before you can delve into the question-writing process, you need to conceptualize your ideal answers in order to derive the appropriate measurements. Check out these 9 tips for improving your questionnaire.

9 Tips for Designing Better Research Questionnaires from uSamp



May 29th, 2014 at 4:17 pm

uSamp University:
The Wonderful World of the IHUT

By Tina Day, Director of Organizational Development and Quality

uSamp University is our column for breaking down some of market research’s thorniest concepts and terms. We write for both industry newbies and seasoned pros looking for a quick refresher. Because there are often differing schools of thought about the application and value of many of these techniques or methodologies, our intent is not to be the final word, but merely provide an introduction for curious researchers. At the end of each post we’ll also suggest a few links for further reading.

Not to be confused with everyone’s favorite pancake house, an “IHUT,” or simply “HUT,” is, at its most basic, a type of in-home study that involves consumers using and evaluating a product. IHUT stands for in-home usage test, and it has long been one of researchers’ go-to studies for detailed, in-context consumer feedback on anything from pillow cases to, well, pancake mix.

IHUT 101

As the name implies, IHUTs are used to test products with real consumers in their homes. This type of study is particularly useful for testing prototypes before they hit the market, newly released products, or existing products that may be in need of a redesign.

Consumers are shipped the product or sometimes instructed where to purchase it. Their feedback is gathered in follow-up surveys, or, in the case of mobile research, in real-time using smartphones or tablets. IHUTs can give market researchers deep and important insights into many facets of how a product is perceived and used, and how it fits into a consumer’s regular routine. Maybe consumers are overlooking an important step in preparation, or maybe they’re having trouble with the enclosure system. An IHUT can reveal such product challenges.

Here’s what an IHUT can help you do:

  • Learn how consumers interact with the product in a natural environment.
  • Understand sequencing of consumer interaction with the product.
  • Collect in-the-moment consumer feedback about the product as it is used or consumed.
  • Gauge the popularity and satisfaction of the product.
  • Discover new uses for the product.

Going Mobile, Baby!

Mobile technology greatly streamlines home-use testing and can decrease time to field. Mobile can also replace time-consuming, costly follow-up methods such as phone surveys and the outmoded in-home visit.

Perhaps most important, mobile expands what was previously possible with an IHUT. Respondents can offer feedback at every step of their journey with the product – from their first encounter with the product to final use or consumption.

Mobile IHUTs also allow for types of data that we only dreamed of before. For instance, respondents can take pictures and even videos while interacting with a product. Is the packaging too difficult to open? Video tells the frustrating story. Do people find surprising uses for a product that could drive innovation? A picture is worth a thousand words.

Click here to learn more about in-home mobile research solutions.

Simple, But Substantive

The whole point of in-home testing is to get accurate, quality feedback from consumers or potential consumers of your product. To do that, your goals as a researcher should be clear going in, as should your questions.

Here are examples of questions businesses typically hope to answer through in-home testing:

  • How do consumers use my product? It’s likely that the overworked team down in product development overlooked some really cool uses for your product. Consumers won’t.
  • Is the packaging engaging and attractive? This is often your customer’s first encounter with your product, so it can really set the tone for what you can expect from sales.
  • What suggestions for product improvement do customers have? You’d be surprised how honest and thoughtful your consumers can be. They’ll openly tell you about the proverbial good, bad and ugly if you give them the opportunity.
  • How satisfied are consumers with my product overall? If you could only get one data set back, this might be the one. Do they take it to bed at night and tuck it in with them or leave it in the corner of the garage? That could be the make-or-break question.

Size Matters

There are a few key factors to remember with IHUTs, whether mobile or traditional. First, over-recruiting is crucial. Product types, respondent pools, and whether a product is purchased by the respondent or delivered through a fulfillment company all affect completion rates, so determining the sample size is an intricate and important stage. Often, the sample size needs to be well beyond double the responses desired.
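The over-recruiting arithmetic can be made concrete with a small sketch. The completion rate and safety factor below are illustrative assumptions, not uSamp figures; in practice you would derive them from comparable past studies:

```python
import math

def recruits_needed(target_completes: int, completion_rate: float,
                    safety_factor: float = 1.2) -> int:
    """Estimate how many panelists to recruit for an IHUT.

    completion_rate: fraction of recruits expected to finish every
    stage (shipping, usage, follow-up survey) -- an assumed figure here.
    safety_factor: extra padding on top of the rate-based estimate.
    """
    if not 0 < completion_rate <= 1:
        raise ValueError("completion_rate must be in (0, 1]")
    return math.ceil(target_completes / completion_rate * safety_factor)

# 300 completed tests at an assumed 40% end-to-end completion rate:
print(recruits_needed(300, 0.40))  # → 900, triple the desired completes
```

Even a healthy-looking completion rate quickly pushes the recruit count past double the target, which is why prescreened pools matter.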

For this reason, many suppliers have prescreened pools ready to go. They could have tens to hundreds of thousands, or even millions, of consumers who fit varying demographics. These consumers can often be reached using mobile apps and geofencing technology to locate, recruit and validate willing participants. So much for the pen and clipboard days, huh?

To learn more about home-use testing and IHUTs, here are some suggestions for further reading:

Tina Day is uSamp’s Director of Organizational Development and Quality. With over 15 years in the market research industry, she has a well-rounded understanding of the end-to-end research process and uses her expertise to drive training and development efforts throughout the organization. In addition, Tina supports key initiatives related to quality and operational efficiencies, with a strong focus on uSamp’s Mobile Solutions.



May 21st, 2014 at 4:55 pm

Satisficing and Social Desirability Bias:
Is Mobile Poised to Solve These Problems?

By The Editors

Are mobile respondents more honest? We know that mobile lends itself to in-context surveying, but mobile devices themselves may have an enormous impact on how honestly consumers answer questions. Why? Research shows that respondents will often take the path of least resistance when answering difficult survey questions, a phenomenon called satisficing. Additionally, traditional in-store market research requires positioning researchers on-location to ask consumers questions, and that face-to-face interaction causes consumers to want to give the answers they expect researchers want to hear – versus what they really think, a problem known as social desirability bias.

The good news? Mobile helps us get answers while consumers remain in the store, in front of the products, while adding the level of privacy that at-home research provides. Additionally, with the way smartphones have become a natural part of our everyday lives – studies show that we look at our smartphones once every three minutes – consumers don’t find mobile surveys disruptive. Therefore, mobile may just provide the key to unlocking two of market researchers’ thorniest data quality problems.

In this video, Justin Wheeler explains more about how mobile may be poised to solve these problems.

To learn more about this study, visit uSamp at the Market Research in the Mobile World event May 27-30 in Chicago, or stay tuned for a post-event recap.



May 12th, 2014 at 9:47 pm

Creating Your Questionnaire Part II:
If You Don’t Know What You’re Aiming At, You Won’t Hit Anything

By Scott Worthge, VP, Research Solutions

In my first post about creating effective questionnaires, I started with the broad premise that most survey writers follow the wrong process in crafting what they think will be a “good” survey. Oftentimes, survey writers jump into the question-writing process too early. If they followed my advice, they would have a better understanding of what the client needs from those up-front conversations one must have to start the research ball rolling, and would use it as an immediate check on how the survey should be structured.

In the second stage of my three-part survey-writing process, I’m still not ready to delve into question writing just yet. Instead, you must define your measurements first. “Really?” you may ask. “Why start with the answers?” Because the answers will define the questions you need to use, not the other way around!


Measurement Categories: Two Lenses to See the World

Questions are so easy to write when you know what form the answers will take. There are two main types of measurements you will find in surveys; I call them “black and white,” or, the easy stuff, and “color,” a bit more nuanced.


Black and White

This phrase is used all the time to describe something that is unambiguous and easily understood. It’s no different in surveys: The black and white measurements are those that are determined by facts, not feelings or interpretations, and can be further broken down into state of being and state of behavior measurements. State of being includes demographics–age, gender, income, education, ethnicity, employment status, etc. State of behavior is past behavior–what someone has done, when, how often, where (think shopping or vacationing). With these types of questions, the facts are the facts, and the questions are relatively easy to craft.


Color

On the opposite side of the spectrum are the “color” measurements – those topics that are necessary in every survey, but are difficult to pin down properly. Color involves state of mind and state of intention measurements – that vast, slippery slope of thoughts, feelings, perceptions, expectations and the like. These are critical to measure, but can be interpreted in so many ways.

Consider the concept of customer service. How can a consistent, systematic process be developed to create appropriate “color” measurements, since such a topic can take many shapes?  How do you build questions that your client will agree are the “right” ways to gain information for their goals and objectives for the research?

Here’s a simple system for deciding on measurements that I have developed through the years working with clients…

The Diamond Method: A 5-Step Process For Developing “Color” Survey Measurements

  1. Pick a starting point and look for color first. Add in the black and white measurements later, after the heavy lifting. For this example, let’s stick with customer service, and see how one could decide how to address this for a survey questionnaire.
  2. Expand your concept to its widest definition. List all of the potential ways to measure your concept (moving from the tip of the diamond to its widest point). For customer service, I would list things like “time to respond to my inquiry,” “how well did they fix my problem,” “how long did I sit on hold,” “did I get shuffled off to another department,” and so on. The goal is to come up with an exhaustive list of reasonable choices, any of which could be relevant to how customer service could be measured. I also always want input from my team at this stage, as other perspectives will help widen that diamond faster.
  3. Evaluate your list against the client’s goals—and then edit. Here’s the key! Immediately go back to what you’re being hired to do and start filtering the measurements you’ve listed. It’s so easy for researchers and laymen alike to fall in love with their own ideas and focus on that instead of what the client wants. Drill down to those that are their priorities, not yours (and narrow your diamond back toward a point).
  4. Determine how much of your survey should be dedicated to that concept. Take your short(er) list of measurements and see how much of your survey should be occupied by customer service metrics. If customer service is one of a few topics in this survey, keep your measurements (and resulting questions) few. If it is the sole focus of the survey, then you can allocate much more of your questionnaire real estate using the list of possibilities you’ve developed.
  5. Update your client. Lay it all out. Show them what you considered, how you narrowed it down, how it fits within the overall survey length and focus and what you think the key measurements are – for EACH topic to be included. Watch for the vigorous head nodding, then document their approval in an email. If they disagree, repeat the above steps until you pin down the measurements they agree are priorities.

Once approved, you can move forward in confidence. A big bonus of the diamond method is that you can defend, at any time, how you considered an ambiguous topic like customer service, looked at a range of possible measurements and got approval from the client for the most important ones to include in this survey at this time. I’ve been in enough presentations to know that being able to point back to the diamond process is golden for answering a skeptic whose favorite measurement didn’t make the cut.

And voilà – you have all the pieces in place to write thought-out questions that focus on the client’s agreed priorities. Which just so happens to be the next topic in this series: writing questions according to best practices. Stay tuned.

Scott Worthge has more than 25 years of supplier-side market research experience, ranging from small consulting organizations to major international service providers. Prior to uSamp, Scott was a VP at TNS Taylor Nelson Sofres (part of Kantar in North America). Scott has taught marketing research and strategic brand management for the past ten years in UC Berkeley’s Extension program. He recently joined the Advisory Board for the Master of Science in Market Research program at Michigan State University to assist in program development and provide guest lectures. He holds degrees in Economics and Psychology from UCLA, Summa Cum Laude.



May 7th, 2014 at 4:06 pm

The Power of Convenience:
Mobile Market Research in an On-the-Go World


By The Editors


It’s no secret that we at uSamp are excited about mobile technology. Smartphones and tablets open up a whole new realm of market research – often resulting in richer, more interesting data. Mobile enhances in-context product testing by introducing convenience. Consumers can provide feedback and record smartphone video responses while still in the store aisle, and they can continue while at home testing out the products. uSamp VP of product innovation Justin Wheeler recently sat down with Bob Lederer of the famous Research Business Daily Report to talk about what exactly mobile technology can do for market research and the benefits of having another platform to get insights. Watch the entire video interview below.


Written by adrien

April 23rd, 2014 at 6:11 pm