
May 16, 2022

Designing Surveys Optimized for Modern Sampling Technologies

How to improve your survey design.

by Karine Pepin

Co-Founder at The Research Heads

In a market starved for survey participants, one of the hottest topics in market research these days is earnings per click (EPC). EPC measures how much sample suppliers in a marketplace (e.g. Lucid, Cint) can expect to earn per respondent sent into your survey.

EPC is used to determine the cost per interview (CPI). It is also a measure of survey health for sample providers, who monitor EPC during fieldwork. From a sample supplier’s standpoint, studies with low EPC are more difficult to monetize, so it is to their benefit to direct traffic to surveys that are easier to complete, such as those with a higher incidence rate (IR) or a shorter length of interview (LOI).

It wasn’t until recently that this model, where suppliers direct traffic to maximize EPC, became a real challenge for researchers. Now in a supply crisis for participants (especially thoughtful ones!), those of us who do not optimize our surveys to maintain a high EPC will struggle during fieldwork.

While increasing the CPI may solve the problem in some cases, it is often not enough in a world where participant demand far exceeds supply. A more sustainable strategy is to focus on improving survey conversion: the higher your conversion, the higher the EPC, the more traffic you’ll get.
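To make the link between conversion and EPC concrete, here is a rough back-of-the-envelope sketch. The formula and numbers are illustrative assumptions, not a marketplace’s actual payout logic; real platforms also factor in screen-outs, quota fulls, and quality removals:

```python
def estimated_epc(cpi: float, completes: int, clicks: int) -> float:
    """Rough EPC estimate: revenue earned per respondent clicked into the survey.

    Only completed interviews pay out at the CPI; screen-outs, quota fulls,
    and drop-offs earn nothing -- which is why conversion drives EPC.
    """
    conversion = completes / clicks
    return cpi * conversion

# Two hypothetical surveys, both priced at a $3.00 CPI:
print(estimated_epc(3.00, completes=150, clicks=1000))  # healthy, ~ $0.45 per click
print(estimated_epc(3.00, completes=40, clicks=1000))   # starved of traffic, ~ $0.12
```

At the same CPI, the survey converting 15% of clicks earns the supplier nearly four times more per click than the one converting 4% — which is exactly why suppliers route traffic away from low-conversion surveys.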

 

Iterate to optimize

The dynamic nature of the marketplace means that survey design should be more iterative than it has been in the past. Gone are the days when you could launch a survey, then sit back and relax for a few days until the end of the fieldwork.

Instead, we now have an opportunity to inspect and incrementally adapt our survey design. Because we work with software and not paper questionnaires anymore, we can gauge the performance of a survey early and make small changes that can improve the EPC. Participants are constantly giving us feedback about our surveys and it is to our own detriment to ignore their voices and rigidly carry on with a sub-optimal survey.

 

How to improve survey metrics

Below are five ways to enhance your surveys.

 

1) Plan to iterate on your design

Designing surveys optimized for the marketplace starts with being aware of your survey metrics during fieldwork. Listen to what the metrics, and your participants, are telling you. Expect to iterate on your original design.

The “soft launch” is an opportunity to assess the health of the survey metrics early, and proactively improve your design before your conversion rate falls off a cliff.

 

2) Optimize the length of interview

There are two things displayed on a panelist’s dashboard that inform their decision of whether or not to participate: the incentive (e.g. points), and the LOI. Putting yourself in the panelist’s shoes, how likely are you to choose to complete a survey with an LOI longer than 20 minutes? Shorter LOIs not only encourage a healthy flow of traffic to your survey, but they will also likely yield more thoughtful responses.


In addition, it’s important to think about the LOI for people who do not qualify for your study. It is tempting to include all sorts of sizing questions in the screener, but because the LOI of your screened-out participants is also monitored, a long screener (i.e. 5+ minutes) will negatively impact the overall performance of your survey.

If you suspect your LOI is discouraging participants, you can incrementally iterate on your design: consider hiding low-scoring attributes or removing those “on-the-fence” questions to quickly shorten the survey.

 

3) Lower the drop-off rate

Aside from a short LOI and making a survey mobile-friendly, survey engagement is key to preventing drop-off. If your participants are giving you feedback that leads you to believe that a survey is too bland, consider “story-fying” your survey. Story-fying means giving your survey more of a storybook feel with an enticing landing page, a pleasant user interface, illustrations, a conversational tone, and a creative and engaging sense of progression.

After the soft launch, review the questions that result in the highest drop-off rate, then make small adjustments to improve the experience. Can you adjust the language to be more friendly and engaging? Can you change the question format to make it more digestible?

 

4) Improve the incidence rate

After the soft launch, you should have a sense of where the incidence rate stands. Knowing that a low IR could potentially slow traffic to your survey, it’s wise to look at where people are screening out and consider relaxing some of the less-important screener criteria, if you can.

Furthermore, examine the IR by quota group. This will give you an idea of how difficult fieldwork will be near the end, when the most difficult quotas are left open to fill. Will you be looking for a needle in a haystack?
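As a sketch of what that quota-level check might look like after a soft launch — the groups and tallies below are hypothetical numbers for illustration, not real field data:

```python
# Hypothetical soft-launch tallies by quota group:
# "entered" counts participants who started the screener,
# "qualified" counts those who passed it.
soft_launch = {
    "18-34": {"entered": 400, "qualified": 120},
    "35-54": {"entered": 350, "qualified": 140},
    "55+":   {"entered": 250, "qualified": 25},
}

for group, tally in soft_launch.items():
    ir = tally["qualified"] / tally["entered"]
    flag = "  <-- needle in a haystack" if ir < 0.15 else ""
    print(f"{group}: IR = {ir:.0%}{flag}")
```

A breakdown like this makes the hard-to-fill groups visible early: the low-IR cell is the quota most likely to stall fieldwork at the end, so it is the first place to consider relaxing screener criteria.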

 

5) Lower the reconciliation rate

Everyone in the industry agrees that we absolutely cannot compromise when it comes to data quality. Yet, what qualifies as “good” data remains subjective, a theoretical gray area. By extension, reconciliation is also rather a subjective exercise.

Generally, to prevent the completes you toss out from impacting your conversion, consider using a “quality control redirect,” which tells your sample supplier in real time that the participant was terminated for quality reasons rather than as a normal screen-out.
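A minimal sketch of how such an exit redirect might be constructed. The endpoint, parameter names, and status codes below are hypothetical; each sample supplier defines its own redirect format, so check your marketplace’s documentation:

```python
from urllib.parse import urlencode

# Hypothetical supplier endpoint and status codes (illustrative only).
SUPPLIER_REDIRECT = "https://supplier.example.com/exit"

STATUS = {
    "complete": 1,
    "screen_out": 2,
    "quality_term": 3,  # the "quality control redirect": failed quality checks
}

def exit_url(respondent_id: str, outcome: str) -> str:
    """Build the URL a respondent is sent to when leaving the survey."""
    params = urlencode({"rid": respondent_id, "status": STATUS[outcome]})
    return f"{SUPPLIER_REDIRECT}?{params}"

print(exit_url("abc123", "quality_term"))
```

Flagging the termination with a distinct status at exit time, instead of silently tossing the complete during reconciliation, is what keeps the removal from counting against your conversion.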

 

Incrementally adapt, or perish

Survey metrics are more transparent than ever to both sample suppliers and panelists. Let’s pay attention! These metrics are communicating important details about how participants feel about your survey. We ought to listen to the data, listen to the participants, and course-correct our surveys to optimize EPC.

The opportunity is before us to make adjustments during fieldwork to establish and maintain healthy survey metrics. Designing surveys that are optimized for modern sampling technologies not only benefits researchers, but also improves the participant experience and the participation rate. It’s the little things: small, incremental iterations that can result in huge, positive impacts.

Tags: data collection, data quality, incentives, panels, survey design



