
March 11, 2022

A Researcher Who Became a Respondent (Part Two): Mapping the Panelist Journey

An insider’s look into the online panelist experience.


by Karine Pepin

Co-Founder at The Research Heads

Editor’s Note: This is the second installment of A Researcher Who Became a Respondent. View part one here, and part three here.


Eight weeks ago, after seeing an ad for paid surveys on Twitter, I decided to join a couple of research panels. Now, several weeks into my panelist journey, I can’t help but wonder why I hadn’t done this sooner – I’ve learned so much! In this article, I describe my journey as a panelist, including what I learned about the keys to a good experience, and the impact of panel sourcing on data quality.

As a researcher, I have strong opinions about which research panels tend to yield better quality data. Now a panelist, I had the chance to explore the “good” and “bad” panels from a different perspective. After getting the lay of the land, I deliberately enrolled in both the panels I work with as a researcher and the panels I generally avoid, with the goal of understanding how the panelist experience impacts data quality.

Before this experiment, I didn’t realize the extent to which the panelist experience varies from panel to panel. The panels that, as a researcher, I had identified as “good” also tended to offer a superior experience for panelists, and they followed better research practices. The infographic below illustrates the ups and downs of my journey.

[Infographic: The Panelist Journey – Karine Pepin]

What not to do

The majority of panelists participate in market research surveys to earn money (Engage 2.0 report), but how the panels help participants achieve that goal varies widely.

I discovered that some strategies have a very negative impact on data quality because they help dishonest participants qualify for surveys. For example, some panels let participants sort surveys by incentive (high to low), which almost certainly leads them straight to B2B surveys. Others give away the survey topic by inserting a leading pre-screener (e.g. “We are only looking for organizations that use Brand X as a cloud provider!”) or by displaying survey feedback publicly (e.g. “I loved this survey about cars!”). These practices make the market research industry an easy target for fraudsters.


It can be done right


From a panelist perspective, my ideal experience is simple: qualify fast to avoid a “pinball machine” experience, qualify often, get rewarded, rinse and repeat. Are there ways to achieve this while maintaining the integrity of the research process? What does it look like when it’s done right?

After battling through countless frustrations, I tallied the things my favorite research panels do well:

  • Matching: Getting this right is so critical because it means I, as a participant, qualify for more surveys without wasting my time in an infinite loop of screeners. The surveys for which I am a match pop up in my dashboard as they become available, and the panel even indicates which ones are a perfect match for me.
  • Relevance: Good matching also means the surveys I see are more relevant to my interests, which makes participation more engaging. The panel also indicates what the general theme is (e.g. sports, consumer trends, etc.) and illustrates the theme with a fun icon.
  • Incentives: As a participant who qualifies for surveys more often, I end up with more money.
  • Loyalty: It’s easier to maximize my earnings by sticking to one panel. As such, that good experience translates into more loyalty to the panel.
  • Peak time: The downside of a panel that does such a good job matching is that I get asked to complete fewer surveys. However, my dashboard mitigates this by indicating “peak hours” so I know when to expect more requests to complete a survey.
  • Design: The general look and feel of the dashboard are more sleek, modern, and appealing.

So far, this eight-week journey has been an eye-opening experience for me as a market researcher. While panel sourcing can sometimes feel far removed from the final insights, my experience confirms that it is a key factor in the outcome. While it is a reasonable goal for a panel company to want to convert as much traffic as possible, some of the strategies used lead to poor data quality. The best way to improve conversion among honest panelists while also maintaining the integrity of our research practices is to improve the technology that matches participants to surveys.
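To make that idea concrete, here is a minimal, hypothetical sketch (in Python) of what profile-based matching can look like: the panel stores profile data up front and only surfaces surveys whose targeting criteria the panelist already satisfies, instead of routing everyone through screener after screener. The field names and criteria below are invented for illustration and do not reflect any specific panel’s system.

from dataclasses import dataclass, field

@dataclass
class Survey:
    topic: str
    incentive_usd: float
    criteria: dict = field(default_factory=dict)  # profile key -> set of acceptable values

def matches(profile: dict, survey: Survey) -> bool:
    # A panelist qualifies only if every targeting criterion is met by stored profile data.
    return all(profile.get(key) in allowed for key, allowed in survey.criteria.items())

def dashboard(profile: dict, surveys: list[Survey]) -> list[Survey]:
    # Only pre-qualified surveys ever reach the panelist's dashboard.
    return [s for s in surveys if matches(profile, s)]

# Example: a Canadian researcher who owns a car sees two of the three surveys.
panelist = {"country": "CA", "role": "researcher", "owns_car": True}
available = [
    Survey("consumer trends", 2.50, {"country": {"CA", "US"}}),
    Survey("cloud providers (B2B)", 40.00, {"role": {"IT decision maker"}}),
    Survey("automotive", 5.00, {"owns_car": {True}}),
]
for s in dashboard(panelist, available):
    print(f"{s.topic}: ${s.incentive_usd:.2f}")

In a real panel the profile would be richer and the matching logic far more sophisticated, but the principle is the same: the fewer screeners an honest panelist has to sit through, the better the experience and the data.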



