CEO Series
March 22, 2019
Four years after the provocative article: observations from inside the online sample machinery.
Editor’s Note: Keen-eyed readers of this blog will have noticed several articles recently about sample quality, or sometimes the lack thereof, and what to do about it. Here, Scott Weinberg adds his POV on the pervasiveness of bad actors and actions that need to be taken to help researchers deal with the issue. Good researchers must always be on their guard. The good news is that there are solutions based on things like blockchain that could help significantly; they need to become more pervasively employed.
In January 2015, GreenBook published my observations of several years of a front row seat within the online sample business. That article broke some glass, opened doors, generated more articles, gave me my 15 minutes, etc…life moved on.
Except that, a few times a year, someone still reaches out to me, either to thank me or to commiserate. Moreover, three panel firms reached out to me along the way to show how they're tackling the quality issue(s).
So, what triggered this follow-up? Three months ago, the head of a (non-USA) MR firm reached out to me. Their firm was having trouble finding quality sample sources. In fact, they were running their own in-house set of parallel cells on a tracker across various sources (Google, Survata, traditional, programmatic) to approach the issue with rigor. We got to talking.
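The idea behind those parallel cells is simple: field the same tracker through several sources and compare a key metric by source; a cell that diverges sharply from the rest deserves scrutiny. A minimal sketch of that comparison, using Python's standard library and hypothetical source names and scores (not the firm's actual data):

```python
from statistics import mean, pstdev

# Hypothetical per-respondent scores on the same tracker question,
# keyed by the sample source that supplied each respondent
scores_by_source = {
    "source_a": [7, 8, 7, 6, 8, 7],
    "source_b": [7, 7, 8, 7, 6, 8],
    "source_c": [3, 9, 2, 10, 1, 9],  # wildly inconsistent cell
}

def summarize(scores_by_source):
    """Mean and spread per source. A source whose cell diverges
    sharply from the others is a candidate for closer inspection."""
    return {
        src: {"mean": round(mean(vals), 2), "sd": round(pstdev(vals), 2)}
        for src, vals in scores_by_source.items()
    }

for src, stats in summarize(scores_by_source).items():
    print(src, stats)
```

This is deliberately crude; in practice you would compare full distributions (and things like speeding and straight-lining rates), but even a mean-and-spread table per source surfaces an outlier cell quickly.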
Sidebar: they were at an industry conference last year and said a sales rep was proudly offering "$1 CPIs." I guess it had to happen. Maybe it's been happening for a long time; I don't monitor CPIs anymore.
During their RoR tracker source scrutiny, the numerical scores came in consistently. So that's good. But on a subsequent, regular study soon after, the supplier (a major one) delivered a data set infested with ridiculousness: at least a third of the records were bogus. The bad apples were spotted only via the open-end comments, by the way. I was forwarded the supplier's apology-slash-explanation. An excerpt is below:
“Our sincere apologies for the fraud that occurred on these projects. We have gone ahead and blocked those users on our end so they won’t be able to complete any more surveys.
We can confirm that some of those completes are from the same person creating multiple accounts on our panel, using different IP addresses and completing the study over and over again. The user has different information on file for each account and the user passed all of our security checks as well as a third parties security checks that we use for additional insurance against this type of behavior. While other cases are of users who were casual in their responses.
We have gone ahead and blocked those users on our end so they won’t be able to complete any more surveys. Also, we are currently researching the recruitment source this user came in from and trying to find more information there to make sure this individual or similar other ones can no longer enter and create multiple accounts without being detected.”
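It's telling that the fraud above (one person completing the study repeatedly under different accounts) was caught only through the open ends. A near-duplicate check on open-end text is cheap to run and catches exactly this pattern. Here is a minimal sketch using Python's standard library, with hypothetical respondent IDs and answers:

```python
from difflib import SequenceMatcher
from itertools import combinations

def normalize(text):
    # Lowercase and collapse whitespace so trivial edits don't hide duplicates
    return " ".join(text.lower().split())

def flag_duplicate_open_ends(responses, threshold=0.9):
    """Return (id_a, id_b, ratio) for pairs of respondents whose
    open-end answers are near-identical (ratio >= threshold)."""
    flagged = []
    for (id_a, text_a), (id_b, text_b) in combinations(responses.items(), 2):
        ratio = SequenceMatcher(None, normalize(text_a), normalize(text_b)).ratio()
        if ratio >= threshold:
            flagged.append((id_a, id_b, round(ratio, 2)))
    return flagged

# Hypothetical open-end answers from a study
answers = {
    "r101": "I really like the product because it saves me time.",
    "r102": "i really like the product because it saves me time",
    "r103": "Too expensive for what it offers.",
}
print(flag_duplicate_open_ends(answers))
```

Pairwise comparison is O(n²), so for large samples you would hash normalized answers or shingle them first; but the point stands that open-end similarity is a fraud signal the automated "security checks" in the apology above apparently never looked at.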
Thank you for reading this follow-up. Looking forward to reading your comments, and always happy to chat with others interested in this topic. Lately, I've been getting up to speed on what may be a new methodology for the MR/insights space. I hope to be writing about that later this year. Thank you!