The Future Role of the Researcher Is Taking Shape

As AI accelerates research, the insights role isn't disappearing; it's evolving. Discover how researchers shift from creators to guardians of quality and trust.


A few weeks after Qualtrics X4, and heading into IIEX North America, this is one of the themes I keep coming back to.

It is the quieter but more consequential question sitting underneath all of the talk about AI, synthetic research, automation and democratization: what happens to the role of the researcher when the workflow becomes faster, easier, and increasingly available to people outside the insights function?

It feels like that question is everywhere in this industry right now, even when it isn't overtly stated. Sometimes it shows up directly. But it also appears in conversations about data governance, agentic workflows, augmented methodologies, and synthetic respondents.

I’m more convinced than ever that this may be one of the defining questions facing the industry right now. 

The researcher's job is not disappearing, but it is changing shape. And that shape looks less like producing every piece of work by hand and more like protecting the quality of what moves forward in a world full of fast, polished, and increasingly ungoverned answers.

Shift From Executing Research To Shaping Workflows

For years, one core part of the researcher's value came from executing the process: designing the study, building the instrument, fielding the work, analyzing the results, and presenting the findings. Much of that still matters, but some parts of the process are becoming easier to assist, automate, or distribute.

Jordan Harper, Principal AI Thought Leader at Qualtrics, offered a useful glimpse of what that might look like. Talking about agentic systems, he described a workflow in which AI could help turn a business problem into a more structured research plan: “What an agentic system could do in the future is this: you give it your broad research question, your business problem, and then ask, ‘How might the best quantitative researcher in the world break that down into a series of questions?’ Let’s iterate on that survey, change this, change that, test it again, and then bring the result back to the researcher.”

That shifts the frame. The researcher is no longer just the one manually building every component. The researcher becomes, increasingly, the one who knows what good research design should look like, what deserves scrutiny, and where the process still needs human intervention.

That is a different kind of authority. One that is more critical than ever in a fast-paced business world filled with challenges that demand answers.

More Time Editing, Pressure-Testing, and Applying Judgment

Harper made another comment that stayed with me because it was so ordinary and so revealing. “I’m a really good editor,” he said. “I like being given something, given a draft that I can then improve. But I can procrastinate at the beginning of a process if I’ve got to create the draft as well. That’s the bit that takes me longer than getting a draft to sort of work with.”

At first, that may sound more relevant to writers than researchers. But the underlying point applies here too, and it gets at a larger shift.

As AI tools get better at producing first drafts, whether those are summaries or question sets, outputs or frameworks, the human value may increasingly lie in what happens next:

  • knowing what is missing
  • knowing what is off
  • spotting what sounds polished but is not actually useful
  • and improving what the machine began.

Those are not small things. In research, as in any kind of writing, the dangerous part is believing that the draft your AI tool created is already good enough.

Acting as a Strategic Advisor Across the Business

One of the clearest shifts I’ve heard discussed this year is that insight work is no longer staying neatly inside the insights function.

Ali Henriques, Head of Qualtrics Market Research, put it bluntly when we spoke: “I cannot stop the ad agency from going to the marketing team and selling them a persona bot. They’ve got one. All they asked for was the brand tracking data, and that’s ours. I have to share that with them. They are my stakeholder.”

That, to me, represents another shift.

The issue is no longer whether researchers will use AI (they will) or whether they will use it well (they have to).

It is that other teams, other partners, and other vendors will have direct access to AI-powered tools that can generate something that looks and sounds like insight, without necessarily going through the same research discipline.

Researchers won’t be able to control who has access to AI. They won’t be able to stop stakeholders from experimenting with new tools or vendors from bringing AI-generated outputs directly into the business. But they can still define what level of confidence is acceptable, what requires human review, what counts as good evidence, and where overconfidence or weak logic should be challenged.

That changes the role of the researcher in a fundamental way. The job will become less about controlling every step of the process and more about protecting the standard of what moves forward. 

Researchers have to become the people who know what deserves confidence, what needs more scrutiny, and what should not move forward yet, and who can advise the business accordingly.

Mitigating the Growing Risk of Bad AI Outputs

Henriques made the stakes plain in another moment of candor: “All it’s going to take is one incredibly catastrophic decision that was made off one of these AI-powered things. And we’re going to feel like, ‘I didn’t have anything to do with it.’ But that still sucks. That sucks for the collective us.”

That may sound dramatic, but it is not hard to imagine what she means.

As more teams gain access to systems that can simulate analysis, generate summaries, or produce plausible-looking recommendations, the risk is not merely that the outputs are imperfect. The risk is that they are polished enough to be persuasive, fast enough to be tempting, and detached enough from research discipline to create bad decisions at scale.

That changes the researcher’s role in a very practical way. The function is no longer there just to answer research questions, or even business questions. It is there to help prevent the organization from trusting the wrong answers too quickly.

Informed Adaptation Is Now Part Of The Job

This does not mean researchers should become anti-AI holdouts. Quite the opposite.

One of the strongest currents I'm sensing is that professionals who stay relevant will be the ones who learn how the new tools work, where they help, and where they need oversight. The goal should be informed adaptation.

That may be the most important career lesson of the moment.

The future researcher is unlikely to win by trying to preserve every old workflow intact. But neither should the future researcher surrender too much of the craft in the name of modernity. The real opportunity is to move upward: to become more valuable where evidence has to be interpreted, weighed, and turned into a decision.

That means stronger instincts around method fit. Better language for explaining trust and risk. More comfort directing AI-supported work without being dazzled by it. And more confidence in the uniquely human parts of the job, especially the parts that are hard to automate because they depend on context, skepticism, and real-world sense-making.


Karen Lynch

Head of Content at Greenbook
