The Prompt

March 16, 2026

Where Human Judgment Still Matters in AI Qual

AI can speed qualitative workflows, but human judgment drives value. Learn why interpretation and consulting define the future of qual.

Editor’s Note: At the 2026 annual QRCA conference, one of the most useful sessions I attended was Susan Saurage-Altenloh’s Human-First Insight in an AI World: Future-Proofing the Qualitative Researcher. I’m pairing this one with my recent piece on AI moderation because the two talks belong together. Lauren McCluskey’s session helped clarify when AI can reasonably conduct a conversation. Susan’s session tackled the next question, and honestly, it may be the more important one: What happens after the conversation, when AI starts shaping the interpretation? That is where a lot of the real risk sits. Not in the transcript cleanup, not in the draft summary … but in the meaning-making.   

AI Can Speed the Workflow; It Can’t Own the Interpretation

I mean, that’s the bottom line right now. Early in her QRCA 2026 conference presentation, Susan Saurage-Altenloh made a point that I think the industry needs to absorb quickly: in her framing, “the question for research is no longer whether AI will be used.”

That decision has already been made. AI is already inside the platforms, the workflows, and the client expectations. It is cleaning transcripts, drafting guides, generating summaries, and offering instant “analysis.” 

So no, I’m not offering a POV in the “should we use AI in qual” debate. The better question, in my opinion, is this: how do we use AI without compromising the very thing clients actually hire us for?

Not speed. Not formatting. Not a polished summary.

They hire us for judgment.

That framing aligns directly with the signal I wrote about in my earlier QRCA piece: the work is shifting; the value is moving toward synthesis, advisory, and decision support. 

The quallies who endure will not be selling hours or moderating skills, no matter how stellar. They’ll be selling interpretation and consulting. They will be the ones making meaning out of the findings.

Where the Risk Actually Starts: Interpretation

Susan was not anti-AI, and that matters. She was practical. AI is genuinely useful for:

  • guide drafting
  • logistics
  • transcript cleanup
  • first-pass summaries
  • early categorization support

Taking those tasks off a quallie's plate is a real efficiency gain. It gives them a running start. As Susan put it in the Q&A, “It gets you going.” That is exactly right. It can absolutely save time on the front end. And if you’re not saving time somewhere right now, you are probably using the tools poorly. (I stand by that.)   

The point of the session was not “don’t use AI.” It was: use AI where it supports insight, but don’t let it replace the parts that shape insight. That distinction is the whole game.

Flattening Is the Failure Mode to Watch

Susan gave the room a term that is surfacing in other industries as well: flattening. And she repeated it clearly: “It [AI] flattens the data.” AI smooths out the tension in participant language and gives you back something polished, coherent, yet emotionally dead.

You’ve seen human versions of this, maybe even included a few of the following phrases in your own reporting:

  • “Participants found the product helpful and easy to use.”
  • “Overall sentiment was mixed.”
  • “Trust was an important driver of loyalty.”
  • “Participants were motivated by saving time.”

None of those are technically false. But they can hide the useful part, the things a human also looks at: the outliers, the emotional hot buttons, the nuggets that are insightful or thought-provoking.

Susan’s warning here was sharp and worth quoting directly: AI summaries can look authoritative, “but it often erases precisely the components of human expression that should drive strategy.” 

In the session, she showed how a human-restored interpretation brings the contrast back:

  • who felt differently
  • where the friction happened
  • what the contradiction reveals
  • what strategic decision is now on the table

That is qualitative craft. And it is exactly what gets lost when teams mistake AI outputs for conclusions instead of starting points. 

Flattening Is More Dangerous Than Hallucination

People talk a lot about hallucinations with AI. Fair enough. Hallucinations are a known failure mode.

But flattening is constant. It’s more subtle because it often looks professional. It sounds right. It feels client-ready. Again, you’ve likely used flat language to convey ambiguity.

That’s what makes it dangerous. You get a polished output and think, “Great, we’re done.”

But if the tool has collapsed disagreement into consensus, removed emotional intensity, or stripped out minority voices, you may be presenting something clear that is no longer true enough to guide action.

Susan put the core issue in plain language: “AI wants everything to make sense. It wants to please you. Humans don’t work that way.” That is exactly why qual still needs qual researchers. 

The Boundary: What Supports Insight vs. What Shapes It

Susan offered a simple filter I wish more teams would operationalize immediately:

  1. Does nuance matter here?
  2. Would my client trust AI to handle this?
  3. Does this task shape insight or support insight? 

That third question is the one I would put on a wall. Because this is where I see the confusion right now. Many teams are treating all workflow steps as equal. They aren’t.

Some tasks support the work:

  • administrative prep
  • cleaning
  • formatting
  • draft clustering

Some tasks shape the work:

  • restoring emotional sequence
  • interpreting contradiction
  • making sense of outliers
  • connecting behavior to business decisions

Susan said this plainly too: “If it shapes insight, keep it human.” That line alone should be in every qual team’s internal AI policy. 

If it shapes meaning, it needs a human in it.

What Clients Need From Quallies Now

Another thing I appreciated about Susan’s session was that she addressed client communication directly, because this is also where the market is changing.

Clients do not always know what they are losing when they get a neat, flattened summary. They may love the speed. They may love the one-page output. They may not yet see what was smoothed away. That is not a reason to panic. It is an opening.

Susan’s framing here was practical and useful: clients want to know not just that you use AI responsibly, but that “your interpretation is human in the very best possible and professional sense.” 

The opportunity for qualitative researchers now is to explain, clearly and confidently:

  • where AI helps
  • where you use it
  • where you do not
  • and why your interpretation layer is still essential

Not in a defensive way. Not in a “trust me, I’m artisanal” way. In a professional way. This is where your value proposition sharpens: I use AI for speed. I use human judgment for truth.

What I’m Taking Forward from This Session

Here is the signal I’m taking from Susan’s talk, and it complements what I wrote in my AI moderation piece:

  • AI can absolutely improve parts of the workflow
  • AI can also degrade insight quality if left unchecked
  • The biggest risk is not just bad moderation, but bad interpretation
  • The future-proof qualitative researcher is not the one who resists AI
  • It is the one who can define the boundary between efficiency and meaning

Susan closed with a line that captures the moment well: “transparent, ethical, human first practices are becoming a competitive advantage.” That is the shift. 

And if we’re serious about protecting the craft, this is where we need to get much more explicit, both in our teams and with our clients. Because “human-first” should be a methodological choice, not something you read in a sales pitch.

Frequently Asked Questions About Human-First Qualitative Research in an AI Workflow

What does “human-first insight” mean in qualitative research?

Human-first insight means using AI to support efficiency while preserving human responsibility for interpretation, context, and meaning-making. In practice, it means researchers do not outsource nuance, contradiction, or strategic judgment to automated outputs. 

What is “flattening” in AI-assisted qualitative analysis?

Flattening is when AI-generated summaries smooth out emotional intensity, disagreement, and contextual nuance into generic, polished language. The output may sound clear, but it often removes the tension that makes findings actionable. 

Which qualitative research tasks are safe to automate with AI?

AI is often useful for support tasks such as transcript cleanup, first-pass summaries, guide drafting, logistics, and early categorization. These tasks can improve speed and reduce admin burden when researchers remain actively involved in reviewing outputs. 

Which qualitative research tasks should remain human-led?

Tasks that shape insight should remain human-led, including interpreting contradiction, restoring emotional sequence, identifying outliers, reading cultural context, and connecting findings to business decisions. 

How should researchers explain AI use to clients?

Researchers should be transparent about where AI is used, where it is not used, and why. The strongest positioning is not anti-AI. It is a clear, professional explanation that AI supports speed while human expertise protects insight quality and strategic meaning. 

Is this the same debate as AI moderation?

Related, but not identical. AI moderation focuses on what happens during the interview itself. Human-first insight focuses on what happens after the interview, especially during analysis and synthesis, where flattened outputs can distort meaning even when fieldwork appears successful.  

Karen Lynch

Head of Content at Greenbook
