March 16, 2026
AI can speed qualitative workflows, but human judgment drives value. Learn why interpretation and consulting define the future of qual.
Editor’s Note: At the 2026 annual QRCA conference, one of the most useful sessions I attended was Susan Saurage-Altenloh’s Human-First Insight in an AI World: Future-Proofing the Qualitative Researcher. I’m pairing this one with my recent piece on AI moderation because the two talks belong together. Lauren McCluskey’s session helped clarify when AI can reasonably conduct a conversation. Susan’s session tackled the next question, and honestly, it may be the more important one: What happens after the conversation, when AI starts shaping the interpretation? That is where a lot of the real risk sits. Not in the transcript cleanup, not in the draft summary … but in the meaning-making.
I mean, that’s the bottom line right now. Early in her QRCA 2026 conference presentation, Susan Saurage-Altenloh made a point that I think the industry needs to absorb quickly. In her framing, “the question for research is no longer whether AI will be used.”
That decision has already been made. AI is already inside the platforms, the workflows, and the client expectations. It is cleaning transcripts, drafting guides, generating summaries, and offering instant “analysis.”
So no, I’m not offering a point of view on the “should we use AI in qual” debate. The better question, in my opinion, is this: how do we use AI without compromising the very thing clients actually hire us for?
Not speed. Not formatting. Not a polished summary.
They hire us for judgment.
That framing aligns directly with the signal I wrote about in my earlier QRCA piece: the work is shifting; the value is moving toward synthesis, advisory, and decision support.
The quallies who endure will not be selling hours or moderating skills, no matter how stellar. They’ll be selling interpretation and consulting. They will be the ones making meaning out of the findings.
Susan was not anti-AI, and that matters. She was practical. AI is genuinely useful for support tasks like transcript cleanup, first-pass summaries, guide drafting, logistics, and early categorization.
Taking those tasks off a quallie's plate is a real efficiency gain. It gives them a running start. As Susan put it in the Q&A, “It gets you going.” That is exactly right. It can absolutely save time on the front end. And if you’re not saving time somewhere right now, you are probably using the tools poorly. (I stand by that.)
The point of the session was not “don’t use AI” – it was to use it where it supports insight, but don’t let it replace the parts that shape insight. That distinction is the whole game.
Susan gave the room a term that is already circulating in other industries: flattening. And she repeated it clearly: “It [AI] flattens the data.” AI smooths out the tension in participant language and gives you back something polished, coherent, yet emotionally dead.
You’ve seen human versions of this; you may even have used flattening phrases in your own reporting. Such statements are rarely technically false. But they can hide the useful part that a human also looks for: the outliers, the emotional hot buttons, the nuggets that are insightful or thought-provoking.
Susan’s warning here was sharp and worth quoting directly: AI summaries can look authoritative, “but it often erases precisely the components of human expression that should drive strategy.”
In the session, she showed how a human-restored interpretation brings the contrast back.
That is qualitative craft. And it is exactly what gets lost when teams mistake AI outputs for conclusions instead of starting points.
People talk a lot about hallucinations with AI. Fair enough. Hallucinations are a known failure mode.
But flattening is constant, and it’s more subtle because it often looks professional. It sounds right. It feels client-ready. Again, you’ve likely used flat language to convey ambiguity.
That’s what makes it dangerous. You get a polished output and think, “Great, we’re done.”
But if the tool has collapsed disagreement into consensus, removed emotional intensity, or stripped out minority voices, you may be presenting something clear that is no longer true enough to guide action.
Susan put the core issue in plain language: “AI wants everything to make sense. It wants to please you. Humans don’t work that way.” That is exactly why qual still needs qual researchers.
Susan offered a simple three-question filter I wish more teams would operationalize immediately.
That third question is the one I would put on a wall. Because this is where I see the confusion right now. Many teams are treating all workflow steps as equal. They aren’t.
Some tasks support the work: transcript cleanup, first-pass summaries, guide drafting, logistics, early categorization.
Some tasks shape the work: interpreting contradiction, restoring emotional sequence, identifying outliers, reading cultural context, connecting findings to business decisions.
Susan said this plainly too: “If it shapes insight, keep it human.” That line alone should be in every qual team’s internal AI policy.
If it shapes meaning, it needs a human in it.
Another thing I appreciated about Susan’s session was that she addressed client communication directly. Because this is also where the market is changing.
Clients do not always know what they are losing when they get a neat, flattened summary. They may love the speed. They may love the one-page output. They may not yet see what was smoothed away. That is not a reason to panic. It is an opening.
Susan’s framing here was practical and useful: clients want to know not just that you use AI responsibly, but that “your interpretation is human in the very best possible and professional sense.”
The opportunity for qualitative researchers now is to explain, clearly and confidently, where AI is used, where it is not, and why.
Not in a defensive way. Not in a “trust me, I’m artisanal” way. In a professional way. This is where your value proposition sharpens: I use AI for speed. I use human judgment for truth.
Here is the signal I’m taking from Susan’s talk, and it complements what I wrote in my AI moderation piece: the value is moving toward synthesis, advisory, and decision support, and human judgment is what clients are actually paying for.
Susan closed with a line that captures the moment well: “transparent, ethical, human first practices are becoming a competitive advantage.” That is the shift.
And if we’re serious about protecting the craft, this is where we need to get much more explicit, both in our teams and with our clients. Because “human-first” should be a methodological choice, not something you read in a sales pitch.
Human-first insight means using AI to support efficiency while preserving human responsibility for interpretation, context, and meaning-making. In practice, it means researchers do not outsource nuance, contradiction, or strategic judgment to automated outputs.
Flattening is when AI-generated summaries smooth out emotional intensity, disagreement, and contextual nuance into generic, polished language. The output may sound clear, but it often removes the tension that makes findings actionable.
AI is often useful for support tasks such as transcript cleanup, first-pass summaries, guide drafting, logistics, and early categorization. These tasks can improve speed and reduce admin burden when researchers remain actively involved in reviewing outputs.
Tasks that shape insight should remain human-led, including interpreting contradiction, restoring emotional sequence, identifying outliers, reading cultural context, and connecting findings to business decisions.
Researchers should be transparent about where AI is used, where it is not used, and why. The strongest positioning is not anti-AI. It is a clear, professional explanation that AI supports speed while human expertise protects insight quality and strategic meaning.
Human-first insight and AI moderation are related, but not identical. AI moderation focuses on what happens during the interview itself. Human-first insight focuses on what happens after the interview, especially during analysis and synthesis, where flattened outputs can distort meaning even when fieldwork appears successful.