Summary
Nielsen Norman Group (NNG) has conducted, and continues to conduct, extensive research testing various large language model (LLM) tools designed for research synthesis and analysis. Our goal was to determine whether these AI-powered tools could meaningfully accelerate the work of experienced UX researchers. Through rigorous testing across multiple models and specialized research tools, we have found that while a few tools provide modest speed improvements for experienced researchers, none come close to replacing human expertise in research synthesis and analysis.

The core problem is that these tools consistently exhibit critical flaws: they hallucinate findings, fail to identify meaningful patterns in qualitative data, cannot adequately handle nuanced research questions, and produce only superficial, high-level summaries of participant behavior. What makes this particularly dangerous is that AI-generated outputs often carry the veneer of legitimate research results: they look professional and sound plausible. Closer inspection, however, reveals significant gaps, inaccuracies, and missed insights that would mislead stakeholders and lead to poor design decisions. The appearance of competence masks fundamental limitations that make these tools unreliable for serious research work.

While we have found several places in the research process that can benefit from LLM usage, analysis and synthesis consistently fall short. In this talk, I share the specific research we are doing and explain what actually works.
Key Insights
- AI tools frequently produce insight-shaped outputs but often lack the rigor and accuracy of trained human researchers.
- AI moderators cannot currently assess user behavior beyond spoken words, missing key usability observations like failed or inefficient tasks.
- Contextual elements such as environmental interruptions are critical in research but are invisible to AI tools.
- Synthetic users generated by AI tend to produce overly positive, unrealistic feedback that can mislead product teams.
- AI excels at quickly finding semantic connections and grouping codes in large, already coded qualitative datasets.
- Meta-analysis of large repositories using AI can uncover recurring user themes, like change aversion, much faster than manual methods.
- Integrating AI with organizational systems to pull in diverse data sources improves context but requires expert setup and is not yet simple.
- AI's context window limitations cause it to forget earlier input, affecting the accuracy of multi-turn interactions.
- Even trained researchers must use AI outputs cautiously, vetting insights to maintain research quality.
- Effective user research depends on human synthesis, collaboration, and contextual understanding, areas where AI currently fails.
Notable Quotes
"AI can generate insights, but it does not do them as well as a moderately trained human researcher."
"There is a world of difference between what a participant says and what they actually do, and AI misses that completely."
"AI tells you what you want to hear, which is dangerous if you’re making product decisions based on synthetic feedback."
"Our job as researchers is not making reports or interviewing users; it’s providing actionable, correct insights."
"AI tools are incentivized to produce final deliverables, but that’s an output, not the essence of research."
"AI is pretty good at finding semantic patterns among codes after human researchers have done the initial coding."
"Nobody is going to be satisfied by insight-shaped answers or high-level summaries masquerading as breakthroughs."
"AI cannot notice body language, tone, or environmental context during a research session."
"Using AI to scan large archives of research is a game changer for meta-analyses, even if it’s imperfect."
"Well-set-up AI systems pulling data from multiple company sources will have more context, but it’s still limited compared to human understanding."