Summary
Drs. DeSutter and Scopelitis discussed how User Experience (UX) researchers can triangulate and enrich information from one-on-one interviews by attending to users’ co-speech gestures—the spontaneous movements that humans make with their hands and body when communicating. Gestures are a “window to the mind” and can reveal unspoken information about users’ emotional states as well as the structure and composition of their mental models. They concluded with a practical guide for efficiently implementing gesture research.
Key Insights
- Gestures provide a non-verbal window into users' mental models, often revealing thoughts not expressed in speech.
- Representational gestures, especially those made in personal gesture space, indicate cognitive processes and implicit imagery.
- Users commonly hold multiple, context-dependent mental models rather than a single static one.
- In interviews, interviewer gestures increase participant gesturing and improve conversational rapport.
- Video interviews pose challenges for capturing gestures fully; camera positioning and prompting can mitigate this.
- Speech-gesture mismatches often signal that users are still constructing a mental model or searching for words.
- Gestures can reveal emotional attachment to, or disengagement from, technology, which influences adoption and retention.
- Mental models can be anchored by recent technology prototypes, such as ChatGPT for understanding AI.
- Structured interview protocols that elicit gesturing and separate talking from tool use optimize gesture data collection.
- Open source motion tracking and gesture analysis tools can aid qualitative research by quantifying gesture patterns.
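The last insight above is about quantifying gesture patterns. As a minimal sketch of what that quantification might look like downstream of any tracking tool, the snippet below computes gesture rate and mean gesture duration from manually coded timestamps; the `GestureEvent` structure and the sample data are hypothetical, not from the talk or any specific tool.

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    """One coded gesture, with start/end in seconds into the recording."""
    start: float
    end: float

def gesture_rate_per_minute(events, duration_seconds):
    """Number of coded gestures per minute of recording."""
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    return len(events) / (duration_seconds / 60.0)

def mean_gesture_duration(events):
    """Average length of a coded gesture, in seconds."""
    if not events:
        return 0.0
    return sum(e.end - e.start for e in events) / len(events)

# Hypothetical coded data from a 5-minute interview segment
events = [
    GestureEvent(12.0, 13.5),
    GestureEvent(40.2, 41.0),
    GestureEvent(95.5, 98.0),
]
print(gesture_rate_per_minute(events, 300))  # 3 gestures over 5 min -> 0.6
print(mean_gesture_duration(events))
```

Simple descriptive statistics like these let researchers compare gesturing across participants or interview phases, complementing the qualitative reading of individual gestures.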
Notable Quotes
"Gestures are a window to the mind."
"Gesture and speech form an integrated system; they reinforce one another."
"We’re really leaving half of our data on the table by not attending to gesture when eliciting mental models."
"Gesture is not computer and smartphone gestures, but spontaneous movements people make with hands and arms."
"Four me gestures happen in that personal gesture space and serve as thinking tools for the speaker."
"When gestures and speech mismatch, it often means the speaker is still refining their mental model."
"Without looking at the gesture, we would have come to a less complete mental model."
"Users have more than one mental model; they can be constructed on the fly depending on context."
"The degree to which the user feels in control with an intelligent agent brings up conversational mental models."
"The more you gesture, the more your interviewee will gesture."