Summary
What are AI-mediated experiences made of? What new interactions and UI patterns should be part of your toolkit? How do these new patterns support trust, critical thinking, usability, and accessibility? Watch Josh Clark and Veronika Kindred, authors of our forthcoming book Sentient Design, explore emerging best practices for the design of machine-intelligent experiences. The session focuses on the practical: new interaction patterns, functional patterns, and UI patterns for AI-powered interfaces. Meet the Pinocchio pattern, see how trait tags work, learn to nudge, become a master of inpainting, and so much more. Learn how these solutions are tailored to suit the curious qualities of machine intelligence. You’ll see how they build healthy mental models for users, creating realistic expectations for system capabilities. Plus, learn to use these new patterns to guide behaviors in ways that amplify user judgment and agency, instead of replacing them.
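The talk doesn't prescribe an implementation, but the "trait tags" and "nudge" patterns boil down to offering preset modifiers a user can toggle instead of inventing prompt wording from scratch. As a rough sketch (the `build_prompt` helper and its tag format are assumptions, not from the talk):

```python
def build_prompt(base: str, trait_tags: list[str]) -> str:
    """Trait tags act as guided nudges: preset modifiers the user toggles,
    appended to the base prompt rather than typed freeform."""
    if not trait_tags:
        return base
    return base + "\nStyle traits: " + ", ".join(trait_tags)

# Example: the UI exposes tags as tappable chips; selecting two of them
# produces a richer prompt without any freeform prompt engineering.
prompt = build_prompt("Write a tagline", ["playful", "concise"])
```

Because the tags are visible, named options, they also serve as affordances: the user learns what the system can do by seeing what it offers.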
Key Insights
- AI interfaces must balance open-ended prompts with guided "nudges" or trait tags to reduce users' discovery deficit.
- Version vaults that track multiple AI-generated outputs help manage generation overload and encourage exploration.
- AI systems are probabilistic and often confidently wrong, so designs must clearly communicate uncertainty and offer alternate scenarios.
- Summarizing large data sets, especially product or hotel reviews, is an effective AI UI pattern for reducing cognitive load.
- Users often mistakenly expect AI to provide a single true answer; AI results are more accurately a cloud of plausible answers.
- Adaptive feed designs like the "adventure nav" pattern give users agency to steer algorithmic content, avoiding the monotony of typical feeds.
- Sentient design embraces proactive but deferential AI assistance that offers smart defaults without overriding user control.
- In high-stakes AI uses (e.g., navigation), interfaces should transparently flag risks and uncertain recommendations to invite critical thinking.
- Plugging in local or custom LLMs brings opportunities for privacy and energy efficiency, but also UX challenges around complexity and model choice.
- The future will involve managing and overseeing AI agents still in their infancy, requiring new UX paradigms for correction, validation, and goal setting.
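The version vault insight above describes a data pattern more than a visual one: keep every generation, let users compare and revert. A minimal sketch of what that might look like (class and method names are hypothetical, not from the talk):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Version:
    vid: int       # sequential version id
    prompt: str    # prompt that produced this output
    output: str    # the AI-generated result


@dataclass
class VersionVault:
    """Keeps every AI-generated draft so users can review, compare, and
    revert instead of losing work to each regeneration."""
    versions: list = field(default_factory=list)
    current: Optional[int] = None

    def save(self, prompt: str, output: str) -> int:
        v = Version(len(self.versions), prompt, output)
        self.versions.append(v)
        self.current = v.vid
        return v.vid

    def revert(self, vid: int) -> Version:
        # Reverting doesn't delete anything; it just moves the pointer.
        self.current = vid
        return self.versions[vid]

    def compare(self, a: int, b: int) -> tuple:
        return (self.versions[a].output, self.versions[b].output)
```

Because nothing is ever destroyed, regeneration becomes a low-stakes action, which is what encourages the exploration the talk highlights.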
Notable Quotes
"AI is supposed to make things easier, but it usually comes with some knock-on problems."
"LLMs don’t give you answers—they give you something that most likely resembles the fact you’re looking for."
"Discovery deficit happens when you don’t know what or how to do things in an AI system, often because of lacking affordances."
"Version vaults save multiple versions of work, letting users review, compare, and revert, which encourages exploration."
"Summarization isn’t just a feature—it’s becoming baked into experiences to help make sense of overwhelming information."
"There’s an illusion of one true answer with AI, but often there’s a cloud of plausible answers depending on how you ask."
"Adventure nav pattern gives users the agency to choose different paths in a feed, rather than consuming a single algorithmic stream."
"LLMs are always confident but not always correct, which makes it hard for users to know when to trust them."
"Sentient design is about proposing direction without imposing it—laying a road in front of the user, but letting them choose."
"When AI systems become unreliable, we need productive humility in our interfaces and ways to engage human agency."
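The quotes about confident-but-wrong models and "productive humility" suggest a concrete UI behavior: hedge the presentation when confidence is low and surface the cloud of plausible answers rather than a single one. A sketch under assumed names (`render_answer` and its threshold are illustrative, not from the talk):

```python
def render_answer(answer: str, confidence: float, alternatives: list[str]) -> str:
    """Wraps a model answer in hedging copy when confidence is low, and
    shows alternate plausible answers instead of one asserted 'truth'."""
    if confidence >= 0.9:
        return answer
    lines = [f"Possibly: {answer} (confidence {confidence:.0%})"]
    if alternatives:
        lines.append("Other plausible answers: " + ", ".join(alternatives))
    return "\n".join(lines)
```

The design choice is that uncertainty changes the *framing*, not just a hidden score: low-confidence output reads differently, which invites the critical thinking the speakers call for.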