This video is featured in the AI and UX playlist.
Summary
What are AI-mediated experiences made of? What new interactions and UI patterns should be part of your toolkit? How do these new patterns support trust, critical thinking, usability, and accessibility? Watch Josh Clark and Veronika Kindred, authors of our forthcoming book Sentient Design, explore emerging best practices for the design of machine-intelligent experiences. The session focuses on the practical: new interaction patterns, functional patterns, and UI patterns for AI-powered interfaces. Meet the Pinocchio pattern, see how trait tags work, learn to nudge, become a master of inpainting, and so much more. Learn how these solutions are tailored to suit the curious qualities of machine intelligence. You’ll see how they build healthy mental models for users, creating realistic expectations for system capabilities. Plus, learn to use these new patterns to guide behaviors in ways that amplify user judgment and agency, instead of replacing them.
Key Insights
- AI interfaces must balance open-ended prompts with guided 'nudges' or trait tags to reduce user discovery deficits.
- Version vaults that track multiple AI-generated outputs help manage generation overload and encourage exploration.
- AI systems are probabilistic and often confidently wrong, so designs must clearly communicate uncertainty and provide alternate scenarios.
- Summarization of large data sets, especially product or hotel reviews, is an effective AI UI pattern to reduce cognitive load.
- Users often mistakenly expect AI to provide a single true answer, but AI results more accurately represent a cloud of plausible answers.
- Adaptive feed designs like the 'adventure nav' pattern give users agency to steer algorithmic content, avoiding the monotony of typical feeds.
- Sentient design embraces proactive but deferential AI assistance that offers smart defaults without overriding user control.
- In high-stakes AI use (e.g., navigation), interfaces should transparently flag risks and uncertain recommendations to invite critical thinking.
- Plugging in local or custom LLMs brings opportunities for privacy and energy efficiency but also UX challenges around complexity and model choice.
- The future will involve managing and overseeing infant AI agents, requiring new UX paradigms for correction, validation, and goal setting.
Notable Quotes
"AI is supposed to make things easier, but it usually comes with some knock-on problems."
"LLMs don’t give you answers—they give you something that most likely resembles the fact you’re looking for."
"Discovery deficit happens when you don’t know what to do or how to do it in an AI system, often because of missing affordances."
"Version vaults save multiple versions of work, letting users review, compare, and revert, which encourages exploration."
"Summarization isn’t just a feature—it’s becoming baked into experiences to help make sense of overwhelming information."
"There’s an illusion of one true answer with AI, but often there’s a cloud of plausible answers depending on how you ask."
"Adventure nav pattern gives users the agency to choose different paths in a feed, rather than consuming a single algorithmic stream."
"LLMs are always confident but not always correct, which makes it hard for users to know when to trust them."
"Sentient design is about proposing direction without imposing it—laying a road in front of the user, but letting them choose."
"When AI systems become unreliable, we need productive humility in our interfaces and ways to engage human agency."
Josh Clark & Veronika Kindred, Sentient Design: New Design Patterns for New Experiences (3rd of 3 seminars), February 12, 2025