Summary
It seems like every company is adding a conversational AI chatbot to their website lately, but how do you actually go about making these experiences valuable and intuitive? Savannah Carlin will present a case study on a conversational AI chatbot—Marqeta Docs AI—that she designed for a developer documentation site in the fintech industry. She will share her insights, mistakes, and perspectives on how to use AI in a meaningful, seamless way, especially for companies like Marqeta that operate in highly regulated industries with strict compliance standards. The talk will use specific examples and visuals to show what makes conversational AI interactions uniquely challenging and the design patterns that can address those challenges. These include managing user expectations, handling errors or misunderstandings within the conversation, and ensuring that users can quickly judge the quality of a bot's response. You'll gain a deeper understanding of the intricacies involved in designing interactions for AI, along with practical advice you can apply in your own design processes.

Take-aways

• What to consider before you add AI to your product to ensure it will be valuable, usable, and safe for its intended workflows
• The interactions that are unique to conversational AI experiences and the design patterns that work for them
• Common challenges in designing conversational AI experiences and how to overcome them
Key Insights
• Defining a clear and specific primary use case is crucial before starting any generative AI chatbot project.
• High-quality, thoroughly reviewed training data is foundational to delivering accurate and useful AI outputs.
• Initial state messaging must clearly frame what the chatbot can and cannot help with to reduce irrelevant or off-topic queries.
• Loading indicators for AI text responses should be subtle, with progress reflected by text appearing rather than distracting animations.
• Supporting efficient scrolling and prompt review is vital, since users frequently check and refine their inputs against often long answers.
• Error states in AI chatbots shift from traditional fixed errors to helping users write better prompts to get more relevant results.
• Transparency about accuracy, AI limitations, and source citations builds user trust, especially in regulated domains like FinTech.
• Providing users with prompt engineering guidance via documentation significantly improves the quality of chatbot interactions.
• Accessibility considerations, like keyboard navigation and screen reader compatibility, must be integrated from the start, especially given the large text outputs.
• Chatbots can reduce customer support friction and encourage users to ask questions they might not have otherwise, enhancing user engagement with the product.
Notable Quotes
"If you have any doubts about the quality of the training data, do not proceed."
"You want to assist people in framing the interaction and setting their expectations correctly so they know how to be successful."
"The biggest difference with error states in AI bots is helping people write prompts effectively, not just recovering from simple failures."
"Loading text itself is a loading indicator; the letters appearing show progress better than jumpy animations."
"People often forget what they wrote and then want to check their prompt again before refining it."
"Every output should have at least three source links, almost like citations in a research paper."
"We had to be very careful about accuracy because we're in FinTech and compliance is critical."
"People started asking questions to the bot that they wouldn’t have taken the time to email about."
"It’s really important to be clear and transparent about what the tool is good at and what it’s not good at."
"Accessibility testing included making sure everyone could navigate it using a keyboard alone."