Summary
It seems like every company is adding a conversational AI chatbot to its website lately, but how do you actually make these experiences valuable and intuitive? Savannah Carlin will present a case study on a conversational AI chatbot—Marqeta Docs AI—that she designed for a developer documentation site in the fintech industry. She will share her insights, mistakes, and perspectives on how to use AI in a meaningful, seamless way, especially for companies like Marqeta that operate in highly regulated industries with strict compliance standards. The talk will use specific examples and visuals to show what makes conversational AI interactions uniquely challenging and the design patterns that can address those challenges, including managing user expectations, handling errors or misunderstandings within the conversation, and ensuring that users can quickly judge the quality of a bot's response. You'll gain a deeper understanding of the intricacies involved in designing interactions for AI, along with practical advice you can apply in your own design processes.

Take-aways

- What to consider before you add AI to your product to ensure it will be valuable, usable, and safe for its intended workflows
- The interactions that are unique to conversational AI experiences and the design patterns that work for them
- Common challenges in designing conversational AI experiences and how to overcome them
Key Insights
- Defining a clear and specific primary use case is crucial before starting any generative AI chatbot project.
- High-quality, thoroughly reviewed training data is foundational to delivering accurate and useful AI outputs.
- Initial state messaging must clearly frame what the chatbot can and cannot help with to reduce irrelevant or off-topic queries.
- Loading indicators for AI text responses should be subtle, with progress reflected by text appearing rather than distracting animations.
- Supporting efficient scrolling and prompt review is vital, since users frequently check and refine their inputs against often-long answers.
- Error states in AI chatbots shift from traditional fixed errors to helping users write better prompts to get more relevant results.
- Transparency about accuracy, AI limitations, and source citations builds user trust, especially in regulated domains like fintech.
- Providing users with prompt-engineering guidance via documentation significantly improves the quality of chatbot interactions.
- Accessibility considerations, like keyboard navigation and screen reader compatibility, must be integrated from the start, especially given the large text outputs.
- Chatbots can reduce customer support friction and encourage users to ask questions they might not have otherwise, enhancing user engagement with the product.
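Two of these patterns—streamed text acting as its own progress indicator, and every answer carrying at least three source links—can be sketched in code. This is a minimal illustration, not Marqeta's implementation; the function and type names are hypothetical.

```typescript
// Hypothetical sketch of two patterns from the talk:
// 1) the answer text is built up token by token, so the growing text
//    itself signals progress (no spinner needed), and
// 2) every answer must cite at least three sources, like citations
//    in a research paper.

type Source = { title: string; url: string };

function renderAnswer(tokens: string[], sources: Source[]): string {
  if (sources.length < 3) {
    // Fail loudly rather than shipping an uncited answer.
    throw new Error("Each answer needs at least three source links");
  }
  // Append tokens one by one; in a real UI each append would repaint,
  // so the appearing letters double as the loading indicator.
  let answer = "";
  for (const token of tokens) {
    answer += token;
  }
  const citations = sources
    .map((s, i) => `[${i + 1}] ${s.title}: ${s.url}`)
    .join("\n");
  return `${answer}\n\nSources:\n${citations}`;
}
```

In a production chat UI the token loop would consume a streamed response and the citation check would run server-side, but the shape of the constraint is the same.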
Notable Quotes
"If you have any doubts about the quality of the training data, do not proceed."
"You want to assist people in framing the interaction and setting their expectations correctly so they know how to be successful."
"The biggest difference with error states in AI bots is helping people write prompts effectively, not just recovering from simple failures."
"Loading text itself is a loading indicator; the letters appearing show progress better than jumpy animations."
"People often forget what they wrote and then want to check their prompt again before refining it."
"Every output should have at least three source links, almost like citations in a research paper."
"We had to be very careful about accuracy because we're in FinTech and compliance is critical."
"People started asking questions to the bot that they wouldn’t have taken the time to email about."
"It’s really important to be clear and transparent about what the tool is good at and what it’s not good at."
"Accessibility testing included making sure everyone could navigate it using a keyboard alone."