
Don't botch the bot: Designing interactions for AI
Tuesday, June 4, 2024 • Designing with AI 2024
Speaker: Savannah Carlin

Summary

It seems like every company is adding a conversational AI chatbot to their website lately, but how do you actually go about making these experiences valuable and intuitive? Savannah Carlin will present a case study on a conversational AI chatbot, Marqeta Docs AI, that she designed for a developer documentation site in the fintech industry. She will share her insights, mistakes, and perspectives on how to use AI in a meaningful, seamless way, especially for companies like Marqeta that operate in highly regulated industries with strict compliance standards. The talk will use specific examples and visuals to show what makes conversational AI interactions uniquely challenging and the design patterns that can address those challenges. These include managing user expectations, handling errors or misunderstandings within the conversation, and ensuring that users can quickly judge the quality of a bot's response. You'll gain a deeper understanding of the intricacies involved in designing interactions for AI, along with practical advice you can apply in your own design processes.

Take-aways

  • What to consider before you add AI to your product to ensure it will be valuable, usable, and safe for its intended workflows

  • The interactions that are unique to conversational AI experiences and the design patterns that work for them

  • Common challenges in designing conversational AI experiences and how to overcome them

Key Insights

  • Defining a clear and specific primary use case is crucial before starting any generative AI chatbot project.

  • High-quality, thoroughly reviewed training data is foundational to delivering accurate and useful AI outputs.

  • Initial state messaging must clearly frame what the chatbot can and cannot help with to reduce irrelevant or off-topic queries.

  • Loading indicators for AI text responses should be subtle, with progress reflected by the text itself appearing rather than distracting animations (a minimal sketch of this streaming pattern follows this list).

  • Supporting efficient scrolling and easy review of the original prompt is vital, since users frequently check and refine their inputs against the bot's often lengthy answers.

  • Error handling in AI chatbots shifts from traditional, fixed error states toward helping users write better prompts that return more relevant results.

  • Transparency about accuracy, AI limitations, and source citations builds user trust, especially in regulated domains like FinTech (the second sketch after this list shows one way to attach source links to an answer).

  • Providing users with prompt engineering guidance via documentation significantly improves the quality of chatbot interactions.

  • Accessibility considerations, like keyboard navigation and screen reader compatibility, must be integrated from the start, especially given the large text outputs.

  • Chatbots can reduce customer support friction and encourage users to ask questions they might not have otherwise, enhancing user engagement with the product.
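
As a companion to the loading-indicator insight above, here is a minimal browser-side sketch of the streaming pattern the talk describes: the answer renders as it arrives, so the appearing letters double as the progress indicator. The /api/ask endpoint, request shape, and fallback copy are illustrative assumptions, not details of Marqeta Docs AI.

    // Hypothetical sketch (not Marqeta's actual code): stream the bot's answer
    // into the page so the arriving text is itself the loading indicator.
    async function streamAnswer(prompt: string, output: HTMLElement): Promise<void> {
      output.textContent = "";
      const res = await fetch("/api/ask", {            // assumed endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
      });
      if (!res.ok || !res.body) {
        // Error state as prompt coaching rather than a dead end.
        output.textContent =
          "The bot could not answer that. Try rephrasing with more detail about your use case.";
        return;
      }
      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // Each appended chunk shows progress without a jumpy animation.
        output.textContent += decoder.decode(value, { stream: true });
      }
    }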
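
A second, equally hypothetical sketch illustrates the citation insight: every answer carries a list of source links, in the spirit of the talk's "at least three source links" guideline. The BotAnswer shape and renderAnswer helper are invented for illustration only.

    // Hypothetical answer shape: every output carries source links, like
    // citations in a research paper, so readers can judge accuracy themselves.
    interface BotAnswer {
      text: string;
      sources: { title: string; url: string }[]; // aim for at least three
    }

    function renderAnswer(answer: BotAnswer, container: HTMLElement): void {
      const body = document.createElement("p");
      body.textContent = answer.text;
      container.appendChild(body);

      const list = document.createElement("ol");       // ordered, citation-style
      for (const src of answer.sources) {
        const item = document.createElement("li");
        const link = document.createElement("a");
        link.href = src.url;
        link.textContent = src.title;
        item.appendChild(link);
        list.appendChild(item);
      }
      container.appendChild(list);
    }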

Notable Quotes

"If you have any doubts about the quality of the training data, do not proceed."

"You want to assist people in framing the interaction and setting their expectations correctly so they know how to be successful."

"The biggest difference with error states in AI bots is helping people write prompts effectively, not just recovering from simple failures."

"Loading text itself is a loading indicator; the letters appearing show progress better than jumpy animations."

"People often forget what they wrote and then want to check their prompt again before refining it."

"Every output should have at least three source links, almost like citations in a research paper."

"We had to be very careful about accuracy because we're in FinTech and compliance is critical."

"People started asking questions to the bot that they wouldn’t have taken the time to email about."

"It’s really important to be clear and transparent about what the tool is good at and what it’s not good at."

"Accessibility testing included making sure everyone could navigate it using a keyboard alone."
