
Journeying toward AI-assisted documentation in healthcare
Wednesday, June 5, 2024 • Designing with AI 2024
Speaker: Jennifer Kong

Summary

Documentation technology is the foundation of modern healthcare delivery. Convoluted, redundant, and excessive documentation is a pervasive problem that causes inefficiency across the industry. At IncludedHealth, we are developing an AI-assisted documentation tool that summarizes and documents conversations between patients and their care providers. A care provider can push one button and have their entire patient encounter captured in a succinct, standardized format. The results of our pilot launch were staggering: within six months, we demonstrated a 64% reduction in documentation time per encounter. Despite these promising results, challenges remain that are specific to the demands of the healthcare domain. As our team continues to develop solutions to meet these challenges, we gain even more clarity on what it takes to design a human-backed, AI-powered healthcare system.

Takeaways

From this session, you can expect to learn the following:

  • Developing AI design in healthcare requires close collaboration between end users and your data science team

  • Piloting GenAI solutions may be more effective than traditional prototyping

  • Trading accuracy for efficiency is a barrier to adopting GenAI tools in healthcare

  • GenAI design in healthcare requires establishing critical boundaries as well as a good understanding of cognitive processing

  • Other factors to consider when designing AI solutions for service-based industries include how training might be impacted, the importance of standardization vs. personalization of data output, and the need for more autonomy and control elements given the consequences of unpredictable output errors

Key Insights

  • Generative AI can reduce healthcare documentation time by over 60% but doesn't eliminate the need for manual editing and human review.

  • Focusing AI tools initially on low-risk use cases like chat encounter summaries mitigates potential clinical harm.

  • Large Language Models (LLMs) excel at summarizing text but struggle with capturing exact clinical details and non-verbal cues.

  • Designing AI tools requires managing user expectations to prevent disappointment and frustration over imperfect outputs.

  • A simple one-button UI with options to regenerate and manually edit notes improves user workflow efficiency and error recovery.

  • Implicit metrics such as the 'edit rate' (the fraction of user edits on AI-generated text) help monitor AI output quality unobtrusively; a minimal sketch of such a metric appears after this list.

  • Traditional prototyping methods are less effective for AI; incremental live pilots provide critical learning about AI's unpredictable outputs.

  • Operational factors like quality assurance metrics impact user attitudes when AI-generated notes lower scores despite time savings.

  • Human-centered collaboration between designers, data scientists, and users is essential to shape AI capabilities and discover unforeseen problems.

  • AI models must remain static (non-learning) in healthcare to comply with regulations, creating challenges for ongoing improvements.
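
The edit rate mentioned above can be computed from a character-level comparison of the AI draft and the note the clinician ultimately saves. The sketch below is illustrative only: the function name and the diff-based approach are assumptions for clarity, not IncludedHealth's actual implementation.

```python
from difflib import SequenceMatcher

def edit_rate(ai_draft: str, final_note: str) -> float:
    """Fraction of the final note's characters that were human-added.

    Characters in the final note that also appear, in order, in the AI
    draft are counted as machine-generated; everything else is treated
    as a human edit.
    """
    if not final_note:
        return 0.0
    matcher = SequenceMatcher(None, ai_draft, final_note)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    human_added = len(final_note) - matched
    return human_added / len(final_note)

# A note the clinician barely touched scores near 0;
# a heavily rewritten note approaches 1.
print(edit_rate(
    "Patient reports mild headache.",
    "Patient reports mild headache and occasional nausea.",
))
```

Tracked over time, a rising edit rate would flag degrading output quality without asking clinicians to file explicit feedback.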

Notable Quotes

"We saw documentation time reduced by 64% after six months of using the AI tool."

"Documentation is important for liability and useful for patient handoffs, so notes must be concise yet detailed."

"Healthcare is not a silver bullet for AI; a lot of context comes from non-verbal cues where generative AI doesn’t apply."

"The unpredictable black box nature of LLMs means we had to zoom in on small problems first before seeing the bigger picture."

"We emphasized a simple one-button workflow with manual editing and regeneration to handle inevitable AI errors."

"The edit rate, our metric of human-added characters over total characters, tracks AI output quality without burdening users."

"Users were initially excited but six months later expressed frustration and uncertainty about the tool’s usefulness."

"We believe cognitive biases like frequency bias and expectation bias affected how users perceived AI errors over time."

"Our AI tool doesn’t learn from ongoing use due to regulatory constraints, which led to user frustration when performance didn’t improve."

"People are the plot twist in this journey — despite promising AI, only human connection reveals new problems and solutions."

