Summary
Documentation technology is the foundation of modern healthcare delivery, yet convoluted, redundant, and excessive documentation is a pervasive problem that causes inefficiency across the industry. At IncludedHealth, we are developing an AI-assisted documentation tool that summarizes and documents conversations between patients and their care providers. A care provider can push one button and have their entire patient encounter captured in a succinct and standardized format. The results of our pilot launch were staggering: within six months, we demonstrated a 64% reduction in time per encounter. Despite these promising results, challenges specific to the demands of the healthcare domain remain. As our team continues to develop solutions to meet these challenges, we gain even more clarity on what it takes to design a human-backed, AI-powered healthcare system.
Takeaways
From this session, you can expect to learn the following:
• Developing AI design in healthcare requires close collaboration between end users and your data science team
• Piloting GenAI solutions may be more effective than traditional prototyping
• Trading accuracy for efficiency is a barrier to adopting GenAI tools in healthcare
• GenAI design in healthcare requires establishing critical boundaries as well as a good understanding of cognitive processing
• Other factors to consider when designing AI solutions for service-based industries include understanding how training might be impacted, the importance of standardization vs. personalization of data output, and the need for more autonomy and control elements due to the consequences of unpredictable output errors
Key Insights
• Generative AI can reduce healthcare documentation time by over 60%, but it doesn't eliminate the need for manual editing and human review.
• Focusing AI tools initially on low-risk use cases like chat encounter summaries mitigates potential clinical harm.
• Large Language Models (LLMs) excel at summarizing text but struggle with capturing exact clinical details and non-verbal cues.
• Designing AI tools requires managing user expectations to prevent disappointment and frustration over imperfect outputs.
• A simple one-button UI with options to regenerate and manually edit notes improves user workflow efficiency and error recovery.
• Implicit metrics such as the 'edit rate' (the fraction of user edits on AI-generated text) help monitor AI output quality unobtrusively; a rough calculation sketch follows this list.
• Traditional prototyping methods are less effective for AI; incremental live pilots provide critical learning about AI's unpredictable outputs.
• Operational factors like quality assurance metrics impact user attitudes when AI-generated notes lower scores despite time savings.
• Human-centered collaboration between designers, data scientists, and users is essential to shape AI capabilities and discover unforeseen problems.
• AI models must remain static (non-learning) in healthcare to comply with regulations, creating challenges for ongoing improvements.
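The 'edit rate' mentioned above lends itself to a straightforward calculation. The sketch below is a minimal illustration of how such a metric might be computed, assuming a character-level diff between the AI draft and the clinician's final note; the edit_rate function name, the use of Python's difflib, and the sample note text are illustrative assumptions, not IncludedHealth's actual implementation.

import difflib

def edit_rate(ai_draft: str, final_note: str) -> float:
    # Approximate "human-added characters over total characters":
    # count characters that appear in the final note but not in the AI draft.
    matcher = difflib.SequenceMatcher(None, ai_draft, final_note)
    added = sum(
        j2 - j1  # characters contributed by the human editor
        for op, i1, i2, j1, j2 in matcher.get_opcodes()
        if op in ("insert", "replace")
    )
    return added / len(final_note) if final_note else 0.0

# Example: a clinician appends a detail the model missed.
draft = "Patient reports mild headache for two days."
final = "Patient reports mild headache for two days; denies vision changes."
print(edit_rate(draft, final))  # fraction of the final note added by the human

Tracked over many encounters, a rising edit rate would signal degrading output quality without asking users to rate every note.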
Notable Quotes
"We saw documentation time reduced by 64% after six months of using the AI tool."
"Documentation is important for liability and useful for patient handoffs, so notes must be concise yet detailed."
"Healthcare is not a silver bullet for AI; a lot of context comes from non-verbal cues where generative AI doesn’t apply."
"The unpredictable black box nature of LLMs means we had to zoom in on small problems first before seeing the bigger picture."
"We emphasized a simple one-button workflow with manual editing and regeneration to handle inevitable AI errors."
"The edit rate, our metric of human-added characters over total characters, tracks AI output quality without burdening users."
"Users were initially excited but six months later expressed frustration and uncertainty about the tool’s usefulness."
"We believe cognitive biases like frequency bias and expectation bias affected how users perceived AI errors over time."
"Our AI tool doesn’t learn from ongoing use due to regulatory constraints, which led to user frustration when performance didn’t improve."
"People are the plot twist in this journey — despite promising AI, only human connection reveals new problems and solutions."