Summary
Documentation technology is the foundation of modern healthcare delivery. Convoluted, redundant, and excessive documentation is a pervasive problem that causes inefficiency across the industry. At IncludedHealth, we are developing an AI-assisted documentation tool that summarizes and documents conversations between patients and their care providers. A care provider can push one button and have their entire patient encounter captured in a succinct, standardized format. When we launched a pilot, the results were staggering: within 6 months, we demonstrated a 64% reduction in time per encounter. However, despite these promising results, challenges specific to the demands of the healthcare domain remain. As our team continues to develop solutions to meet these challenges, we gain even more clarity on what it takes to design a human-backed, AI-powered healthcare system.

Takeaways

From this session, you can expect to learn the following:

- Developing AI design in healthcare requires close collaboration between end users and your data science team
- Piloting GenAI solutions may be more effective than traditional prototyping
- Trading accuracy for efficiency is a barrier to adopting GenAI tools in healthcare
- GenAI design in healthcare requires establishing critical boundaries as well as a good understanding of cognitive processing
- Other factors to consider when designing AI solutions for service-based industries are understanding how training might be impacted, the importance of standardization vs. personalization of data output, and the need for more autonomy and control elements due to the consequences of unpredictable output errors
Key Insights
- Generative AI can reduce healthcare documentation time by over 60% but doesn't eliminate the need for manual editing and human review.
- Focusing AI tools initially on low-risk use cases like chat encounter summaries mitigates potential clinical harm.
- Large Language Models (LLMs) excel at summarizing text but struggle with capturing exact clinical details and non-verbal cues.
- Designing AI tools requires managing user expectations to prevent disappointment and frustration over imperfect outputs.
- A simple one-button UI with options to regenerate and manually edit notes improves user workflow efficiency and error recovery.
- Implicit metrics such as the 'edit rate'—the fraction of user edits on AI-generated text—help monitor AI output quality unobtrusively (see the sketch after this list).
- Traditional prototyping methods are less effective for AI; incremental live pilots provide critical learning about AI's unpredictable outputs.
- Operational factors like quality assurance metrics impact user attitudes when AI-generated notes lower scores despite time savings.
- Human-centered collaboration between designers, data scientists, and users is essential to shape AI capabilities and discover unforeseen problems.
- AI models must remain static (non-learning) in healthcare to comply with regulations, creating challenges for ongoing improvements.
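The edit rate described above is straightforward to compute. Below is a minimal sketch in Python, assuming a difflib-based character diff between the AI draft and the provider's final note; the function name `edit_rate` and the diffing approach are illustrative assumptions, since the talk only defines the metric as human-added characters over total characters.

```python
import difflib

def edit_rate(ai_draft: str, final_note: str) -> float:
    """Approximate edit rate: human-added characters over total characters.

    Illustrative sketch only; the talk defines the ratio but not how the
    character diff is computed, so difflib is an assumption here.
    """
    if not final_note:
        return 0.0
    matcher = difflib.SequenceMatcher(a=ai_draft, b=final_note)
    # Characters of the AI draft that survived into the final note.
    kept = sum(block.size for block in matcher.get_matching_blocks())
    human_added = len(final_note) - kept
    return human_added / len(final_note)

# Hypothetical example: a provider appends one detail to the AI draft.
draft = "Patient reports mild headache for 3 days."
final = "Patient reports mild headache for 3 days, worse in the morning."
print(f"edit rate: {edit_rate(draft, final):.2f}")  # ~0.35
```

Because the ratio is computed from edits providers are already making, it can be tracked continuously without surveying users after every note, which is what keeps the metric unobtrusive.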
Notable Quotes
"We saw documentation time reduced by 64% after six months of using the AI tool."
"Documentation is important for liability and useful for patient handoffs, so notes must be concise yet detailed."
"Healthcare is not a silver bullet for AI; a lot of context comes from non-verbal cues where generative AI doesn’t apply."
"The unpredictable black box nature of LLMs means we had to zoom in on small problems first before seeing the bigger picture."
"We emphasized a simple one-button workflow with manual editing and regeneration to handle inevitable AI errors."
"The edit rate, our metric of human-added characters over total characters, tracks AI output quality without burdening users."
"Users were initially excited but six months later expressed frustration and uncertainty about the tool’s usefulness."
"We believe cognitive biases like frequency bias and expectation bias affected how users perceived AI errors over time."
"Our AI tool doesn’t learn from ongoing use due to regulatory constraints, which led to user frustration when performance didn’t improve."
"People are the plot twist in this journey — despite promising AI, only human connection reveals new problems and solutions."