Summary
AI-based modeling plays an ever-increasing role in our daily lives. Our personal data and preferences are being used to build models that serve up everything from movie recommendations to medical services, with a precision designed to make life decisions easier. But what happens when those models are not precise enough, or are unusable because of incomplete data, incorrect interpretations, or questionable ethics? What's more, how do we reconcile the dynamic and iterative nature of AI models (which are developed by a few) with the expansive nature of the human experience (which is defined by many), so that we optimize and scale UX for AI applications equitably? This talk will push our understanding of the sociotechnical systems that surround AI technologies, including those developing the models (i.e., model creators) and those for whom the models are created (i.e., model customers). What considerations should we keep top of mind? How might the research practice evolve to help us critique and deliver inclusive and responsible AI systems?
Key Insights
• AI systems in healthcare often embed racial biases that exacerbate disparities, particularly impacting Black and Hispanic populations.
• Algorithms based on past healthcare spending can unfairly dictate future levels of care, disadvantaging poorer patients.
• AI hiring tools can perpetuate bias by training on historically discriminatory data, reducing trust among candidates.
• Dark patterns in UI design, when combined with AI, may manipulate users into unintended decisions, especially vulnerable groups.
• Deepfakes, created via AI neural networks, pose serious risks of misinformation, reputational harm, and psychological damage.
• Facial recognition software shows significant racial and gender bias, performing worst on darker-skinned females, as shown by Joy Buolamwini.
• Biases in AI extend beyond technical factors to include systemic and human biases embedded in organizations and data collection.
• Ethical failures like the Tuskegee Syphilis Study highlight the urgency of integrating ethics and do-no-harm principles into AI research and design.
• Women and minorities remain vastly underrepresented in technical AI roles, limiting the diversity of perspectives in AI development.
• Emerging regulations like GDPR and CPRA hold organizations accountable for ethical data use, reinforcing the responsibility of AI practitioners.
Notable Quotes
"Imagine going to the hospital for an emergency and the doctor doesn’t believe you’re in pain because AI says so."
"Algorithms that adjust care based on race don’t actually help and embed disparities instead."
"Dark patterns are not new in UX, but AI amplifies their impact by deceiving users in new ways."
"Deepfakes can disrupt democratic elections and inflict lasting psychological harm on victims."
"Biases out of sight are biases out of mind — facial recognition misidentifies Black faces leading to false arrests."
"All commercially available facial recognition software perform worse on darker females."
"Doing no harm is a core tenet of user experience research, whether or not you are trained in IRB processes."
"Only 12% of AI researchers globally are women, and 6% of professional software developers are women of color."
"Policy like the California Privacy Rights Act requires us to explain data use and comply with user deletion requests."
"The best transformation happens by asking the right questions, even when the problem feels overwhelming."