Summary
AI-based modeling plays an ever-increasing role in our daily lives. Our personal data and preferences are being used to build models that serve up anything from movie recommendations to medical services with a precision designed to make life decisions easier. But, what happens when those models are not precise enough or are unusable because of incomplete data, incorrect interpretations, or questionable ethics? What's more, how do we reconcile the dynamic and iterative nature of AI models (which are developed by a few) with the expansive nature of the human experience (which is defined by many) so that we optimize and scale UX for AI applications equitably? This talk will push our understanding of the sociotechnical systems that surround AI technologies, including those developing the models (i.e., model creators) and those for whom the models are created (i.e., model customers). What considerations should we keep top of mind? How might the research practice evolve to help us critique and deliver inclusive and responsible AI systems?
Key Insights
• AI systems in healthcare often embed racial biases that exacerbate disparities, particularly impacting Black and Hispanic populations.
• Algorithms based on past healthcare spending can unfairly dictate future levels of care, disadvantaging poorer patients.
• AI hiring tools can perpetuate bias by training on historically discriminatory data, reducing trust among candidates.
• Dark patterns in UI design, when combined with AI, may manipulate users into unintended decisions, especially vulnerable groups.
• Deepfakes, created via AI neural networks, pose serious risks of misinformation, reputational harm, and psychological damage.
• Facial recognition software shows significant racial and gender bias, performing worst on darker-skinned women, as demonstrated by Joy Buolamwini's research.
• Biases in AI extend beyond technical factors to include systemic and human biases embedded in organizations and data collection.
• Ethical failures like the Tuskegee Syphilis Study highlight the urgency of integrating ethics and do-no-harm principles into AI research and design.
• Women and minorities remain vastly underrepresented in technical AI roles, limiting the diversity of perspectives in AI development.
• Emerging regulations like GDPR and CPRA hold organizations accountable for ethical data use, reinforcing the responsibility of AI practitioners.
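The spending-as-proxy insight above has a concrete mechanism worth making visible. The following is a minimal hypothetical sketch (not from the talk, with invented patient data) of how an algorithm that ranks patients by past healthcare spending can deprioritize a poorer patient who has the same medical need but has had less access to care:

```python
# Hypothetical illustration: two patients with identical medical need,
# but different historical spending driven by access, not health.
patients = [
    {"name": "A", "true_need": 8, "past_spending": 12000},  # greater access to care
    {"name": "B", "true_need": 8, "past_spending": 4000},   # equal need, less access
]

def rank_for_extra_care(patients):
    """Rank patients for care-management programs, using past spending
    as the 'risk' signal -- the flawed proxy the insight describes."""
    return sorted(patients, key=lambda p: p["past_spending"], reverse=True)

ranked = rank_for_extra_care(patients)
# Patient A is prioritized over Patient B despite identical true_need,
# because spending stands in for health.
print([p["name"] for p in ranked])  # → ['A', 'B']
```

The fix studied in the literature is to predict a direct measure of health (e.g., number of active conditions) rather than cost, which removes the access-driven gap from the ranking signal.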
Notable Quotes
"Imagine going to the hospital for an emergency and the doctor doesn’t believe you’re in pain because AI says so."
"Algorithms that adjust care based on race don’t actually help and embed disparities instead."
"Dark patterns are not new in UX, but AI amplifies their impact by deceiving users in new ways."
"Deepfakes can disrupt democratic elections and inflict lasting psychological harm on victims."
"Biases out of sight are biases out of mind — facial recognition misidentifies Black faces leading to false arrests."
"All commercially available facial recognition software perform worse on darker females."
"Doing no harm is a core tenet of user experience research, whether or not you are trained in IRB processes."
"Only 12% of AI researchers globally are women, and 6% of professional software developers are women of color."
"Policy like the California Privacy Rights Act requires us to explain data use and comply with user deletion requests."
"The best transformation happens by asking the right questions, even when the problem feels overwhelming."