Summary
AI-based modeling plays an ever-increasing role in our daily lives. Our personal data and preferences are being used to build models that serve up anything from movie recommendations to medical services, with a precision designed to make life decisions easier. But what happens when those models are not precise enough, or are rendered unusable by incomplete data, incorrect interpretations, or questionable ethics? What's more, how do we reconcile the dynamic and iterative nature of AI models (which are developed by a few) with the expansive nature of the human experience (which is defined by many), so that we optimize and scale UX for AI applications equitably? This talk will push our understanding of the sociotechnical systems that surround AI technologies, including the people developing the models (i.e., model creators) and those for whom the models are created (i.e., model customers). What considerations should we keep top of mind? How might the research practice evolve to help us critique and deliver inclusive and responsible AI systems?
Key Insights
• AI systems in healthcare often embed racial biases that exacerbate disparities, particularly impacting Black and Hispanic populations.
• Algorithms based on past healthcare spending can unfairly dictate future levels of care, disadvantaging poorer patients (see the sketch after this list).
• AI hiring tools can perpetuate bias by training on historically discriminatory data, reducing trust among candidates.
• Dark patterns in UI design, when combined with AI, may manipulate users, especially those in vulnerable groups, into unintended decisions.
• Deepfakes, created via AI neural networks, pose serious risks of misinformation, reputational harm, and psychological damage.
• Facial recognition software shows significant racial and gender bias, performing worst on darker-skinned females, as shown by Joy Buolamwini's research.
• Biases in AI extend beyond technical factors to include systemic and human biases embedded in organizations and data collection.
• Ethical failures like the Tuskegee Syphilis Study highlight the urgency of integrating ethics and do-no-harm principles into AI research and design.
• Women and minorities remain vastly underrepresented in technical AI roles, limiting the diversity of perspectives in AI development.
• Emerging regulations like GDPR and CPRA hold organizations accountable for ethical data use, reinforcing the responsibility of AI practitioners.
Notable Quotes
"Imagine going to the hospital for an emergency and the doctor doesn’t believe you’re in pain because AI says so."
"Algorithms that adjust care based on race don’t actually help and embed disparities instead."
"Dark patterns are not new in UX, but AI amplifies their impact by deceiving users in new ways."
"Deepfakes can disrupt democratic elections and inflict lasting psychological harm on victims."
"Biases out of sight are biases out of mind — facial recognition misidentifies Black faces leading to false arrests."
"All commercially available facial recognition software perform worse on darker females."
"Doing no harm is a core tenet of user experience research, whether or not you are trained in IRB processes."
"Only 12% of AI researchers globally are women, and 6% of professional software developers are women of color."
"Policy like the California Privacy Rights Act requires us to explain data use and comply with user deletion requests."
"The best transformation happens by asking the right questions, even when the problem feels overwhelming."