Summary
AI-based modeling plays an ever-increasing role in our daily lives. Our personal data and preferences are being used to build models that serve up anything from movie recommendations to medical services with a precision designed to make life decisions easier. But what happens when those models are not precise enough, or are unusable because of incomplete data, incorrect interpretations, or questionable ethics? What's more, how do we reconcile the dynamic and iterative nature of AI models (which are developed by a few) with the expansive nature of the human experience (which is defined by many) so that we optimize and scale UX for AI applications equitably? This talk will push our understanding of the sociotechnical systems that surround AI technologies, including those developing the models (i.e., model creators) and those for whom the models are created (i.e., model customers). What considerations should we keep top of mind? How might the research practice evolve to help us critique and deliver inclusive and responsible AI systems?
Key Insights
• AI systems in healthcare often embed racial biases that exacerbate disparities, particularly impacting Black and Hispanic populations.
• Algorithms based on past healthcare spending can unfairly dictate future levels of care, disadvantaging poorer patients.
• AI hiring tools can perpetuate bias by training on historically discriminatory data, reducing trust among candidates.
• Dark patterns in UI design, when combined with AI, may manipulate users, especially those in vulnerable groups, into unintended decisions.
• Deepfakes, created via AI neural networks, pose serious risks of misinformation, reputational harm, and psychological damage.
• Facial recognition software shows significant racial and gender bias, performing worst on darker-skinned females, as demonstrated by Joy Buolamwini's research.
• Biases in AI extend beyond technical factors to include systemic and human biases embedded in organizations and data collection.
• Ethical failures like the Tuskegee Syphilis Study highlight the urgency of integrating ethics and do-no-harm principles into AI research and design.
• Women and minorities remain vastly underrepresented in technical AI roles, limiting the diversity of perspectives in AI development.
• Emerging regulations like GDPR and CPRA hold organizations accountable for ethical data use, reinforcing the responsibility of AI practitioners.
Notable Quotes
"Imagine going to the hospital for an emergency and the doctor doesn’t believe you’re in pain because AI says so."
"Algorithms that adjust care based on race don’t actually help and embed disparities instead."
"Dark patterns are not new in UX, but AI amplifies their impact by deceiving users in new ways."
"Deepfakes can disrupt democratic elections and inflict lasting psychological harm on victims."
"Biases out of sight are biases out of mind — facial recognition misidentifies Black faces leading to false arrests."
"All commercially available facial recognition software perform worse on darker females."
"Doing no harm is a core tenet of user experience research, whether or not you are trained in IRB processes."
"Only 12% of AI researchers globally are women, and 6% of professional software developers are women of color."
"Policy like the California Privacy Rights Act requires us to explain data use and comply with user deletion requests."
"The best transformation happens by asking the right questions, even when the problem feels overwhelming."