Summary
AI-based modeling plays an ever-increasing role in our daily lives. Our personal data and preferences are being used to build models that serve up everything from movie recommendations to medical services, with a precision designed to make life decisions easier. But what happens when those models are not precise enough, or are unusable because of incomplete data, incorrect interpretations, or questionable ethics? What's more, how do we reconcile the dynamic and iterative nature of AI models (which are developed by a few) with the expansive nature of the human experience (which is defined by many) so that we optimize and scale UX for AI applications equitably? This talk will push our understanding of the sociotechnical systems that surround AI technologies, including those developing the models (i.e., model creators) and those for whom the models are created (i.e., model customers). What considerations should we keep top of mind? How might the research practice evolve to help us critique and deliver inclusive and responsible AI systems?
Key Insights
- AI systems can unintentionally amplify racial biases in healthcare and hiring processes.
- The healthcare system's reliance on algorithmic scoring can disproportionately harm Black and Hispanic patients.
- Historically biased training data leads AI algorithms to favor certain demographic groups, perpetuating inequality.
- Dark patterns in UX design can mislead users, taking advantage of their trust and leading to uninformed decision-making.
- Deepfakes pose a significant threat by distorting facts, misrepresenting people, and potentially influencing democratic processes.
- Human biases can still affect AI outputs despite the technology's reliance on structured data.
- Research shows that job candidates often distrust AI in hiring decisions, preferring human judgment.
- There is a lack of comprehensive research into the implications of AI bias in hiring practices.
- The history of unethical research, like the Tuskegee study, underscores the need for human subject protections in technology.
- Ethics must be a core consideration in UX research and AI system design to prevent harm.
Notable Quotes
"We have a multitude of data points and factors that go into how our AI models are crafted."
"Racially correlated health data has become commonplace and clinical algorithms that adjust for race aren't very helpful."
"The intention behind implementing AI in hiring processes is often to increase efficiency and objectivity but these systems can inadvertently perpetuate bias."
"Dark patterns make it difficult for people to make the right decisions, leading to exploitation."
"Facial recognition software can lead to false arrests because algorithms can't distinguish between black faces."
"There's more to biases in AI than what's immediately visible; akin to an iceberg."
"Ethical mishandling in research informs the creation of institutional review boards to protect participants."
"We must ensure appropriate representation of consumer experiences when designing AI systems."
"It's our responsibility as researchers to do no harm and to raise alarms when necessary."
"Asking the right questions can transform the work in ways that otherwise would not be possible."
















More Videos

"Collaboration is crucial to navigate ambiguity in government service design projects."
Ed MullenDesigning the Unseen: Enabling Institutions to Build Public Trust
November 16, 2022

"I think the most effective strategic research we can do is work that provides an occasion for expanding our organization's fields of future possibility."
Peter LevinSolve a Problem Here, Transform a Strategy There: Research as an Occasion for Expanding Organizational Possibility
March 25, 2024

"OKRs can suck the air out of the room if top-down direction is enforced."
Bria Alexander Benson Low Natalya Pemberton Stephanie GoldthorpeOKRs—Helpful or Harmful?
January 20, 2022

"This power of telepathy has enabled proactive solutions like expanded resource capacity planning"
Kate Koch Prateek KalliFlex Your Super Powers: When a Design Ops Team Scales to Power CX
September 30, 2021

"Our lenders want the best experience for our borrowers."
Amy MarquezINVEST: Discussion
June 15, 2018

"20% of your time is for you to do whatever you want with it."
Brenna FallonLearning Over Outcomes
October 24, 2019

"You need to give people space to do their jobs without the expectation to overachieve."
Brian MossWhat Does it Mean to be a Resilient Research Team?
March 9, 2022

"We're always gardening our Slack channels to reduce noise and enhance focus."
Adel Du ToitGet Your CFO To Say: 'Our Strategic Goal is User Obsession'
June 10, 2022

"We love to flex our superpowers, but we’re trying to use them for good and not evil."
Kate Koch Jean-Claire FitschenFlex Your Super Powers: When a Design Ops Team Scales to Power CX
September 29, 2021