Summary
AI-based modeling plays an ever-increasing role in our daily lives. Our personal data and preferences are used to build models that serve up everything from movie recommendations to medical services, with a precision designed to make life decisions easier. But what happens when those models are not precise enough, or are unusable because of incomplete data, incorrect interpretations, or questionable ethics? What's more, how do we reconcile the dynamic and iterative nature of AI models (which are developed by a few) with the expansive nature of the human experience (which is defined by many), so that we optimize and scale UX for AI applications equitably? This talk will push our understanding of the sociotechnical systems that surround AI technologies, including those developing the models (i.e., model creators) and those for whom the models are created (i.e., model customers). What considerations should we keep top of mind? How might the research practice evolve to help us critique and deliver inclusive and responsible AI systems?
Key Insights
- AI systems can unintentionally amplify racial biases in healthcare and hiring processes.
- The healthcare system's reliance on algorithmic scoring can disproportionately harm Black and Hispanic patients.
- Historically biased training data leads AI algorithms to favor certain demographic groups, perpetuating inequality.
- Dark patterns in UX design can mislead users, taking advantage of their trust and leading to uninformed decision-making.
- Deepfakes pose a significant threat by distorting facts, misrepresenting people, and potentially influencing democratic processes.
- Human biases can still affect AI outputs despite the technology's reliance on structured data.
- Research shows that job candidates often distrust AI in hiring decisions compared with human judgment.
- There is a lack of comprehensive research into the implications of AI bias in hiring practices.
- The history of unethical research, like the Tuskegee study, underscores the need for human subject protections in technology.
- Ethics must be a core consideration in UX research and AI system design to prevent harm.
Notable Quotes
"We have a multitude of data points and factors that go into how our AI models are crafted."
"Racially correlated health data has become commonplace, and clinical algorithms that adjust for race aren't very helpful."
"The intention behind implementing AI in hiring processes is often to increase efficiency and objectivity, but these systems can inadvertently perpetuate bias."
"Dark patterns make it difficult for people to make the right decisions, leading to exploitation."
"Facial recognition software can lead to false arrests because algorithms can't distinguish between Black faces."
"There's more to biases in AI than what's immediately visible; akin to an iceberg."
"Ethical mishandling in research informs the creation of institutional review boards to protect participants."
"We must ensure appropriate representation of consumer experiences when designing AI systems."
"It's our responsibility as researchers to do no harm and to raise alarms when necessary."
"Asking the right questions can transform the work in ways that otherwise would not be possible."