This video is featured in the AI and UX playlist.
Summary
Artificial intelligence (AI) has graduated from science fiction to commoditized widget form, ready to snap into many processes of daily life. Enterprises of all maturity levels are therefore increasingly eager to explore AI's role in their innovation, and even survival, strategies. At the same time, ethical and responsible development and deployment of AI-based solutions is becoming critical for safety and fairness. Ensuring that AI proliferates along the right path will require infusing multi-faceted research activities along the entire AI lifecycle. This presentation discusses the challenges and opportunities surrounding this topic.
Key Insights
- 70% of enterprise AI projects show little to no business impact, and nearly 90% of data science projects fail to reach production.
- AI bias, particularly intersectional bias in facial recognition, remains a critical unresolved challenge, exemplified by the Gender Shades project.
- Black-box AI decision-making hinders stakeholder trust and adoption, due to AI's probabilistic nature and complexity.
- AI development teams are overly engineer-centric, lacking the product researchers and ethicists needed to address societal and user-centered concerns.
- MLOps, adapted from DevOps, offers governance and accountability frameworks but currently remains engineer-focused.
- Expanding MLOps to include human-centered researchers can improve AI explainability, trustworthiness, and fairness.
- Visualization research is essential to analyze and interpret high-dimensional AI data and uncover hidden biases across intersectional subgroups.
- AI explainability requires moving beyond feature importance toward causal reasoning and natural-language explanations accessible to non-technical stakeholders.
- AI trust is evolving and hinges on AI's ability to provide convincing, interpretable answers that humans can understand and scrutinize in dialogue form.
- Humanizing AI is not simply building human-like interfaces but creating governance frameworks that democratize responsible AI development.
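The intersectional-bias insight above can be made concrete with a small sketch: disaggregating a model's error rate by subgroup, in the spirit of the Gender Shades audit. This is an illustration only, not code from the talk; the records, labels, and numbers below are invented for the example.

```python
# Illustrative sketch (not from the presentation): computing per-subgroup
# error rates for a classifier. Aggregate accuracy can look fine while
# intersectional subgroups (e.g. gender x skin tone) fare much worse.
from collections import defaultdict

# Each record: (gender, skin_tone, predicted_label, true_label) -- toy data.
records = [
    ("female", "darker", 0, 1),
    ("female", "darker", 1, 1),
    ("female", "darker", 0, 1),
    ("female", "lighter", 1, 1),
    ("female", "lighter", 1, 1),
    ("male", "darker", 1, 1),
    ("male", "darker", 1, 1),
    ("male", "lighter", 1, 1),
    ("male", "lighter", 0, 1),
]

def subgroup_error_rates(records):
    """Return the error rate for each (gender, skin_tone) subgroup."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for gender, tone, predicted, actual in records:
        key = (gender, tone)
        totals[key] += 1
        if predicted != actual:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

rates = subgroup_error_rates(records)
for key, rate in sorted(rates.items()):
    print(key, round(rate, 2))
```

On this toy data the overall error rate hides a large disparity between subgroups, which is exactly the kind of gap the visualization and auditing research described above aims to surface.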
Notable Quotes
"AI development is often uninformed and hurried, resulting in deployments that don’t operate well in the real world."
"Humanizing AI means creating governance frameworks that involve a broad array of research competencies for democratizing safe and effective AI."
"Almost 90% of data science projects do not make it into production—they die on the vine."
"Black box decision making is a hallmark problem—information goes in, something comes out, but we have no clue why."
"Bias is fueled by over-engineering without enough participation from non-technical roles that could reduce it."
"The Gender Shades project exposed how facial recognition algorithms had up to a 33% error rate disparity between demographic groups."
"ML ops offers governance, accountability, and a clear stakeholder responsibility framework borrowed from DevOps."
"We want to increase trust and engagement among end users by helping non-technical stakeholders participate in model evaluation."
"Explainability metrics like trustworthiness and understandability are hard, open research problems needing AI-HCI collaboration."
"AI trust will grow when AI can provide back-and-forth justifications like a human would in conversation."