This video is featured in the AI and UX playlist.
Summary
Artificial intelligence (AI) has graduated from science fiction to a commoditized widget, ready to snap into many processes of daily life. Enterprises of all maturity levels are therefore increasingly eager to explore AI’s role in their innovation, and in some cases survival, strategies. At the same time, ethical and responsible development and deployment of AI-based solutions is becoming critical for safety and fairness. Ensuring that AI proliferates along the right path will require infusing multi-faceted research activities across the entire AI lifecycle. This presentation discusses the challenges and opportunities around this topic.
Key Insights
- 70% of enterprise AI projects show little to no business impact, and nearly 90% of data science projects fail to reach production.
- AI bias, particularly intersectional bias in facial recognition, remains a critical unresolved challenge, exemplified by the Gender Shades project.
- Black-box AI decision-making hinders stakeholder trust and adoption, due to AI’s probabilistic nature and complexity.
- AI development teams are overly engineer-centric, lacking the product researchers and ethicists needed to address societal and user-centered concerns.
- MLOps, adapted from DevOps, offers governance and accountability frameworks but currently remains engineer-focused.
- Expanding MLOps to include human-centered researchers can improve AI explainability, trustworthiness, and fairness.
- Visualization research is essential for analyzing and interpreting high-dimensional AI data and uncovering hidden biases across intersectional subgroups (see the sketch after this list).
- AI explainability requires moving beyond feature importance toward causal reasoning and natural language explanations accessible to non-technical stakeholders.
- AI trust is evolving and hinges on AI’s ability to provide convincing, interpretable answers that humans can understand and scrutinize in dialogue form.
- Humanizing AI is not simply building human-like interfaces but creating governance frameworks that democratize responsible AI development.
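To make the intersectional-bias point above concrete, here is a minimal illustrative sketch (not from the talk) of how an evaluation might break error rates out by subgroup rather than reporting a single aggregate number, in the spirit of the Gender Shades analysis. The column names and toy data are hypothetical placeholders for whatever a real evaluation set would provide.

```python
# Illustrative sketch (not from the talk): surfacing error-rate gaps across
# intersectional subgroups instead of reporting one aggregate metric.
# The columns "gender", "skin_type", "y_true", and "y_pred" are hypothetical.
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Return the misclassification rate for every gender x skin-type subgroup."""
    df = df.assign(error=(df["y_true"] != df["y_pred"]).astype(float))
    return (
        df.groupby(["gender", "skin_type"])["error"]
          .agg(error_rate="mean", n="size")
          .reset_index()
          .sort_values("error_rate", ascending=False)
    )

# Toy data: the aggregate error rate looks modest (~17%), yet one
# intersectional subgroup (female, darker skin) fares far worse than the rest.
toy = pd.DataFrame({
    "gender":    ["female", "female", "male", "male"] * 3,
    "skin_type": ["darker", "lighter", "darker", "lighter"] * 3,
    "y_true":    [1, 1, 0, 0] * 3,
    "y_pred":    [0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0],
})
print(subgroup_error_rates(toy))
```

The design point is that a single accuracy number can mask exactly the disparities the talk describes; disaggregating by intersectional subgroup is what makes such bias visible and actionable.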
Notable Quotes
"AI development is often uninformed and hurried, resulting in deployments that don’t operate well in the real world."
"Humanizing AI means creating governance frameworks that involve a broad array of research competencies for democratizing safe and effective AI."
"Almost 90% of data science projects do not make it into production—they die on the vine."
"Black box decision making is a hallmark problem—information goes in, something comes out, but we have no clue why."
"Bias is fueled by over-engineering without enough participation from non-technical roles that could reduce it."
"The Gender Shades project exposed how facial recognition algorithms had up to a 33% error rate disparity between demographic groups."
"ML ops offers governance, accountability, and a clear stakeholder responsibility framework borrowed from DevOps."
"We want to increase trust and engagement among end users by helping non-technical stakeholders participate in model evaluation."
"Explainability metrics like trustworthiness and understandability are hard, open research problems needing AI-HCI collaboration."
"AI trust will grow when AI can provide back-and-forth justifications like a human would in conversation."