This video is featured in the AI and UX playlist.
Summary
Artificial intelligence (AI) has graduated from science fiction to a commoditized widget, ready to snap into many processes of daily life. Enterprises of all maturity levels are therefore increasingly eager to explore AI's role in their innovation, or outright survival, strategies. At the same time, the ethical and responsible development and deployment of AI-based solutions will become increasingly critical for safety and fairness. Ensuring that AI proliferates along the right path will require infusing multi-faceted research activities along the entire AI lifecycle. This presentation discusses the challenges and opportunities in this area.
Key Insights
- 70% of enterprise AI projects show little to no business impact, and nearly 90% of data science projects fail to reach production.
- AI bias, particularly intersectional bias in facial recognition, remains a critical unresolved challenge, exemplified by the Gender Shades project.
- Black-box AI decision-making hinders stakeholder trust and adoption, due to AI's probabilistic nature and complexity.
- AI development teams are overly engineer-centric, lacking the product researchers and ethicists needed to address societal and user-centered concerns.
- ML ops, adapted from DevOps, offers governance and accountability frameworks but currently remains engineer-focused.
- Expanding ML ops to include human-centered researchers can improve AI explainability, trustworthiness, and fairness.
- Visualization research is essential to analyze and interpret high-dimensional AI data and uncover hidden biases across intersectional subgroups.
- AI explainability requires moving beyond feature importance towards causal reasoning and natural language explanations accessible to non-technical stakeholders.
- AI trust is evolving and hinges on AI's ability to provide convincing, interpretable answers that humans can understand and scrutinize in dialogue form.
- Humanizing AI is not simply building human-like interfaces but creating governance frameworks that democratize responsible AI development.
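The intersectional-bias analysis mentioned above (as in the Gender Shades project) boils down to computing error rates per intersection of demographic attributes rather than per attribute alone. A minimal sketch, using an entirely hypothetical set of classifier results (the field names and data are illustrative, not from the talk):

```python
from collections import defaultdict

# Hypothetical classifier results: each record notes the subject's
# demographic attributes and whether the prediction was correct.
predictions = [
    {"gender": "female", "skin": "darker",  "correct": False},
    {"gender": "female", "skin": "darker",  "correct": True},
    {"gender": "female", "skin": "lighter", "correct": True},
    {"gender": "male",   "skin": "darker",  "correct": True},
    {"gender": "male",   "skin": "lighter", "correct": True},
    {"gender": "male",   "skin": "lighter", "correct": True},
]

def error_rates_by_subgroup(records):
    """Group records by the intersection of attributes and compute error rates."""
    totals = defaultdict(lambda: [0, 0])  # subgroup -> [errors, count]
    for r in records:
        key = (r["gender"], r["skin"])
        totals[key][0] += 0 if r["correct"] else 1
        totals[key][1] += 1
    return {k: errs / n for k, (errs, n) in totals.items()}

rates = error_rates_by_subgroup(predictions)
for subgroup, rate in sorted(rates.items()):
    print(subgroup, f"{rate:.0%}")
```

Aggregate accuracy on this toy data looks fine, yet the (female, darker) subgroup carries all the errors, which is exactly the kind of disparity that per-attribute reporting hides.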
Notable Quotes
"AI development is often uninformed and hurried, resulting in deployments that don’t operate well in the real world."
"Humanizing AI means creating governance frameworks that involve a broad array of research competencies for democratizing safe and effective AI."
"Almost 90% of data science projects do not make it into production—they die on the vine."
"Black box decision making is a hallmark problem—information goes in, something comes out, but we have no clue why."
"Bias is fueled by over-engineering without enough participation from non-technical roles that could reduce it."
"The Gender Shades project exposed how facial recognition algorithms had up to a 33% error rate disparity between demographic groups."
"ML ops offers governance, accountability, and a clear stakeholder responsibility framework borrowed from DevOps."
"We want to increase trust and engagement among end users by helping non-technical stakeholders participate in model evaluation."
"Explainability metrics like trustworthiness and understandability are hard, open research problems needing AI-HCI collaboration."
"AI trust will grow when AI can provide back-and-forth justifications like a human would in conversation."