This video is featured in the AI and UX playlist.
Summary
AI-enabled systems that are responsible and human-centered can be powerful partners to humans. That partnership depends on building systems that people are willing to be responsible for and that are trustworthy to those who use them. Carol shares guidance, grounded in UX research, for operationalizing the work of making responsible, human-centered AI systems. She presents methods UX teams can use to identify bias, prevent harm, and support human-machine teaming by designing interactions that provide appropriate evidence of system capabilities and integrity. Once these dynamic systems are out in the world, critical oversight activities are needed for AI systems to remain effective. This session introduces each of these complex topics and provides references for further exploration.
Key Insights
• Responsible AI systems require humans to retain ultimate responsibility and control.
• Bias in AI data is inevitable, but awareness and mitigation of harmful bias is crucial.
• Human-machine teaming must be designed with clear responsibilities and transparency.
• AI systems are dynamic and constantly evolving, making continuous oversight essential.
• Speculative exercises like 'What Could Go Wrong' support anticipating harms proactively.
• Calibrated trust in AI means users neither overtrust nor undertrust the system.
• Ethical frameworks such as the Three Q Do No Harm help plan for impact on vulnerable groups.
• Diverse teams improve innovation by being more aware of biases and ethical variation.
• UX practitioners should understand AI concepts to effectively contribute without needing deep technical skills.
• Designing safe AI includes making unsafe actions difficult and safe states easy to maintain.
Notable Quotes
"Responsible systems are systems that keep humans in control."
"Data is a function of our history; it reflects priorities, preferences, and prejudices."
"AI will ensure appropriate human judgment, not replace it."
"We want people to gain calibrated levels of trust, not overtrust or undertrust."
"If the system is not confident, it should transparently communicate that and hand off to humans."
"Ethical design is not superficial; if we don't ask the tough questions, who will?"
"We need to be uncomfortable and get used to asking hard questions about AI."
"Humans are still better at many activities and those strengths should be prioritized."
"Adopting technical ethics gives teams permission to question implications beyond opinions."
"These systems aren’t stable like old software; they change as data and models evolve."