Operationalizing Responsible, Human-Centered AI
Wednesday, June 7, 2023 • Enterprise UX 2023

This video is featured in the AI and UX playlist.

Speaker: Carol Smith

Summary

AI-enabled systems that are responsible and human-centered will be powerful partners to humans. Building systems that people are willing to be responsible for, and that are trustworthy to those who use them, enables that partnership. Carol will share guidance, grounded in UX research, for operationalizing the work of making responsible, human-centered AI systems. She will share methods UX teams can use to identify bias, prevent harm, and support human-machine teaming by designing interactions that present appropriate evidence of a system's capabilities and integrity. Once these dynamic systems are out in the world, critical oversight activities are needed for them to remain effective. This session will introduce each of these complex topics and provide references for further exploration of these exciting issues.

Key Insights

  • Responsible AI systems require humans to retain ultimate responsibility and control.

  • Bias in AI data is inevitable, but awareness and mitigation of harmful bias are crucial.

  • Human-machine teaming must be designed with clear responsibilities and transparency.

  • AI systems are dynamic and constantly evolving, making continuous oversight essential.

  • Speculative exercises like 'What Could Go Wrong' help teams anticipate harms proactively.

  • Calibrated trust in AI means users neither overtrust nor undertrust the system.

  • Ethical frameworks such as the Three Q Do No Harm help plan for impact on vulnerable groups.

  • Diverse teams improve innovation by being more aware of biases and ethical variation.

  • UX practitioners should understand AI concepts to effectively contribute without needing deep technical skills.

  • Designing safe AI includes making unsafe actions difficult and safe states easy to maintain.

Notable Quotes

"Responsible systems are systems that keep humans in control."

"Data is a function of our history; it reflects priorities, preferences, and prejudices."

"AI will ensure appropriate human judgment, not replace it."

"We want people to gain calibrated levels of trust, not overtrust or undertrust."

"If the system is not confident, it should transparently communicate that and hand off to humans."

"Ethical design is not superficial; if we don't ask the tough questions, who will?"

"We need to be uncomfortable and get used to asking hard questions about AI."

"Humans are still better at many activities and those strengths should be prioritized."

"Adopting technical ethics gives teams permission to question implications beyond opinions."

"These systems aren’t stable like old software; they change as data and models evolve."

