
Operationalizing Responsible, Human-Centered AI
Wednesday, June 7, 2023 • Enterprise UX 2023
Speakers: Carol Smith

Summary

AI-enabled systems that are responsible and human-centered can be powerful partners to humans. That partnership depends on building systems that people are willing to be responsible for, and that are trustworthy to the people using them. Carol shares guidance, grounded in UX research, for operationalizing the work of making responsible, human-centered AI systems. She presents methods UX teams can use to identify bias, prevent harm, and support human-machine teaming, including interaction design that provides appropriate evidence of a system's capabilities and integrity. Because these systems are dynamic, critical oversight activities are needed after deployment for AI systems to remain effective. This session introduces each of these complex topics and provides references for further exploration.

Key Insights

  • Responsible AI systems must keep humans in control and prioritize user experience.

  • Human and machine teaming should be designed with transparency and mutual understanding in mind.

  • AI systems are only as good as the data they are trained on; bias in data must be recognized and mitigated.

  • Continuous oversight of AI systems is necessary to adapt to changing data and environments.

  • Human strengths should be prioritized, as there are still many areas where humans outperform AI.

  • Speculative design can help anticipate potential harms and prevent them before they occur.

  • Inclusive design practices should actively consider and aim to reduce disparities affecting marginalized groups.

  • Technical ethics should guide all teams in making decisions about AI systems.

  • Usability testing is crucial for uncovering vulnerabilities in AI interactions.

  • The design process must include explicit responsibilities for both humans and AI systems.

Notable Quotes

"Responsible systems are designed to work with and for people, providing trustworthy interactions."

"Data is not inherently neutral; it reflects the priorities and prejudices of those who create it."

"Making responsible human-centered AI involves understanding complexity, human-machine teaming, and continuous oversight."

"Speculation is key to keeping people safe and anticipating the non-obvious consequences of AI deployment."

"All systems have a form of bias due to the information curated by humans, and we must acknowledge this."

"AI systems must always be supervised by humans, who retain ultimate responsibility for decisions involved."

"Trust cannot be measured easily, as it is deeply personal and complex."

"Unplugging machines and maintaining ultimate human control is vital in AI system design."

"It’s crucial to design actions that lead to safe states to be easy, while unsafe actions should be harder to execute."

"Ethical design isn’t superficial; it requires deep inquiry into implications and societal impact."
