Summary
In an increasingly AI-driven world, the responsible and ethical implementation of technology is paramount. This session examines the crucial role of DesignOps practitioners in driving ethical AI practices: ensuring AI systems align with user values, respect privacy, and avoid bias, while still unleashing their potential for innovation.

As a UX strategist and DesignOps practitioner, I understand the significance of integrating ethical considerations into AI development, and I bring a unique perspective on how DesignOps can shape the future of AI by fostering responsible innovation. This session challenges the status quo by highlighting the intersection of DesignOps and ethics, advancing the conversation in our field and sparking thought-provoking discussion.

Attendees will gain valuable insight into the role of DesignOps in navigating the ethical landscape of AI, and will learn practical strategies and best practices for integrating ethical frameworks into their AI development processes. Through real-world examples and case studies, they will be inspired to push the boundaries of responsible AI and make a positive impact in their organizations. Join me in this session to chart the course for ethical AI, challenge conventional thinking, and explore the immense potential of DesignOps in driving responsible innovation.
Key Insights
- AI tech debt compounds exponentially, making rushed releases far more damaging than traditional tech debt.
- Faulty AI bias can cause severe real-world harm, such as the wrongful arrest illustrated by Robert Williams' story.
- DesignOps leaders must act as "party planners" to ensure diverse, multidisciplinary teams are involved in AI development.
- Multidisciplinary teams should include legal experts, machine learning engineers, UX researchers, domain experts, business analysts, data scientists, and ethicists.
- AI datasets often inherit societal biases, as revealed by MidJourney's predominantly white, stereotyped image outputs.
- Key ethical AI questions include verifying data origins, bias testing, and ongoing monitoring mechanisms.
- Ethical prototyping requires simulating AI behavior against varied user personas and challenging scenarios.
- Ethical stress testing evaluates AI responses in morally complex situations, such as autonomous-vehicle dilemmas.
- AI must be iterated on ethically and continuously to prevent degradation and the incorporation of biased or untrusted inputs.
- Advocating for inclusion and ethical data use requires persistent escalation, especially in engineering-led organizations.
Notable Quotes
"AI tech debt has compounding interest to it."
"Rushing to market with AI solutions can irreparably damage not only your product but your entire brand."
"We are the solution to preventing harmful AI outcomes like Robert's wrongful arrest."
"Your role is to ensure that the right people are at the party — a multidisciplinary team."
"MidJourney’s dataset reflects stereotyped images because it’s based on internet image results without specific instruction."
"It is not our job to know all the answers, but to make sure the right questions are asked."
"Ethical stress testing subjects AI to hypothetical morally challenging scenarios to ensure alignment with ethical norms."
"AI learns from the world, sometimes from untrusted sources, so it needs continual ethical iteration."
"You can’t put the toothpaste back in the tube once biased AI harms your brand or users."
"Embrace the role of party planner with your expertise to shape ethical AI innovation."