Summary
In an increasingly AI-driven world, the responsible and ethical implementation of technology is paramount. In this session, we dive into the crucial role of DesignOps practitioners in driving ethical AI practices. We'll tackle the challenge of ensuring AI systems align with user values, respect privacy, and avoid bias, while unleashing their potential for innovation.
As a UX strategist and DesignOps practitioner, I understand the significance of integrating ethical considerations into AI development, and I bring a unique perspective on how DesignOps can shape the future of AI by fostering responsible innovation. This session challenges the status quo by highlighting the intersection of DesignOps and ethics, advancing the conversation in our field and sparking thought-provoking discussion.
Attendees will gain valuable insights into the role of DesignOps in navigating the ethical landscape of AI. They will learn practical strategies and best practices for integrating ethical frameworks into their AI development processes. Through real-world examples and case studies, they will be inspired to push the boundaries of responsible AI and make a positive impact in their organizations.
Join me in this session to chart the course for ethical AI, challenge conventional thinking, and explore the immense potential of DesignOps in driving responsible innovation.
Key Insights
• AI tech debt compounds exponentially, making rushed releases far more damaging than traditional tech debt.
• Faulty AI bias can cause severe real-world harm, like the wrongful arrest illustrated by Robert Williams' story.
• DesignOps leaders must act as 'party planners' to ensure diverse, multidisciplinary teams are involved in AI development.
• Multidisciplinary teams should include legal experts, machine learning engineers, UX researchers, domain experts, business analysts, data scientists, and ethicists.
• AI datasets often inherit societal biases, as revealed by MidJourney's predominantly white, stereotyped image outputs.
• Key ethical AI questions include verifying data origins, bias testing, and ongoing monitoring mechanisms.
• Ethical prototyping requires simulating AI behavior against varied user personas and challenging scenarios (see the sketch after this list).
• Ethical stress testing evaluates AI responses in morally complex situations, such as autonomous vehicle dilemmas.
• AI must be iterated on ethically and continuously to prevent degradation and the incorporation of biased or untrusted inputs.
• Advocating for inclusion and ethical data use requires persistent escalation, especially in engineering-led organizations.
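To make the ethical-prototyping idea above more concrete, here is a minimal sketch of running the same challenging scenario against several user personas so a multidisciplinary team can compare outputs side by side. The personas, the scenario, and the generate_response function are illustrative assumptions, not material from the session; swap in the real AI system under review and the personas your research team maintains.

```python
# Minimal sketch of persona-based ethical prototyping (illustrative only).
# The personas, scenario, and generate_response are hypothetical stand-ins.

PERSONAS = [
    {"name": "Aisha", "age": 67, "locale": "en-NG", "assistive_tech": "screen reader"},
    {"name": "Marta", "age": 34, "locale": "es-MX", "assistive_tech": None},
    {"name": "Robert", "age": 42, "locale": "en-US", "assistive_tech": None},
]

SCENARIO = "Summarize the status of the user's loan application."

def generate_response(persona: dict, scenario: str) -> str:
    """Stand-in for the real AI call; replace with the system under review."""
    return f"{persona['name']}, your application is in review."

def collect_outputs(personas: list, scenario: str) -> dict:
    """Run the same scenario for every persona so reviewers can compare
    tone, level of detail, and refusal behavior side by side."""
    return {p["name"]: generate_response(p, scenario) for p in personas}

if __name__ == "__main__":
    for name, text in collect_outputs(PERSONAS, SCENARIO).items():
        print(f"{name}: {text}")
```

The value is less in the code than in the ritual it supports: every persona sees the same scenario, and a multidisciplinary group reviews the outputs for disparities before anything ships.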
Notable Quotes
"AI tech debt has compounding interest to it."
"Rushing to market with AI solutions can irreparably damage not only your product but your entire brand."
"We are the solution to preventing harmful AI outcomes like Robert's wrongful arrest."
"Your role is to ensure that the right people are at the party — a multidisciplinary team."
"MidJourney’s dataset reflects stereotyped images because it’s based on internet image results without specific instruction."
"It is not our job to know all the answers, but to make sure the right questions are asked."
"Ethical stress testing subjects AI to hypothetical morally challenging scenarios to ensure alignment with ethical norms."
"AI learns from the world, sometimes from untrusted sources, so it needs continual ethical iteration."
"You can’t put the toothpaste back in the tube once biased AI harms your brand or users."
"Embrace the role of party planner with your expertise to shape ethical AI innovation."