Summary
Join us for a different type of Quant vs. Qual discussion: instead of discussing how data science and quantitative research methods can power UX research and design, we're going to talk about designing enterprise data products and tools that put ML and analytics into the hands of users. Does this call for new, different, or modified approaches to UX research and design? Or do these technologies have nothing to do with how we approach design for data products? The session's host is Brian T. O'Neill, who is also the host of the Experiencing Data podcast and founder of Designing for Analytics, an independent consultancy that helps data product leaders use design-driven innovation to deliver better ML and analytics user experiences. In this session, we'll share some rapid [slides-optional] anecdotes and stories from the attendees and then open the conversation to everyone. We hope to get perspectives both from enterprise data teams doing "internal" data analytics or ML/AI solutions development and from software/tech companies that offer data-related platform tools, intelligence/SaaS products, BI/decision-support solutions, etc. Slots are open to experienced UX practitioners as well as data science, analytics, and other technical participants who may have participated in design or UX work with colleagues. Please share! If folks are too quiet in the session, you may be subject to a drum or tambourine solo from Brian. Nobody has all of this "figured out" yet, and experiments and trials are welcome.
Key Insights
- UX researchers in ML-heavy environments often face steep domain knowledge gaps, requiring strong interview facilitation to bridge communication.
- Designing for AI data products demands a shift from generic user interfaces to tools tailored for expert users who need explainability over polish.
- Cross-functional collaboration thrives by creating shared spaces that respect differing expertise while focusing on a unifying product goal.
- A decision culture focused on how decisions are made is more useful than a vague "data culture" mindset when designing AI/ML products.
- Besides customers and stakeholders, data labelers who curate training data form a critical, often overlooked human part of the ML ecosystem.
- Model interpretability can be more important than raw accuracy, because users must trust a system in order to adopt it and generate value.
- Prototyping data products requires believable data and can be facilitated by testing edge cases like false positives and trust thresholds early.
- Incremental learning loops involving user feedback can improve models over time and should be designed into AI products.
- Contextualizing model outputs with business rules and the user's environment (e.g., seasonal factors) increases relevance and trust.
- No-code data science tools and scenario building help teams experiment quickly with data products despite the usual slow model-training cycles.
Notable Quotes
"Smoke comes out of my ears when I'm talking to some of the more advanced applications in machine learning."
"Design is about creating that shared space where we look at problems through different lenses."
"Decision culture is a better lens than data culture for thinking about what decisions we're trying to facilitate with AI."
"Not everyone using machine learning is your user; think also about the people labeling the data feeding these models."
"If nobody uses this because they don’t trust it, it doesn’t matter how accurate the model is."
"You need to prototype with real or believable data so the data doesn’t change the context of use."
"Some things that look like blockers, like domain expertise gaps, actually force you to become a better interviewer."
"We’re not always trying to use machine learning everywhere—sometimes the answer is to ignore it."
"Users want to understand why this happened, what will happen, and how can we make it happen—explainability is critical."
"The presenting problem might be a dashboard with a score, but the real need is deciding what to do with that information."