Summary
Join us for a different type of Quant vs. Qual discussion: instead of discussing how data science and quantitative research methods can power UX research and design, we're going to talk about designing enterprise data products and tools that put ML and analytics into the hands of users. Does this call for new, different, or modified approaches to UX research and design? Or do these technologies have nothing to do with how we approach design for data products?

The session's host is Brian T. O'Neill, also the host of the Experiencing Data podcast and founder of Designing for Analytics, an independent consultancy that helps data product leaders use design-driven innovation to deliver better ML and analytics user experiences. In this session, we'll share some rapid [slides-optional] anecdotes and stories from attendees and then open the conversation to everyone. We hope to get perspectives both from enterprise data teams doing "internal" data analytics or ML/AI solutions development and from software/tech companies that offer data-related platform tools, intelligence/SaaS products, BI/decision support solutions, etc. Slots are open to experienced UX practitioners as well as data science, analytics, and other technical participants who may have taken part in design or UX work with colleagues. Please share! If folks are too quiet in the session, you may be subject to a drum or tambourine solo from Brian. Nobody has all of this "figured out" yet, and experiments and trials are welcome.
Key Insights
- UX researchers in ML-heavy environments often face steep domain knowledge gaps, requiring strong interview facilitation to bridge communication.
- Designing for AI data products demands a shift from generic user interfaces to tools tailored for expert users who need explainability over polish.
- Cross-functional collaboration thrives on shared spaces that respect differing expertise while focusing on a unifying product goal.
- A decision culture, focused on how decisions are made, is a more useful lens than a vague "data culture" mindset when designing AI/ML products.
- Besides customers and stakeholders, the data labelers who curate training data form a critical, often overlooked human part of the ML ecosystem.
- Model interpretability can be more important than raw accuracy, because users must trust a system before they will adopt it and generate value.
- Prototyping data products requires believable data and benefits from testing edge cases, such as false positives and trust thresholds, early.
- Incremental learning loops that incorporate user feedback can improve models over time and should be designed into AI products.
- Contextualizing model outputs with business rules and the user's environment (e.g., seasonal factors) increases relevance and trust.
- No-code data science tools and scenario building help teams experiment with data products quickly despite typically slow model-training cycles.
Notable Quotes
"Smoke comes out of my ears when I'm talking to some of the more advanced applications in machine learning."
"Design is about creating that shared space where we look at problems through different lenses."
"Decision culture is a better lens than data culture for thinking about what decisions we're trying to facilitate with AI."
"Not everyone using machine learning is your user; think also about the people labeling the data feeding these models."
"If nobody uses this because they don’t trust it, it doesn’t matter how accurate the model is."
"You need to prototype with real or believable data so the data doesn’t change the context of use."
"Some things that look like blockers, like domain expertise gaps, actually force you to become a better interviewer."
"We’re not always trying to use machine learning everywhere—sometimes the answer is to ignore it."
"Users want to understand why this happened, what will happen, and how can we make it happen—explainability is critical."
"The presenting problem might be a dashboard with a score, but the real need is deciding what to do with that information."