Rosenverse

Garbage in, garbage out? Measuring error rates to get ready for AI
Thursday, January 8, 2026 • Rosenfeld Community
Speakers: Caroline Jarrett

Summary

We’re all aware of a big push to implement AI everywhere, including in the services that many of us are working on. It seems only fair to give the AI some good-quality input in the hope of getting decent output from it. Or, being more pessimistic: we probably expect some level of errors from the AI, but what do we know about the error rates in what we’re putting into it? In this session, we will compare our ideas on identifying errors and measuring error rates, thinking about errors in six ways:

1) Problems along the way
2) Wrong result
3) Unnecessary action
4) Delayed-impact problem
5) Non-uptake or over-uptake
6) Technology problem

We’ll wrap up with “tips and next steps”: an opportunity to consider what we now need to find out or do differently.

Key Insights

  • Errors in data collection and user input are foundational issues that compromise AI and service outcomes.

  • Users often 'fudge' answers due to ambiguous questions, privacy concerns, or to achieve a desired outcome.

  • Non-uptake, where users abandon a form or process, is a major source of error but is rarely published or measured.

  • Mistakes can be categorized as problems along the way, wrong results, unnecessary actions, and delayed impact issues.

  • Creation of multiple accounts often happens because users forget they already have one, leading to duplicate data and service inefficiencies.

  • Measuring error rates is complex; different metrics (per person, per attempt, completion vs. start) yield different perspectives.

  • Elections provide a useful model for measuring data quality, using turnout, participation, and eligibility rates.

  • Data quality deteriorates over time due to changes like moving, name changes, loss of documents, or organizational restructuring.

  • AI initiatives can provide a compelling rationale and funding opportunity for improving longstanding data quality problems.

  • Frameworks like the UK Government Data Quality Framework help organizations systematically assess and address data issues.
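The point above about metrics depending on the denominator can be illustrated with a small sketch. All numbers and names here are invented for illustration; they are not from the talk.

```python
# Hypothetical form-attempt log: the same events produce different
# "error rates" depending on whether we count per attempt, per person,
# or against completions vs starts.
attempts = [
    # (person_id, completed, had_error) — one row per attempt
    ("a", True,  False),
    ("a", True,  True),   # person "a" retried and made an error
    ("b", False, True),   # abandoned partway: non-uptake
    ("c", True,  False),
]

starts = len(attempts)
completions = sum(1 for _, done, _ in attempts if done)
errors = sum(1 for _, _, err in attempts if err)
people = {pid for pid, _, _ in attempts}
people_with_error = {pid for pid, _, err in attempts if err}

print(f"errors per attempt:     {errors / starts:.0%}")                      # 50%
print(f"errors per person:      {len(people_with_error) / len(people):.0%}") # 67%
print(f"completion (of starts): {completions / starts:.0%}")                 # 75%
```

Each rate is "true", yet they suggest different stories — which is why it matters to state the denominator whenever an error rate is reported.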

Notable Quotes

"If we get garbage in, we get garbage out — this is true for AI as much as for surveys or forms."

"People can make all sorts of inventive mistakes on their forms that AI struggles to interpret."

"Sometimes a form forces you into a wrong answer by giving inappropriate options."

"I’ve seen people fudge their date of birth so their child can attend a summer camp they aren’t technically eligible for."

"A major error in many services is users creating multiple accounts because they can’t find or reuse existing ones."

"An error might not be immediate; data can be fine when collected but deteriorate over time and cause problems later."

"Completion rates (conversion rates) and dropout rates are simple metrics but often not tracked or shared."

"Organizations rarely know their error rates, which limits their ability to improve user experience or data accuracy."

"Linking data quality efforts to AI initiatives can help secure attention and budget for necessary improvements."

"Data quality involves accuracy, completeness, uniqueness, timeliness, and representativeness—not just error reduction."
