This video is featured in the Evals + Claude playlist.
Summary
If you’re a product manager, UX researcher, or any kind of designer involved in creating an AI product or feature, you need to understand evals. A great way to learn is with a hands-on example. In this talk, Peter Van Dijck of the Helpful Intelligence company walks you through writing your first eval. You will learn the basic concepts and tools, and write an eval together. The talk is hands-on; you can follow along, and there will be plenty of time for questions. You will come away understanding the basic building blocks of AI evals, with the confidence that you know how to write one. More importantly, you’ll build some intuition, some product sense, for how the best AI products today are built, and how that can help you use them more effectively yourself.
Key Insights
• Evals consist of a task, a golden dataset with known correct outputs, and an evaluator that measures correctness (a minimal sketch follows this list).
• Manual AI prompt testing is slow and inconsistent; automated evals make evaluation faster and repeatable at scale.
• UX and product teams can and should learn evals as a practical, non-technical skill.
• Creating your own golden dataset is essential and cannot be outsourced or fully automated.
• Models are fixed once trained; improvements come from refining prompts and context design, not from retraining the model.
• Evals measure task performance, not the underlying model itself, which makes it possible to compare models on the same task.
• Asking a model to output a confidence score is unreliable, because the model has no internal memory and interprets scales inconsistently.
• Biases are baked into models during training, via the evals used in post-training refinement.
• LLMs can be used to judge other LLMs’ outputs, which makes it possible to evaluate tasks with non-binary answers.
• Effective eval work requires collaboration across data analysts, engineers, subject matter experts, and UX/product teams.
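To make the first insight concrete, here is a minimal sketch of the three building blocks in Python. It is not code from the talk: the dataset, prompts, and the call_model() helper are illustrative placeholders you would replace with your own task and your provider's SDK.

```python
# A minimal eval: golden dataset + task + evaluator.
# call_model() is a stub standing in for whatever LLM SDK you use.
from dataclasses import dataclass


@dataclass
class Example:
    input: str      # the task input
    expected: str   # the known-correct ("golden") output


# 1. Golden dataset: inputs with answers you have verified by hand.
GOLDEN_DATASET = [
    Example("What is the capital of France?", "Paris"),
    Example("What is 2 + 2?", "4"),
]


def call_model(prompt: str) -> str:
    """Placeholder for a real model call; replace with your provider's API."""
    return "Paris"  # canned response so the script runs end to end


# 2. Task: how you actually prompt the model for this job.
def run_task(example: Example) -> str:
    prompt = f"Answer concisely.\n\nQuestion: {example.input}\nAnswer:"
    return call_model(prompt).strip()


# 3. Evaluator: a rule that scores each output against the golden answer.
def exact_match(output: str, expected: str) -> bool:
    return output.lower() == expected.lower()


def llm_judge(output: str, expected: str) -> bool:
    """For non-binary answers, the evaluator can itself be an LLM:
    ask a model whether the candidate matches the golden answer.
    Hypothetical helper; wire it to a real SDK like call_model()."""
    verdict = call_model(
        f"Golden answer: {expected}\nCandidate answer: {output}\n"
        "Reply YES if the candidate is correct, otherwise NO."
    )
    return verdict.strip().upper().startswith("YES")


def run_eval() -> float:
    passed = sum(exact_match(run_task(ex), ex.expected) for ex in GOLDEN_DATASET)
    score = passed / len(GOLDEN_DATASET)
    print(f"{passed}/{len(GOLDEN_DATASET)} passed ({score:.0%})")
    return score


if __name__ == "__main__":
    run_eval()
```

Re-running a loop like this after every prompt or context change is what replaces slow manual spot-checking; swapping exact_match for an LLM-judge evaluator is how tasks without a single correct answer can still be scored.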
Notable Quotes
"Evals are like a way to define what good looks like."
"The model was baked and once it’s baked, it does not learn again until they bake a new one."
"You need to be looking at the data. Nobody wants to, but that’s core work."
"Without a golden dataset, you have to build the golden dataset yourself."
"We’re not teaching the model anything; we’re improving our prompts and context."
"Confidence scores from the model are not a good idea because the model has no memory."
"Biases are baked in through the evals used during model training and post-training."
"LLMs judging other LLMs might sound crazy, but if you do it right, it works."
"Evals are a product and UX skill; learning them lets you make these systems do what you want."
"There is a large and growing capability overhang in these models we haven’t discovered yet."