Rosenverse


AI in Real Life: Using LLMs to Turbocharge Microsoft Learn
Thursday, February 13, 2025 • Rosenfeld Community
Speakers: Sarah Barrett

Summary

Enthusiasm for AI tools, especially large language models like ChatGPT, is everywhere, but what does it actually look like to deliver large-scale, user-facing experiences using these tools in a production environment? Clearly they're powerful, but what do they need to work reliably and at scale? In this session, Sarah provides a perspective on some of the information architecture and user experience infrastructure organizations need to effectively leverage AI. She also shares three AI experiences currently live on Microsoft Learn:

  • An interactive assistant that helps users post high-quality questions to a community forum

  • A tool that dynamically creates learning plans based on goals the user shares

  • A training assistant that clarifies, defines, and guides learners while they study

Through lessons learned from shipping these experiences over the last two years, UXers, IAs, and PMs will come away with a better sense of what they might need to make these hyped-up technologies work in real life.

Key Insights

  • Most AI applications no longer require building foundation models from scratch; the focus is now on application development and integration.

  • Single, all-purpose chatbots ("everything chatbots") are insufficient because they must absorb high ambiguity and handle diverse, often complex tasks, which they do poorly.

  • Sarah introduces the ambiguity footprint as a framework to measure AI application complexity and risks across several axes such as task complexity, context, interface, prompt openness, and sensitivity.

  • AI features that support simple, complementary user tasks, rather than critical or complex ones, are easier and safer to build and scale.

  • Visible AI interfaces, like chatbots, set clearer user expectations but introduce more ambiguity and management overhead compared to invisible AI (e.g., keyboard optimizations).

  • Prompt engineering plays a crucial role in defining the boundaries of AI output, from very open-ended to highly restricted scopes.

  • Retrieval Augmented Generation (RAG) helps manage up-to-date context by dynamically retrieving relevant data chunks rather than relying on a static corpus.

  • Evaluating AI outputs rigorously is essential but often underprioritized; without clear quality metrics, teams end up relying on subjective or anecdotal assessments.

  • Data ethics and distributed AI implementations can create blind spots, limiting feedback loops necessary for continuous AI model improvement.

  • Incrementally building AI applications with smaller ambiguity footprints helps organizations develop expertise and controls before tackling more complex, open-ended AI products.
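The RAG insight above can be sketched in miniature. This is an illustrative toy, not Microsoft Learn's implementation: the corpus, the bag-of-words retriever, and the function names are all assumptions; a production system would use embeddings, a vector store, and an actual LLM call in place of the final print.

```python
# Toy sketch of Retrieval Augmented Generation (RAG): retrieve only the
# most relevant chunks for a query, then ground the prompt in them,
# instead of stuffing the entire (possibly stale) corpus into context.
from collections import Counter
import math

# Hypothetical document chunks standing in for a real content corpus.
CORPUS = [
    "Microsoft Learn hosts training modules for certification paths.",
    "A learning plan groups modules toward a goal the user shares.",
    "Forum questions work best with context and error details included.",
]

def score(query: str, doc: str) -> float:
    """Cosine similarity between bag-of-words vectors (embedding stand-in)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return overlap / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant chunks, not the whole corpus."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in only the retrieved chunks."""
    context = "\n".join(f"- {chunk}" for chunk in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# In a real system this prompt would be sent to an LLM.
print(build_prompt("How do I make a learning plan for my goal?"))
```

The key move is in `retrieve`: the model's context is assembled per query, so updating the corpus immediately updates what the model can say, with no retraining.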

Notable Quotes

"You’re not doing IA, but you’re always doing it."

"An everything chat bot is almost certainly not how you’re going to build it; realistically you’re building three apps in a trench coat."

"AI is ambiguous at best because we’re fully in the realm of probabilistic rather than deterministic programming."

"The more complex the task, the less likely it is to be successful with current AI."

"A task where AI adds a little something is honestly easier to get right than one where it’s absolutely critical."

"Visible AI interfaces introduce another place where you can add ambiguity."

"Retrieval Augmented Generation lets you supply specific relevant information to the model dynamically rather than everything at once."

"Evaluation might be the most important part of your entire development effort and is often the hardest to do well."

"You can’t just eyeball results and call it good; AI applications are expensive and complex and require systematic evaluation."

"Never build or buy an everything chat bot again; start with less ambiguous, targeted AI experiences."

