
AI in Real Life: Using LLMs to Turbocharge Microsoft Learn
Thursday, February 13, 2025 • Rosenfeld Community
Speakers: Sarah Barrett

Summary

Enthusiasm for AI tools, especially large language models like ChatGPT, is everywhere, but what does it actually look like to deliver large-scale, user-facing experiences with these tools in a production environment? Clearly they're powerful, but what do they need in order to work reliably and at scale? In this session, Sarah offers a perspective on the information architecture and user experience infrastructure organizations need to leverage AI effectively. She also shares three AI experiences currently live on Microsoft Learn:

  • An interactive assistant that helps users post high-quality questions to a community forum

  • A tool that dynamically creates learning plans based on goals the user shares

  • A training assistant that clarifies, defines, and guides learners while they study

Through lessons learned from shipping these experiences over the last two years, UXers, IAs, and PMs will come away with a better sense of what they might need to make these hyped-up technologies work in real life.

Key Insights

  • Most AI applications no longer require building foundation models from scratch; the focus is now on application development and integration.

  • Single, all-purpose chatbots (everything chatbots) are insufficient because they handle high ambiguity and diverse, often complex tasks poorly.

  • Sarah introduces the ambiguity footprint as a framework to measure AI application complexity and risks across several axes such as task complexity, context, interface, prompt openness, and sensitivity.

  • AI features that support simple, complementary user tasks, rather than critical or complex ones, are easier and safer to build and scale.

  • Visible AI interfaces, like chatbots, set clearer user expectations but introduce more ambiguity and management overhead compared to invisible AI (e.g., keyboard optimizations).

  • Prompt engineering plays a crucial role in defining the boundaries of AI output, from very open-ended to highly restricted scopes.

  • Retrieval Augmented Generation (RAG) helps manage up-to-date context by dynamically querying relevant data chunks rather than relying on a static corpus (a minimal sketch follows this list).

  • Evaluating AI outputs rigorously is essential but often underprioritized; without clear quality metrics, teams end up relying on subjective or anecdotal assessments (a second sketch after this list illustrates a basic automated check).

  • Data ethics considerations and distributed AI implementations can create blind spots that limit the feedback loops necessary for continuous model improvement.

  • Incrementally building AI applications with smaller ambiguity footprints helps organizations develop expertise and controls before tackling more complex, open-ended AI products.
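
The talk describes these mechanics at the architecture level; the bullet on Retrieval Augmented Generation is the easiest to make concrete. The sketch below is a hypothetical, minimal Python illustration, not the Microsoft Learn implementation: it ranks a few documentation chunks against a user question using simple keyword overlap (a production system would use embeddings and a search index), keeps the top matches, and folds them into a scoped prompt, which is also where the prompt-openness axis of the ambiguity footprint shows up. The `call_llm` function is a stand-in for whatever model API is actually used.

```python
# Minimal RAG sketch: retrieve the most relevant chunks for a question,
# then constrain the model's answer to that retrieved context.
# Hypothetical example; not the Microsoft Learn implementation.

def score(question: str, chunk: str) -> int:
    """Toy relevance score: count shared keywords.
    A production system would use embeddings plus a vector or search index."""
    q_terms = set(question.lower().split())
    c_terms = set(chunk.lower().split())
    return len(q_terms & c_terms)

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k chunks instead of stuffing the whole corpus into the prompt."""
    ranked = sorted(corpus, key=lambda chunk: score(question, chunk), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context_chunks: list[str]) -> str:
    """Assemble a scoped prompt: the model is told to answer only from the supplied context."""
    context = "\n\n".join(context_chunks)
    return (
        "Answer the question using ONLY the context below. "
        "If the context is not sufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (for example, a chat-completion API)."""
    return f"[model response to a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    corpus = [
        "Microsoft Learn training modules are organized into learning paths.",
        "A learning plan groups modules around a goal the learner specifies.",
        "Community forum questions should include the error message and steps tried.",
    ]
    question = "How do I build a learning plan around my goal?"
    chunks = retrieve(question, corpus)
    print(call_llm(build_prompt(question, chunks)))
```

Because only the top-ranked chunks travel with each request, the context stays current with the underlying documentation instead of being frozen into the prompt.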
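
The evaluation bullet can be illustrated the same way. The sketch below is again hypothetical: a tiny evaluation set of questions with facts each answer must mention, scored automatically so the team tracks a pass rate across prompt or model changes rather than eyeballing outputs. `answer_question` stands in for the deployed feature; real evaluation typically adds human review or model-graded rubrics on top of checks like this.

```python
# Minimal evaluation sketch: score model answers against expected key facts
# so quality is tracked with a number rather than anecdotes.
# Hypothetical example; `answer_question` stands in for the real application.

EVAL_SET = [
    {
        "question": "What should a good forum question include?",
        "must_mention": ["error message", "steps"],
    },
    {
        "question": "What is a learning plan built from?",
        "must_mention": ["modules", "goal"],
    },
]

def answer_question(question: str) -> str:
    """Placeholder for the deployed AI feature being evaluated."""
    return "Include the error message and the steps you tried."

def passes(answer: str, must_mention: list[str]) -> bool:
    """A case passes only if every expected fact appears in the answer."""
    answer_lower = answer.lower()
    return all(fact in answer_lower for fact in must_mention)

def run_eval() -> float:
    """Return the pass rate across the evaluation set."""
    results = [passes(answer_question(case["question"]), case["must_mention"])
               for case in EVAL_SET]
    return sum(results) / len(results)

if __name__ == "__main__":
    print(f"pass rate: {run_eval():.0%}")
```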

Notable Quotes

"You’re not doing IA, but you’re always doing it."

"An everything chat bot is almost certainly not how you’re going to build it; realistically you’re building three apps in a trench coat."

"AI is ambiguous at best because we’re fully in the realm of probabilistic rather than deterministic programming."

"The more complex the task, the less likely it is to be successful with current AI."

"A task where AI adds a little something is honestly easier to get right than one where it’s absolutely critical."

"Visible AI interfaces introduce another place where you can add ambiguity."

"Retrieval Augmented Generation lets you supply specific relevant information to the model dynamically rather than everything at once."

"Evaluation might be the most important part of your entire development effort and is often the hardest to do well."

"You can’t just eyeball results and call it good; AI applications are expensive and complex and require systematic evaluation."

"Never build or buy an everything chat bot again; start with less ambiguous, targeted AI experiences."

Lisa Gironda
Opener: Chief of Staff–An unexpected journey
2024 • DesignOps Summit 2020
Gold
Gillian Salerno-Rebic
Redefining Speed and Scale: How Accenture’s GrowthOS Uses AI-Simulated Insights to Reduce Risk and Accelerate Innovation
2025 • Designing with AI 2025
Gold
Tatyana Mamut
Opening Keynote: Breaking Conway's Law--or How to Work Differently and Not Ship Your Org Chart
2019 • Enterprise Experience 2019
Gold
Jennifer Strickland
Adopting a "Design By" Method
2021 • Civic Design 2021
Gold
Peter Van Dijck
Designing AI-first products on top of a rapidly evolving technology
2025 • Designing with AI 2025
Gold
Sheryl Cababa
Expanding Your Design Lens with Systems Thinking
2023 • Enterprise Community
Erika Flowers
AI-Readiness: Preparing NASA for a Data-Driven, Agile Future
2025 • Designing with AI 2025
Gold
Jacqui Frey
Flow and Superfluidity for Design Orgs
2018 • DesignOps Summit 2018
Gold
Etienne Fang
The Power of Care: From Human-Centered Research to Humanity-Centered Leadership
2021 • Advancing Research 2021
Gold
Taylor Jennings
Repository Retrospective: Learnings from Introducing a Central Place for UX Research
2022 • Advancing Research 2022
Gold
Jim Kalbach
Jazz Improvisation as a Model for Team Collaboration
2019 • Enterprise Experience 2019
Gold
Sohit Karol
Designing Delightful Listening Experiences: Mixed Methods Research in the Age of Machine Learning
2020 • Advancing Research 2020
Gold
Onur Kocan
Understanding the Strategy for Civic Design in a Complex City: Istanbul
2022 • Civic Design 2022
Gold
Chris Geison
What is Research Strategy?: A Panel of Research Leaders Discuss this Emergent Question
2021 • Advancing Research Community
Steve Baty
Breaking Out of Ruts: Tips for Overcoming the Fear of Change
2016 • Enterprise UX 2016
Gold
Ovetta Sampson
Research in the Automated Future
2022 • Advancing Research 2022
Gold

More Videos

Yoel Sumitro

"Knowing in action is the intuition practiced by competent designers, a core of artistry."

Yoel Sumitro

Actions and Reflections: Bridging the Skills Gap among Researchers

March 9, 2022

Zariah Cameron

"Share everything, own nothing but credit everyone."

Zariah Cameron

ReDesigning Wellbeing for Equitable Care in the Workplace

September 23, 2024

Doug Powell

"This is a just-do-it moment. Ask for forgiveness, not permission."

Doug Powell

DesignOps and the Next Frontier: Leading Through Unpredictable Change

September 11, 2025

Darian Davis

"A common toxic behavior is glory seeking, like presenting work as your own when it was a team effort."

Darian Davis

Lessons from a Toxic Work Relationship

January 8, 2024

Mike Oren

"New ideas and changes to business trajectories rank pretty far down at only 18% of business leaders seeing it as a value of design."

Mike Oren

Why Pharmaceutical's Research Model Should Replace Design Thinking

March 28, 2023

Jacqui Frey

"System thinking is the cornerstone competency of design operations."

Jacqui Frey

Scale is Social Work

March 19, 2020

Kim Holt, Emma Wylds, Pearl Koppenhaver, Maisee Xiong

"We do this with intention of fostering a more inclusive experience for audience members who might be sight impaired."

Kim Holt Emma Wylds Pearl Koppenhaver Maisee Xiong

A Salesforce Panel Discussion on Values-Driven DesignOps

September 8, 2022

Samuel Proulx

"If we think disability as a mismatch between user and environment rather than a medical problem, it becomes easier to understand how accessibility benefits everyone."

Samuel Proulx

From Standards to Innovation: Why Inclusive Design Wins

September 10, 2025

Laine Riley Prokay

"The competencies allow managers to celebrate individualism while maintaining consistency across teams."

Laine Riley Prokay

How DesignOps can Drive Inclusive Career Ladders for All

September 30, 2021