Rosenverse

Log in or create a free Rosenverse account to watch this video.

Log in Create free account

100s of community videos are available to free members. Conference talks are generally available to Gold members.

When AI Becomes the User’s Point Person—and Point of Failure
Thursday, August 7, 2025 • Rosenfeld Community
Share the love for this talk
When AI Becomes the User’s Point Person—and Point of Failure
Speakers: Heidi Trost
Link:

Summary

Imagine slipping on a sleek pair of smart glasses. Not only do you look sharp, the glasses capture everything you see, hear, and do. Your AI assistant—built into the glasses and synced to your email, social media accounts, health apps, and finances—manages your life. It’s tasked with paying bills, booking trips, replying to messages, even helping you swipe right. Over time, you find yourself chitchatting with your AI assistant. You call him Charlie. Now imagine you’re a threat actor. That trust between user and AI assistant? It’s your entry point. If your product is powered by AI, you’re not just designing features—you’re designing an entire relationship. You’re designing Charlie. Let’s talk about where that goes wrong—and how to get it right.

Key Insights

  • Users often do not understand why AI-powered systems request extensive personal data, increasing privacy risks.

  • Trust in AI agents can become excessive, creating new vectors for manipulation by threat actors.

  • Security issues typically occur beneath the surface until alerts disrupt the user experience, often causing frustration.

  • Prompt injection attacks pose a novel threat where malicious inputs manipulate AI agents to access sensitive user data.

  • Multimodal AI interfaces introduce complexity in security decisions, increasing chances for user errors.

  • Secure by default settings reduce burden on users and improve overall protection without requiring user intervention.

  • Cross-disciplinary collaboration between UX, security, product, legal, and compliance teams is crucial for safer AI design.

  • Users need clear, contextual guidance during onboarding to make informed decisions about data sharing and security settings.

  • Transparency about AI limitations and giving users the option to reverse AI actions are essential for building trust.

  • Threat actors are likely to exploit growing AI access to personal data and automate vulnerabilities discovery.

Notable Quotes

"When a product is powered by AI, you're not just designing the features; you are designing an entire relationship."

"Charlie is like the most annoying coworker who constantly surfaces problems but never offers solutions to Alice."

"Threat actors probably know your system better than you do and are looking for any entry points to exploit."

"Alice often perceives Charlie as just another barrage of alerts filled with jargon she doesn't understand."

"Prompt injection attacks can trick AI agents into accessing private data like emails without the user realizing."

"People become incrementally more comfortable giving away data because they see the value AI provides."

"We need secure defaults that protect users out of the box without them having to figure it out."

"Alert fatigue is real; users can't be burdened with constant security decisions or they'll ignore them."

"Giving users the ability to reverse AI-driven actions is critical but currently underexplored."

"If Charlie has been tampered with, Alice needs a clear way to be alerted that she shouldn't trust it."

Ask the Rosenbot
Laura Smith
Embedding Service Design and Agile Practice within UK Planning Teams to Create Services that Last
2024 • Advancing Service Design 2024
Gold
Neema Mahdavi
Operationalizing DesignOps
2018 • DesignOps Summit 2018
Gold
Jose Coronado
People First - Design at JP Morgan
2021 • Design at Scale 2021
Gold
Uday Gajendar
10 Years of Enterprise UX: Reflecting on the community and the practice
2025 • Enterprise Community
Deanna Washington
Connecting the Ops: Plenary Panel and Closing Circle
2022 • DesignOps Summit 2022
Gold
Charlotte Vorbeck
Pipeline to Civic Design
2021 • Civic Design 2021
Gold
Gabriela Barneva
Operationalizing Inclusive Design in Design Ops
2025 • DesignOps Summit 2025
Gold
John Calhoun
Have we Reached Our Peak? Spotting the Next Mountain For DesignOps to Climb
2021 • DesignOps Summit 2021
Gold
Jen Briselli
Learning Is The Engine: Designing & Adapting in a World We Can’t Predict
2025 • Rosenfeld Community
Sean McKay
Coexisting with non-researchers: Practical strategies for a democratized research future
2025 • Advancing Research 2025
Gold
Bria Alexander
Opening Remarks
2023 • DesignOps Summit 2023
Gold
Andrea Gallagher
The Problem Space
2019 • Advancing Research Community
Suzan Bednarz
AccessibilityOps for All
2024 • DesignOps Summit 2020
Gold
Chris Geison
What is Research Strategy?
2021 • Advancing Research 2021
Gold
Dave Hoffer
UX Job Search AMA #2 with Joanne Weaver and Dave Hoffer
2025 • Rosenfeld Community
Dr. Jamika D. Burge
A Genuine Conversation about the Future of UX Research
2024 • Advancing Research Community

More Videos

Milan Guenther

"Edgy asks three fundamental questions: Why are we here? What’s the role this enterprise plays in people’s lives? And what do we need to make it work?"

Milan Guenther Benjamin Kumpf

The $212 billion ‘so what?’: unlocking impact in development cooperation

November 20, 2025

Alnie Figueroa

"People product managers, program managers, and other partners will start to ask for design ops investment because they know it makes their product more successful."

Alnie Figueroa

Teamwork: Strategies for Effective Collaboration with Other Program Management Teams

September 8, 2022

Saara Kamppari-Miller

"Innovation is invention multiplied by adoption multiplied by inclusion."

Saara Kamppari-Miller Nicole Bergstrom Shashi Jain

Key Metrics: Comparing Three Letter Acronym Metrics That Include the Word “Key”

November 13, 2024

Christian Crumlish

"This is a topic near and dear to my heart and I feel really long overdue conversation."

Christian Crumlish

Introduction by our Conference Chair

December 6, 2022

Helen Armstrong

"AI has no understanding of consequences — humans are the ones to bring that understanding."

Helen Armstrong

Augment the Human. Interrogate the System.

June 7, 2023

Verónica Urzúa

"The Silicon Valley dream is a problematic ideology that normalizes what the correct research looks like and excludes others."

Verónica Urzúa Jorge Montiel

The B-side of the Research Impact

March 12, 2021

Yoel Sumitro

"Knowing in action is the intuition practiced by competent designers, a core of artistry."

Yoel Sumitro

Actions and Reflections: Bridging the Skills Gap among Researchers

March 9, 2022

Jim Kalbach

"Jazz has those rules of engagement."

Jim Kalbach

Jazz Improvisation as a Model for Team Collaboration

June 4, 2019

Bud Caddell

"Sometimes you’re just laying the foundational bricks so an idea or process can get off the ground."

Bud Caddell Kristin Skinner Alana Washington

DesignOps Community Sensing Session

May 13, 2021