Log in or create a free Rosenverse account to watch this video.
Log in Create free account100s of community videos are available to free members. Conference talks are generally available to Gold members.
Summary
Imagine slipping on a sleek pair of smart glasses. Not only do you look sharp, the glasses capture everything you see, hear, and do. Your AI assistant—built into the glasses and synced to your email, social media accounts, health apps, and finances—manages your life. It’s tasked with paying bills, booking trips, replying to messages, even helping you swipe right. Over time, you find yourself chitchatting with your AI assistant. You call him Charlie. Now imagine you’re a threat actor. That trust between user and AI assistant? It’s your entry point. If your product is powered by AI, you’re not just designing features—you’re designing an entire relationship. You’re designing Charlie. Let’s talk about where that goes wrong—and how to get it right.
Key Insights
-
•
Users often do not understand why AI-powered systems request extensive personal data, increasing privacy risks.
-
•
Trust in AI agents can become excessive, creating new vectors for manipulation by threat actors.
-
•
Security issues typically occur beneath the surface until alerts disrupt the user experience, often causing frustration.
-
•
Prompt injection attacks pose a novel threat where malicious inputs manipulate AI agents to access sensitive user data.
-
•
Multimodal AI interfaces introduce complexity in security decisions, increasing chances for user errors.
-
•
Secure by default settings reduce burden on users and improve overall protection without requiring user intervention.
-
•
Cross-disciplinary collaboration between UX, security, product, legal, and compliance teams is crucial for safer AI design.
-
•
Users need clear, contextual guidance during onboarding to make informed decisions about data sharing and security settings.
-
•
Transparency about AI limitations and giving users the option to reverse AI actions are essential for building trust.
-
•
Threat actors are likely to exploit growing AI access to personal data and automate vulnerabilities discovery.
Notable Quotes
"When a product is powered by AI, you're not just designing the features; you are designing an entire relationship."
"Charlie is like the most annoying coworker who constantly surfaces problems but never offers solutions to Alice."
"Threat actors probably know your system better than you do and are looking for any entry points to exploit."
"Alice often perceives Charlie as just another barrage of alerts filled with jargon she doesn't understand."
"Prompt injection attacks can trick AI agents into accessing private data like emails without the user realizing."
"People become incrementally more comfortable giving away data because they see the value AI provides."
"We need secure defaults that protect users out of the box without them having to figure it out."
"Alert fatigue is real; users can't be burdened with constant security decisions or they'll ignore them."
"Giving users the ability to reverse AI-driven actions is critical but currently underexplored."
"If Charlie has been tampered with, Alice needs a clear way to be alerted that she shouldn't trust it."
Or choose a question:
More Videos
"Edgy asks three fundamental questions: Why are we here? What’s the role this enterprise plays in people’s lives? And what do we need to make it work?"
Milan Guenther Benjamin KumpfThe $212 billion ‘so what?’: unlocking impact in development cooperation
November 20, 2025
"People product managers, program managers, and other partners will start to ask for design ops investment because they know it makes their product more successful."
Alnie FigueroaTeamwork: Strategies for Effective Collaboration with Other Program Management Teams
September 8, 2022
"Innovation is invention multiplied by adoption multiplied by inclusion."
Saara Kamppari-Miller Nicole Bergstrom Shashi JainKey Metrics: Comparing Three Letter Acronym Metrics That Include the Word “Key”
November 13, 2024
"This is a topic near and dear to my heart and I feel really long overdue conversation."
Christian CrumlishIntroduction by our Conference Chair
December 6, 2022
"AI has no understanding of consequences — humans are the ones to bring that understanding."
Helen ArmstrongAugment the Human. Interrogate the System.
June 7, 2023
"The Silicon Valley dream is a problematic ideology that normalizes what the correct research looks like and excludes others."
Verónica Urzúa Jorge MontielThe B-side of the Research Impact
March 12, 2021
"Knowing in action is the intuition practiced by competent designers, a core of artistry."
Yoel SumitroActions and Reflections: Bridging the Skills Gap among Researchers
March 9, 2022
"Jazz has those rules of engagement."
Jim KalbachJazz Improvisation as a Model for Team Collaboration
June 4, 2019
"Sometimes you’re just laying the foundational bricks so an idea or process can get off the ground."
Bud Caddell Kristin Skinner Alana WashingtonDesignOps Community Sensing Session
May 13, 2021