Summary
Are you building or using AI in your products? Do you have customers in the EU, or plan to? If so, the EU AI Act isn’t just another piece of legislation. It’s a blueprint for the future of digital product design and a signal of where responsible, human-centered technology is headed. As the world’s first comprehensive AI law, the Act sets new expectations for trust, transparency, and accountability. And it doesn’t matter whether your team is in San Francisco, Singapore, or Sydney: if you build or use AI, this shift affects you.

Let’s stop treating AI like a race and start seeing it as a design challenge: how to build products that people trust, adopt, and keep using. From chatbots to recommendation engines, every AI-powered experience will soon need to meet standards for safety, explainability, and fairness. But this isn’t only about compliance. It’s about turning responsibility into a competitive edge, building systems that last, and products that lead.

In this session, we’ll unpack what the EU AI Act means for design, product, and operations leaders, in plain language, no legal jargon required. You’ll walk away understanding:

- How AI systems are classified and why “high risk” might apply to your product even if you’re not in the EU
- Why transparency, documentation, and human oversight are becoming core design principles
- How acting early can strengthen user trust, reduce risk, and set your team apart
- How design and product roles are evolving and why that shift is an opportunity, not a threat

This isn’t about avoiding fines. It’s about leading with integrity, designing technology people believe in, and using regulation as a launchpad for innovation. Whether you’re a designer, product manager, or operations leader, this session will help you understand what’s changing, take action now, and turn the EU AI Act into your next innovation catalyst.
Key Insights
- The EU AI Act introduces penalties of up to 35 million euros or 7% of global turnover for non-compliance, affecting any company serving European customers.
- AI is agency with impact, not just a tool; it can make decisions that dramatically affect human lives.
- Transparency and accountability are core to the AI Act, requiring detailed documentation and auditable logs of all AI decisions, preserved for up to 10 years.
- High-risk AI systems, such as those used in healthcare or hiring, require bias testing, human oversight, and clear escalation paths.
- Many current AI systems cannot be retrofitted to comply, making early compliance and documentation essential.
- Chatbots are generally low or limited risk but become high risk if their decisions significantly impact humans, such as denying insurance claims.
- Synthetic content generated or altered by AI must be persistently labeled as such to maintain transparency for users and auditors.
- Human oversight must prevail in AI interactions; humans retain accountability and must be able to override AI decisions.
- Designers need to create flows that embed human control, transparency, and accountability, moving beyond traditional user flows to “agent flows” that manage AI interactions.
- Organizations should map their AI systems, classify risk levels, assign responsibility, and prioritize compliance as part of their product strategies.
Notable Quotes
"The AI Act can spark better innovation with AI rather than constraining it."
"Speed is not everything we need velocity—speed plus direction is key."
"AI is not a tool. AI has agency. AI makes decisions that impact people's lives."
"Machines cannot be accountable; responsibility is always human."
"If you are building or deploying AI that impacts people, you need to document everything."
"Transparency requires all decisions to be clear and understandable to non-technical users."
"Humans need to override and dominate AI because machines have no context or experience."
"Most AI systems today cannot be retrofitted easily to comply with regulation."
"Everything touched by AI becomes synthetic and needs persistent labeling."
"Real innovation is not being first; it's about doing it right and sustainably."