Summary
AI tools like ChatGPT have exploded in popularity with good reason: they allow users to draft, summarize, and edit content with unprecedented speed. While these generic tools can generate any type of content or perform any content task, the user needs to craft an effective prompt to get high-quality output, and often needs to exchange multiple messages with additional guidance and requirements to improve results.

When you’re building an AI-powered text generation feature, such as a product description or email writer, you typically can’t expect users to craft their own prompts. And unless you’re building a chat interface, you’re unlikely to offer the ability to iteratively improve the output. Instead, your feature needs a robust prompt skeleton that combines with user input to produce high-quality output in a single response.

For the designer, this means building an interface that helps users provide the exact information that creates a successful prompt. This process is more complex than simple form design or a mad-lib prompt completion tool. The user input, often including free-form text fields, might be required to fill in prompt variables, but it also could change the prompt structure itself, or even override base instructions. The effectiveness of the user input significantly influences the quality of the output, underscoring the need for designers to be deeply familiar with the backend prompt architecture so they can design the frontend.

Drawing on recent text generation projects, I’ll demonstrate how the interface design can respond to and evolve with the prompt architecture. I’ll talk about how to determine which prompt components to make invisible to the user, which to provide as predefined options, and which should be authored by the user in free-form text fields.
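The idea of a fixed prompt skeleton combined with user input can be sketched in a few lines. This is an illustrative example, not the talk's actual implementation; the instruction text, function names, and fields are hypothetical.

```python
# A minimal sketch of a prompt skeleton for a product-description feature.
# The base instructions are fixed and invisible to the user; only the
# product details and tone come from the interface.

BASE_INSTRUCTIONS = (
    "You are a copywriter for an e-commerce store. "
    "Write a product description in a single response. "
    "Do not ask follow-up questions."
)

def build_prompt(product_name: str, features: list[str], tone: str) -> str:
    """Combine the fixed skeleton with user-supplied variables."""
    feature_lines = "\n".join(f"- {f}" for f in features)
    return (
        f"{BASE_INSTRUCTIONS}\n\n"
        f"Tone of voice: {tone}\n"
        f"Product: {product_name}\n"
        f"Key features:\n{feature_lines}"
    )

prompt = build_prompt("Trail Runner 2", ["waterproof", "lightweight"], "playful")
```

The point of the sketch is the division of labor: the base instructions guarantee a single-shot, on-task response, while the interface only has to collect the variables that slot into it.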
Takeaways
- How prompt structure can impact user interface design and, conversely, how design can impact prompt structure
- Techniques to provide effective user guidance within AI generation contexts to ensure consistently high-quality output
- Real-world examples and learnings from recent generative AI projects in an e-commerce software product
Key Insights
- Building specialized AI features requires predefining most of the prompt to guide the LLM effectively rather than having users write their own prompts.
- Users refine prompts iteratively in chat interfaces, but specialized tools often allow only one shot with limited input variables.
- Understanding the backend prompt structure is critical for designers to decide which parts are fixed and which are configurable by users.
- Tone of voice is a complex variable that significantly impacts product description differentiation and brand personality.
- Free-text inputs for tone proved difficult for users; predefined tone options simplify user choices while maintaining quality.
- AI models inconsistently detect tone from existing text samples and report high confidence in contradictory results, making this approach unreliable.
- Successful tone options must be sufficiently distinct to allow users to easily identify and select the best fit for their brand.
- Embedding detailed tone attribute descriptions (e.g., vocabulary, pronouns, punctuation) in prompts improves AI output quality.
- The default confident and positive tone of models like ChatGPT may not suit all contexts and needs to be explicitly adjusted in prompts.
- Providing a minimal, intuitive UI that maps to rich, complex backend instructions balances user simplicity with output quality.
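The last insight, a minimal UI mapping to rich backend instructions, can be sketched with a tone dropdown. The tone names and attribute descriptions below are hypothetical, not the ones used in the talk's product.

```python
# Hypothetical mapping from a single dropdown value to a richer set of
# tone instructions embedded in the prompt. The user sees one field;
# the prompt receives detailed guidance on vocabulary, pronouns, and
# punctuation.

TONE_DETAILS = {
    "playful": (
        "Use informal vocabulary and contractions, address the reader "
        "as 'you', and allow occasional exclamation points."
    ),
    "professional": (
        "Use precise vocabulary, avoid slang and exclamation points, "
        "and prefer third-person phrasing."
    ),
}

def tone_instructions(selected_tone: str) -> str:
    """Expand a dropdown selection into detailed prompt instructions."""
    # Fall back to a neutral instruction if the tone is unknown.
    detail = TONE_DETAILS.get(selected_tone, "Use a neutral, clear tone.")
    return f"Tone of voice: {selected_tone}. {detail}"
```

Because each dropdown value expands to a distinct attribute description, the options stay easy to compare in the UI while the model receives enough detail to differentiate the output.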
Notable Quotes
"In a chat interface, you’re able to fill in the information that you missed and iterate on the output as it’s being generated."
"When you’re building a specific AI feature, you don’t want to make your user write their own prompt."
"It becomes the designer’s responsibility to figure out what aspects of the instructions to the LLM should be configurable and what should be fixed."
"A prompt is just a set of instructions to an AI in the context of text generation."
"People get stuck during the sessions when asked to describe their brand voice in just one or two words."
"The LLM gave many different answers for the same passage, each time reporting it was one hundred percent confident."
"Tones need to be distinct enough so that merchants can spot the tone that best fits their brand."
"Even though we’ve got a single dropdown field, it maps to a far richer set of instructions in the prompt."
"The confident tone built into these tools may be one of the more problematic features of Gen AI."
"By default, you’re getting I am very confident of my answers."