Summary
AI tools like ChatGPT have exploded in popularity with good reason: they allow users to draft, summarize, and edit content with unprecedented speed. While these generic tools can generate any type of content or perform any type of content task, the user needs to craft an effective prompt to get high-quality output, and often needs to exchange multiple messages with additional guidance and requirements in order to improve results. When you’re building an AI-powered text generation feature, such as a product description or email writer, you typically can’t expect users to craft their own prompts. And unless you’re building a chat interface, you’re unlikely to offer the ability to iteratively improve the output. Instead, your feature needs a robust prompt skeleton that combines with user input to produce high-quality output in a single response. For the designer, this means building an interface that helps users provide the exact information that creates a successful prompt. This process is more complex than simple form design or a mad-lib prompt completion tool. The user input, often including free form text fields, might be required to fill in prompt variables, but it also could change the prompt structure itself, or even override base instructions. The effectiveness of the user input significantly influences the quality of the output, underscoring the need for designers to be deeply familiar with the backend prompt architecture so they can design the frontend. Drawing on recent text generation projects, I'll demonstrate how the interface design can respond to and evolve with the prompt architecture. I’ll talk about how to determine which prompt components to make invisible to the user, which to provide as predefined options, and which should be authored by the user in free-form text fields. Takeaways How prompt structure can impact user interface design and conversely, how design can impact prompt structure Techniques to provide effective user guidance within AI generation contexts to ensure consistently high-quality output Real-world examples and learnings from recent generative AI projects in an e-commerce software product
Key Insights
• Building specialized AI features requires predefining most of the prompt to guide the LLM effectively rather than having users write their own prompts.
• Users refine prompts iteratively in chat interfaces, but specialized tools often allow only one shot with limited input variables.
• Understanding the backend prompt structure is critical for designers to decide which parts are fixed and which are configurable by users.
• Tone of voice is a complex variable that significantly impacts product description differentiation and brand personality.
• Free-text inputs for tone proved difficult for users; predefined tone options simplify user choices while maintaining quality.
• AI models inconsistently detect tone from existing text samples and report high confidence in contradictory results, making this approach unreliable.
• Successful tone options must be sufficiently distinct to allow users to easily identify and select the best fit for their brand.
• Embedding detailed tone attribute descriptions (e.g., vocabulary, pronouns, punctuation) in prompts improves AI output quality.
• The default confident and positive tone of models like ChatGPT may not suit all contexts and needs to be explicitly adjusted in prompts.
• Providing a minimal, intuitive UI that maps to rich, complex backend instructions balances user simplicity with output quality (see the sketch after this list).
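As a rough illustration of how a single tone dropdown can map to a far richer set of prompt instructions, here is a minimal sketch. The tone names and the vocabulary, pronoun, and punctuation descriptions are hypothetical examples, not the actual options from the project described in the talk.

```python
# A minimal sketch of expanding one dropdown value into detailed tone
# instructions (vocabulary, pronouns, punctuation) embedded in the prompt.
# Tone names and attribute wording are hypothetical examples.

TONE_LIBRARY = {
    "Playful": (
        "Vocabulary: casual, everyday words; light wordplay is welcome. "
        "Pronouns: address the reader as 'you' and refer to the brand as 'we'. "
        "Punctuation: contractions are fine; at most one exclamation mark."
    ),
    "Luxurious": (
        "Vocabulary: refined, sensory words; avoid slang. "
        "Pronouns: keep a slight distance; refer to the brand by name, not 'we'. "
        "Punctuation: full sentences, no exclamation marks, no emoji."
    ),
    "Matter-of-fact": (
        "Vocabulary: plain, precise, specification-like language. "
        "Pronouns: minimal; focus on the product, not the reader. "
        "Punctuation: short declarative sentences; avoid superlatives."
    ),
}

def tone_instructions(selected_tone: str) -> str:
    """Expand the single value chosen in the UI into detailed prompt instructions."""
    return f"Write in a {selected_tone.lower()} tone. {TONE_LIBRARY[selected_tone]}"

# The UI shows only the dropdown value; the prompt receives the full description.
print(tone_instructions("Luxurious"))
```

The design choice here is that the user sees one simple, distinct option per tone, while the prompt carries the detailed attributes the model needs to produce consistently differentiated output.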
Notable Quotes
"In a chat interface, you’re able to fill in the information that you missed and iterate on the output as it’s being generated."
"When you’re building a specific AI feature, you don’t want to make your user write their own prompt."
"It becomes the designer’s responsibility to figure out what aspects of the instructions to the LLM should be configurable and what should be fixed."
"A prompt is just a set of instructions to an AI in the context of text generation."
"People get stuck during the sessions when asked to describe their brand voice in just one or two words."
"The LLM gave many different answers for the same passage, each time reporting it was one hundred percent confident."
"Tones need to be distinct enough so that merchants can spot the tone that best fits their brand."
"Even though we’ve got a single dropdown field, it maps to a far richer set of instructions in the prompt."
"The confident tone built into these tools may be one of the more problematic features of Gen AI."
"By default, you’re getting I am very confident of my answers."